[JAVA-2249] Number of connections over max Created: 15/Jul/16  Updated: 20/Mar/19  Resolved: 26/Nov/18

Status: Closed
Project: Java Driver
Component/s: Connection Management
Affects Version/s: 3.0.4
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Ricardo Ferreira Assignee: Unassigned
Resolution: Cannot Reproduce Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File mongoConnectionPool.png    

 Description   

The checkedOutCount is bigger than the configured maximum number of connections.

The number of used connections never goes below 532, even when the system is not in use.



 Comments   
Comment by Ian Whalen (Inactive) [ 26/Nov/18 ]

Hey all, unfortunately we've been unable to reproduce this so far, so we're resolving as Cannot Reproduce. Please do comment with any additional info or a reproducer if you are able to, and we will reopen and continue working on it.

Comment by Oleg Rekutin [ 09/May/17 ]

Still noticing this with 3.3.0. Seeing CheckedOutCount exceed current pool size.

Comment by Jeffrey Yemin [ 19/Oct/16 ]

Thanks for letting us know that 3.3.0 does not exhibit the issue reported here. I'm going to close this now but will re-open if it crops up again for you or anyone else.

Comment by Ricardo Ferreira [ 19/Oct/16 ]

Hi,

We updated to Mongo Driver 3.3.0 and it seems the problem is solved.
Our production environment has been running since early September (about 1.5 months) and the number of connections/threads is stable. The number of connections increases and decreases according to the load on the system.

Thanks for the follow-up,
Ricardo

Comment by Ross Lawley [ 10/Aug/16 ]

Hi rjferreira,

I agree with Jeff's comment that:

It's quite possible that this is an accounting bug with the JMX integration, and not a bug in the connection pooling itself.

My working theory is that the size statistic for the connection pool is correct, but the checkedOutCount is incorrect because some code path is bypassing the check-in of those connections. This may take some time to debug, but it can be investigated to see if the theory holds true.
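One way to test this theory is to read both statistics from the driver's JMX MBeans and watch for the moment they diverge. A minimal sketch follows; the org.mongodb.driver domain and the Size / CheckedOutCount attribute names are assumptions based on the 3.x JMX integration, so adjust them to whatever your MBean browser actually shows:

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class PoolStatsDump {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Assumed pattern: the driver registers one ConnectionPool MBean per
        // connected server under the org.mongodb.driver domain.
        Set<ObjectName> names = server.queryNames(
                new ObjectName("org.mongodb.driver:type=ConnectionPool,*"), null);
        for (ObjectName name : names) {
            int size = (Integer) server.getAttribute(name, "Size");
            int checkedOut = (Integer) server.getAttribute(name, "CheckedOutCount");
            // If the accounting theory holds, checkedOut can exceed size here
            // even though the real pool never exceeds its configured maximum.
            System.out.printf("%s size=%d checkedOut=%d%n", name, size, checkedOut);
        }
    }
}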

Regarding your latest comment:

After a period of time the number of connections starts to grow.
We have about 300 connections and about 280 threads like this

How are you measuring those connections / threads in the waiting state? You mentioned changes to the replica set previously; have you observed any correlation between the health of the replica set and the number of connections?

There have been fixes to the connection pool since 3.0.4, for example JAVA-2238. It may not apply to your scenario; however, I would be interested to know if you can replicate the issue on the latest version of the driver, 3.3.0.

Ross

Comment by Ricardo Ferreira [ 09/Aug/16 ]

We restarted our server in production and the results are the same.
After a period of time the number of connections starts to grow.

We have about 300 connections and about 280 threads like this:

cluster-ClusterId{value='57a28429d9880974901bf3c2', description='null'}-10.162.224.245:27017
State: TIMED_WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@6abd6448
Total blocked: 1.039  Total waited: 31.666
 
Stack trace: 
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForSignalOrTimeout(DefaultServerMonitor.java:237)
com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.waitForNext(DefaultServerMonitor.java:218)
com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:167)
   - locked com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable@2dd342fc
java.lang.Thread.run(Thread.java:745)

Do you have any idea how to solve this problem?
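
As a way to quantify the threads described above, here is a minimal sketch (not from the original report) that counts live threads matching the cluster-ClusterId naming seen in the stack trace:

public class MonitorThreadCount {
    public static void main(String[] args) {
        // Count live threads whose names carry the driver's monitor-thread
        // prefix (the "cluster-ClusterId{...}" naming from the dump above).
        long monitors = Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith("cluster-ClusterId"))
                .count();
        System.out.println("driver cluster threads: " + monitors);
        // The driver starts one monitor thread per server per MongoClient, so
        // a count in the hundreds may indicate MongoClient instances that are
        // created but never closed.
    }
}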

Comment by Ricardo Ferreira [ 20/Jul/16 ]

Let me give you some tips.

  • Have you tried the scenario in which there is a re-election and the secondary becomes primary? (A minimal sketch for forcing this follows the list.)
  • Have you tested a network partition? Imagine the client sees all the members, but an election happens because the primary wasn't able to see the other members.
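
For the first scenario, a re-election can be forced with the replSetStepDown admin command. A minimal sketch, assuming it is run against the current primary (the host and timeout here are illustrative):

import com.mongodb.MongoClient;
import com.mongodb.MongoException;
import org.bson.Document;

public class ForceStepDown {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("10.162.224.242", 27017);
        try {
            // Ask the primary to step down for 60 seconds, triggering an
            // election among the remaining members.
            client.getDatabase("admin")
                    .runCommand(new Document("replSetStepDown", 60));
        } catch (MongoException e) {
            // The primary closes its connections while stepping down, so an
            // exception here is expected rather than a failure.
            System.out.println("step-down initiated: " + e.getMessage());
        } finally {
            client.close();
        }
    }
}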

We don't know exactly what happened, but we suspect we had problems in the replica set.

Comment by Jeffrey Yemin [ 18/Jul/16 ]

This will need more research, as I'm not able to reproduce this with a simple test program:

       
import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolStressTest {
    public static void main(String[] args) throws InterruptedException {
        // Default options: localhost:27017, connectionsPerHost = 100
        MongoClient client = new MongoClient();
        MongoDatabase database = client.getDatabase("admin");
        Document ping = new Document("ping", 1);

        // Far more threads than the pool maximum, to stress check-out/check-in
        ExecutorService service = Executors.newFixedThreadPool(200);

        for (int i = 0; i < 200; i++) {
            service.submit((Runnable) () -> {
                for (;;) {
                    database.runCommand(ping);
                }
            });
        }

        Thread.sleep(Long.MAX_VALUE);
    }
}

Comment by Ricardo Ferreira [ 15/Jul/16 ]

We have MongoDB in a replica set with three members (primary, secondary, arbiter).
The application is a multi-threaded server.
We didn't find any errors in the logs.

Here is the MongoClient creation:

List<ServerAddress> seeds = new ArrayList<>();
seeds.add(new ServerAddress("10.162.224.242", 27017));
seeds.add(new ServerAddress("10.162.224.243", 27017));
MongoClient mongoClient = new MongoClient(seeds, new LinkedList<com.mongodb.MongoCredential>(),
        new MongoClientOptions.Builder().build());
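
Since the options above are all defaults, the intended pool maximum is the driver's default connectionsPerHost of 100. A sketch of an equivalent construction with the limit spelled out (the value is illustrative, not a recommendation):

// Reuses the seeds list from the snippet above.
MongoClientOptions options = MongoClientOptions.builder()
        .connectionsPerHost(100)  // the 3.x default; the pool should never exceed this
        .build();
MongoClient mongoClient = new MongoClient(seeds, options);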

Here is the connection status on the primary:

replicaset:PRIMARY> db.serverStatus()["connections"]
{
        "current" : 802,
        "available" : 208913,
        "totalCreated" : NumberLong(713603)
}

Comment by Jeffrey Yemin [ 15/Jul/16 ]

Hi Ricardo,

Thanks for the report. To help us reproduce this:

  • please supply any connection pool-related options that the application uses to construct its MongoClient.
  • please supply any relevant information about the application, e.g.
    • is it connected to a replica set or to a sharded cluster
    • are there any exceptions being thrown by the driver
    • is it multi-threaded or single-threaded

Do you see any evidence that there are actually 532 connections open to the server? It's quite possible that this is an accounting bug with the JMX integration, and not a bug in the connection pooling itself. You can track the number of connections open to the server in the shell like this:

db.serverStatus()["connections"]
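
The same counters can also be read through the driver from inside the application. A minimal sketch, assuming one of the hosts from the report:

import com.mongodb.MongoClient;
import org.bson.Document;

public class ConnectionCounters {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("10.162.224.242", 27017);
        // Same document as the shell output: { current, available, totalCreated }
        Document status = client.getDatabase("admin")
                .runCommand(new Document("serverStatus", 1));
        System.out.println(status.get("connections"));
        client.close();
    }
}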
