[JAVA-2123] Spurious exception logged while ensuring minimum pool size Created: 25/Feb/16 Updated: 19/Oct/16 Resolved: 18/Mar/16
| Status: | Closed |
| Project: | Java Driver |
| Component/s: | Configuration |
| Affects Version/s: | 3.2.0 |
| Fix Version/s: | 3.3.0 |
| Type: | Bug | Priority: | Minor - P4 |
| Reporter: | Alex Paransky | Assignee: | Jeffrey Yemin |
| Resolution: | Done | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Description |
I got the following error when I updated to Java MongoDB driver 3.2.2:

Here is how I create MongoClientOptions:

If I comment out:

The exception goes away. Tested to work without issues on: But switching to 3.2.1 or 3.2.2 causes the exception to be raised.
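The reporter's actual snippet was not captured in this export. Purely as an illustration (assuming the 3.x `MongoClientOptions` builder API; the host name `serverInAWS` is borrowed from the discussion below), a setup matching the description might look like:

```java
import java.util.Arrays;

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class ClientSetup {
    public static void main(String[] args) {
        // Illustrative 3.x configuration, not the reporter's exact snippet
        // (the original was lost in this export). Removing the
        // minConnectionsPerHost line is the change the reporter describes
        // as making the warning disappear.
        MongoClientOptions options = MongoClientOptions.builder()
                .minConnectionsPerHost(100) // pre-populate the pool in the background
                .connectionsPerHost(100)
                .build();
        MongoClient client = new MongoClient(
                Arrays.asList(new ServerAddress("serverInAWS", 27017)), options);
        client.close();
    }
}
```

Running this requires the mongodb-driver jar and a reachable server, so it is a configuration sketch rather than a self-verifying example.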
| Comments |
| Comment by Githook User [ 18/Mar/16 ] |

Author: Jeff Yemin (jyemin, jeff.yemin@10gen.com)
Message:
| Comment by Alex Paransky [ 25/Feb/16 ] |

Perhaps, in this instance, the log could even be changed from WARN to DEBUG, if in fact these types of exceptions are benign. Thanks for the explanation.
| Comment by Jeffrey Yemin [ 25/Feb/16 ] |

No, it won't break the population of the pool. What's happening here is this:

What I propose to do is add a more specific catch clause for MongoInterruptedException and not log that particular exception. I'll also consider delaying the start of the background task until the driver is sure that this is the server that it actually wants to be connected to.
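The proposed fix follows a common pattern: catch the specific interrupted-exception type before the general one and skip the warning for it. A stand-alone sketch of that pattern (the exception class and "maintenance task" here are stand-ins, not the driver's actual internals):

```java
// Sketch of the proposed logging change: a specific catch clause for the
// interrupted case swallows it quietly, while other failures still warn.
// DummyInterruptedException stands in for com.mongodb.MongoInterruptedException.
public class PoolMaintenanceSketch {
    static class DummyInterruptedException extends RuntimeException {}

    // Returns the log message produced, or null when nothing is logged.
    static String runMaintenance(Runnable task) {
        try {
            task.run();
            return null;
        } catch (DummyInterruptedException e) {
            // Benign: the driver itself interrupted this background thread
            // while closing a connection it no longer wants. Don't log.
            return null;
        } catch (RuntimeException e) {
            return "WARN: exception during connection pool background maintenance";
        }
    }

    public static void main(String[] args) {
        // Interrupted case: suppressed.
        System.out.println(runMaintenance(() -> { throw new DummyInterruptedException(); }));
        // Any other failure: still logged at WARN.
        System.out.println(runMaintenance(() -> { throw new IllegalStateException("boom"); }));
    }
}
```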
| Comment by Alex Paransky [ 25/Feb/16 ] |

The real question is: is the pool really pre-initialized with the number of connections that I expect when the exception occurs? In other words, if I set connectionsPerHost = minConnectionsPerHost = 100, will the pool contain 100 connections after being initialized, despite this exception? If it does, then I agree: minor, and disable the log in the future, since there is no "harm". On the other hand, if this exception breaks the population of the pool such that it does not contain the 100 connections requested by the MongoClientOptions, perhaps a different solution is needed.
| Comment by Jeffrey Yemin [ 25/Feb/16 ] |

You can use DNS for the names you see in the seed list, but the driver will re-connect to all the members using the server names reported by the ismaster command. The reason this exception is generated and logged is a bit complicated, but I'm now convinced that it's harmless and that the driver can safely suppress it, since the driver is the one doing the interrupting of this background thread. This is essentially a race condition that has always existed, but it became much more common to hit in 3.2 when we fixed

I'm going to call this a minor bug and use it as the impetus to suppress the log message.
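The race described above, reduced to plain Java with no driver involved: an owner interrupts its own background thread while that thread is blocked, and the resulting InterruptedException is expected rather than a sign of failure.

```java
// Stand-alone illustration of the benign interrupt: the "driver" starts a
// background maintenance thread, then deliberately interrupts it. The
// InterruptedException that results is self-inflicted and harmless.
public class InterruptRaceDemo {
    static volatile boolean sawInterrupt = false;

    public static void main(String[] args) throws Exception {
        Thread maintenance = new Thread(() -> {
            try {
                Thread.sleep(10_000); // stands in for blocking pool work
            } catch (InterruptedException e) {
                sawInterrupt = true;  // expected: the owner shut us down
            }
        });
        maintenance.start();
        maintenance.interrupt();      // owner no longer wants this server
        maintenance.join();
        System.out.println("interrupted as expected: " + sawInterrupt);
    }
}
```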
| Comment by Alex Paransky [ 25/Feb/16 ] |

You are absolutely correct. These servers ARE in a 3-node replica set. We have all the nodes in DNS, so when trying to connect, we specify the DNS name. When using the DNS name, I see the exception. When using the "exact" replica name as it appears in rs.status(), I don't see the exception. As I mentioned, this exception does not really reach the client. However, since it appears at WARN, our ops guys get a bit jumpy. This does not appear to happen in drivers prior to 3.2. Is using DNS for the servers not a supported configuration? I kinda wanted to avoid having to modify the configurations of all our clients when we add or remove nodes from the replica set.
| Comment by Jeffrey Yemin [ 25/Feb/16 ] |

Is this connecting to a member of a replica set? If so, does "serverInAWS" match the hostname used in the replica set configuration (rs.status() in the shell)? If it doesn't, try changing it so that it does, and let's see if that gets rid of the log message. If the issue is what I suspect, I agree with your assessment that this log message is harmless to the application.
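The suspected mismatch can be shown as a plain string comparison (the host names below are hypothetical; the driver's real comparison goes through ServerAddress equality, but the idea is the same): a DNS alias in the seed list compares unequal to the name the replica set reports for itself, so the driver discards the initial connection and reconnects by the reported name.

```java
// Toy illustration of the seed-list vs. replica-config hostname mismatch.
// The replica set reports members by the names in its own configuration
// (what rs.status() shows); an alias in the seed list compares unequal.
public class SeedListMismatchDemo {
    static boolean matchesReplicaConfig(String seedAddress, String reportedAddress) {
        return seedAddress.equalsIgnoreCase(reportedAddress);
    }

    public static void main(String[] args) {
        // Hypothetical names: a DNS alias vs. the name in the replica config.
        System.out.println(matchesReplicaConfig(
                "serverInAWS:27017", "ip-10-0-0-12.ec2.internal:27017")); // mismatch
        System.out.println(matchesReplicaConfig(
                "ip-10-0-0-12.ec2.internal:27017", "ip-10-0-0-12.ec2.internal:27017")); // match
    }
}
```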
| Comment by Alex Paransky [ 25/Feb/16 ] |

Strange, here are the various snippets that I have tried:

As written above, I get the exception, but only when I try to connect to our server in AWS using the List<ServerAddress> parameter. So, as written, it fails. However, if I replace our AWS server with localhost, it works with all options. Also, if I don't use the List<ServerAddress> parameter, it works with our AWS server as well. This feels a bit odd. Here is the full exception stack trace:

The exception is only a WARNING and does not appear to float up to the client. So my client does not actually see any issues; it only appears in the log.
| Comment by Jeffrey Yemin [ 25/Feb/16 ] |

Do you have a test program that reproduces this? The simple one I tried did not:
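Jeff's actual test program was not captured in this export. Based only on the discussion above, a minimal attempt along those lines (connect via a seed list whose host name is a DNS alias differing from the replica set configuration, with a non-zero minConnectionsPerHost, then watch the log) might look like the following; the host name is a placeholder:

```java
import java.util.Arrays;

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class Repro {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical repro sketch, not Jeff's original program (which was
        // lost in this export). Use a DNS alias that differs from the
        // hostname in the replica set configuration to trigger the WARN.
        MongoClientOptions options = MongoClientOptions.builder()
                .minConnectionsPerHost(100)
                .build();
        MongoClient client = new MongoClient(
                Arrays.asList(new ServerAddress("dns-alias-for-member", 27017)),
                options);
        Thread.sleep(5000); // give the background pool task time to log
        client.close();
    }
}
```

This requires the mongodb-driver jar and a live replica set, so it cannot be verified stand-alone.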