|
Sir, almost none of the websites or StackOverflow answers offered a solution, but you saved my day with a simple one. As you suggested, reducing the maxConnectionIdleTime solved the socket exception: org.springframework.data.mongodb.UncategorizedMongoDbException: Exception receiving message; nested exception is com.mongodb.MongoSocketReadException. Once again, thank you so much, sir!
|
|
Ah ok,
I can consistently trigger the socket exception. After the second set of these messages:
2017-08-25 17:05:21 DEBUG connection:56 - Pruning pooled connections to {SERVER-02}:27017
|
2017-08-25 17:05:23 DEBUG connection:56 - Pruning pooled connections to {SERVER-00}:27017
|
2017-08-25 17:05:23 DEBUG connection:56 - Pruning pooled connections to {SERVER-01}:27017
|
the maintenance has run twice, and the first request after that gets the SocketException because the connection pool has been killed.
Can you have a look at the log.txt and see if it's related to this issue or something else entirely?
Many thanks.
Regards, Jork
|
|
Hi jork_zijlstra
That message you're seeing is logged every time the pruning thread wakes up. It doesn't mean it actually found any connections to prune.
When a connection is actually pruned, you'll see an info log message like this:
2017-08-25 14:41:48 INFO connection:56 - Closed connection ... to {server-00}:27017 because it is past its maximum allowed idle time.
|
|
|
Hi jeff.yemin
"but it won't actually close them unless the idle time has been reached" if this is the case and I use the settings:
maxConnectionIdleTime = 4 minutes
maintenanceFrequencyMS = default (1 minute, so I'm not setting it)
I'm only closing the MongoClient on application stop, so why am I seeing the following in the logs every minute:
2017-08-25 14:40:48 DEBUG connection:56 - Pruning pooled connections to {server-00}:27017
|
2017-08-25 14:40:50 DEBUG connection:56 - Pruning pooled connections to {server-01}:27017
|
2017-08-25 14:40:50 DEBUG connection:56 - Pruning pooled connections to {server-02}:27017
|
|
....
|
2017-08-25 14:41:48 DEBUG connection:56 - Pruning pooled connections to {server-00}:27017
|
2017-08-25 14:41:50 DEBUG connection:56 - Pruning pooled connections to {server-01}:27017
|
2017-08-25 14:41:50 DEBUG connection:56 - Pruning pooled connections to {server-02}:27017
|
Updating maintenanceFrequencyMS to 2 minutes likewise makes the log entries appear 2 minutes apart. So my connections are being pruned on each maintenance cycle even though the maxConnectionIdleTime hasn't been reached.
|
|
jork_zijlstra
There is no implicit rule that maxConnectionIdleTime < maintenanceFrequencyMS. The maintenance frequency is used to control how often a background thread checks for idle connections in order to close them, but it won't actually close them unless the idle time has been reached. The algorithm is essentially this:
while (true)
    sleep for maintenance frequency
    for conn in pool
        if conn idle timeout reached
            close conn
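The same loop as a runnable Java sketch (illustrative only — IdlePruner, PooledConnection, and lastUsedAt are stand-ins, not the driver's actual internals):

import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class IdlePruner {
    static final long MAX_IDLE_MS = TimeUnit.MINUTES.toMillis(4);    // maxConnectionIdleTime
    static final long MAINTENANCE_MS = TimeUnit.MINUTES.toMillis(1); // maintenanceFrequencyMS

    static class PooledConnection {
        volatile long lastUsedAt = System.currentTimeMillis();
        void close() { /* release the underlying socket */ }
    }

    final Queue<PooledConnection> pool = new ConcurrentLinkedQueue<>();
    final ScheduledExecutorService maintenance = Executors.newSingleThreadScheduledExecutor();

    void start() {
        // The task wakes up every maintenance interval (which is what produces
        // the "Pruning pooled connections" debug message), but it only closes
        // a connection whose idle time has actually been exceeded.
        maintenance.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            for (Iterator<PooledConnection> it = pool.iterator(); it.hasNext(); ) {
                PooledConnection conn = it.next();
                if (now - conn.lastUsedAt > MAX_IDLE_MS) {
                    it.remove();
                    conn.close();
                }
            }
        }, MAINTENANCE_MS, MAINTENANCE_MS, TimeUnit.MILLISECONDS);
    }
}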
|
In addition, connections are checked for idle timeout as they are checked out of the pool. Given this, I'm not sure why your application is experiencing socket exceptions.
|
|
Hi jeff.yemin, thanks for the input.
The idleTimeout setting wasn't working for me, but I think I've found out what the issue is. I had maxConnectionIdleTime set to 4 minutes in the application, but I saw in the logs that the connections are pruned every minute. So between 1 and 4 minutes I would still get the com.mongodb.MongoSocketReadException. If I put maxConnectionIdleTime below 1 minute, it seemed to work.
So I had a look at the code and found out that there is another setting, `maintenanceFrequencyMS`, which defaults to 1 minute:
private long maintenanceFrequencyMS = MILLISECONDS.convert(1, MINUTES);
|
This setting is why my connections are pruned every minute. So it seems you must apply the rule maxConnectionIdleTime < maintenanceFrequencyMS, or there will be a window in which SocketExceptions can occur.
I haven't been able to find a reference to this setting in the docs, and I'm wondering why the implicit rule of maxConnectionIdleTime < maintenanceFrequencyMS exists.
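For completeness, here's a sketch of how I'd set both values explicitly (assuming the 3.x ConnectionPoolSettings builder, which I believe also exposes maintenanceFrequency; the values are just illustrative):

import java.util.concurrent.TimeUnit;
import com.mongodb.connection.ConnectionPoolSettings;

// Illustrative values: keep the idle timeout shorter than the maintenance interval.
ConnectionPoolSettings poolSettings = ConnectionPoolSettings.builder()
        .maxConnectionIdleTime(50, TimeUnit.SECONDS)
        .maintenanceFrequency(1, TimeUnit.MINUTES)
        .build();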
Regards, Jork
|
|
jork_zijlstra
The fix was in the Atlas free tier proxy itself, and it was simply to stop closing idle connections.
|
|
Hi,
It took me a while to work with MongoDB and the Scala driver again, since I switched companies. At my new company we have been using our own MongoDB cluster. Using Scala driver 2.1.0 with this cluster (without SSL, with credentials) also caused this error:
2017-08-22 09:58:59 TRACE connection:41 - Processing unknown failed message: 110. ([connectionId{localValue:4, serverValue:216362}] ServerId{clusterId=ClusterId{value='599be328d0c4767840043b82', description='null'}, address={MONGO-SERVER-00}:27017})
|
2017-08-22 09:58:59 WARN connection:86 - Got socket exception on connection [connectionId{localValue:4, serverValue:216362}] to {MONGO-SERVER-00}:27017. All connections to {MONGO-SERVER-00}:27017 will be closed.
|
2017-08-22 09:58:59 DEBUG connection:56 - Invalidating the connection pool
|
I have added the workaround for maxConnectionIdleTime. A quick look suggests that it works, but I still need to test whether there are exceptions during the connection-pruning period.
jeff.yemin
What was the fix for the Atlas free tier? I'm wondering if we might need to duplicate it on our own clusters.
|
|
Hi Jeff,
Apologies for the late response.
Thanks for letting us know about deployed changes. I'll get back if I find further issues.
Cheers
|
|
The Atlas Free Tier proxy changes have been deployed, and my test program now works properly with the maxConnectionIdleTime workaround removed.
As I haven't heard back from you regarding my last question, I am closing this issue. Please comment back if you're still having problems, and we can investigate further.
|
|
I'm testing now with the following code:
String uri = ...
MongoClientOptions.Builder builder = MongoClientOptions.builder()
        .maxConnectionIdleTime(1000 * 60 * 4); // 4 minutes, in milliseconds

MongoClient client = new MongoClient(new MongoClientURI(uri, builder));

// noinspection InfiniteLoopStatement
for (;;) {
    try {
        System.out.println(client.getDatabase("admin").runCommand(new Document("ping", 1)));
        Thread.sleep(1000 * 60 * 6); // sleep longer than the idle limit
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
|
And so far no exceptions. With INFO logging enabled I see:
09:55:30.293 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[cluster0-shard-00-00-1endg.mongodb.net:27017, cluster0-shard-00-01-1endg.mongodb.net:27017, cluster0-shard-00-02-1endg.mongodb.net:27017], mode=MULTIPLE, requiredClusterType=REPLICA_SET, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500, requiredReplicaSetName='Cluster0-shard-0'}
|
09:55:30.298 [main] INFO org.mongodb.driver.cluster - Adding discovered server cluster0-shard-00-00-1endg.mongodb.net:27017 to client view of cluster
|
09:55:30.402 [main] INFO org.mongodb.driver.cluster - Adding discovered server cluster0-shard-00-01-1endg.mongodb.net:27017 to client view of cluster
|
09:55:30.405 [main] INFO org.mongodb.driver.cluster - Adding discovered server cluster0-shard-00-02-1endg.mongodb.net:27017 to client view of cluster
|
09:55:30.436 [main] INFO org.mongodb.driver.cluster - No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=cluster0-shard-00-02-1endg.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=cluster0-shard-00-01-1endg.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, ServerDescription{address=cluster0-shard-00-00-1endg.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
|
09:55:30.898 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-01-1endg.mongodb.net:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:3, serverValue:10629}] to cluster0-shard-00-01-1endg.mongodb.net:27017
|
09:55:30.898 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-00-1endg.mongodb.net:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:1, serverValue:61601}] to cluster0-shard-00-00-1endg.mongodb.net:27017
|
09:55:30.898 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-02-1endg.mongodb.net:27017] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:2, serverValue:11087}] to cluster0-shard-00-02-1endg.mongodb.net:27017
|
09:55:30.928 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-02-1endg.mongodb.net:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=cluster0-shard-00-02-1endg.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 4]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=24423901, setName='Cluster0-shard-0', canonicalAddress=cluster0-shard-00-02-1endg.mongodb.net:27017, hosts=[cluster0-shard-00-02-1endg.mongodb.net:27017, cluster0-shard-00-01-1endg.mongodb.net:27017, cluster0-shard-00-00-1endg.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-00-1endg.mongodb.net:27017', tagSet=TagSet{[]}, electionId=null, setVersion=1, lastWriteDate=Thu Jun 29 09:55:30 EDT 2017, lastUpdateTimeNanos=605716872164901}
|
09:55:30.928 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-01-1endg.mongodb.net:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=cluster0-shard-00-01-1endg.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 4]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=27265090, setName='Cluster0-shard-0', canonicalAddress=cluster0-shard-00-01-1endg.mongodb.net:27017, hosts=[cluster0-shard-00-02-1endg.mongodb.net:27017, cluster0-shard-00-01-1endg.mongodb.net:27017, cluster0-shard-00-00-1endg.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-00-1endg.mongodb.net:27017', tagSet=TagSet{[]}, electionId=null, setVersion=1, lastWriteDate=Thu Jun 29 09:55:30 EDT 2017, lastUpdateTimeNanos=605716874405036}
|
09:55:30.928 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-00-1endg.mongodb.net:27017] INFO org.mongodb.driver.cluster - Monitor thread successfully connected to server with description ServerDescription{address=cluster0-shard-00-00-1endg.mongodb.net:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 4]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=24483467, setName='Cluster0-shard-0', canonicalAddress=cluster0-shard-00-00-1endg.mongodb.net:27017, hosts=[cluster0-shard-00-02-1endg.mongodb.net:27017, cluster0-shard-00-01-1endg.mongodb.net:27017, cluster0-shard-00-00-1endg.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-00-1endg.mongodb.net:27017', tagSet=TagSet{[]}, electionId=7fffffff0000000000000005, setVersion=1, lastWriteDate=Thu Jun 29 09:55:30 EDT 2017, lastUpdateTimeNanos=605716872174227}
|
09:55:30.930 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-00-1endg.mongodb.net:27017] INFO org.mongodb.driver.cluster - Setting max election id to 7fffffff0000000000000005 from replica set primary cluster0-shard-00-00-1endg.mongodb.net:27017
|
09:55:30.930 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-00-1endg.mongodb.net:27017] INFO org.mongodb.driver.cluster - Setting max set version to 1 from replica set primary cluster0-shard-00-00-1endg.mongodb.net:27017
|
09:55:30.930 [cluster-ClusterId{value='595506d2591cf03b6c5a03ec', description='null'}-cluster0-shard-00-00-1endg.mongodb.net:27017] INFO org.mongodb.driver.cluster - Discovered replica set primary cluster0-shard-00-00-1endg.mongodb.net:27017
|
09:55:31.169 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:4, serverValue:54368}] to cluster0-shard-00-00-1endg.mongodb.net:27017
|
Document{{ok=1}}
|
10:00:30.390 [pool-1-thread-1] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:4, serverValue:54368}] to cluster0-shard-00-00-1endg.mongodb.net:27017 because it is past its maximum allowed idle time.
|
10:01:31.597 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:56144}] to cluster0-shard-00-00-1endg.mongodb.net:27017
|
Document{{ok=1}}
|
10:06:30.309 [pool-1-thread-1] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:5, serverValue:56144}] to cluster0-shard-00-00-1endg.mongodb.net:27017 because it is past its maximum allowed idle time.
|
Document{{ok=1}}
|
...
|
Can you post your test program and client logs?
|
|
Hi @Jeff Yemin,
After having the workaround in place for a bit, I've observed the same issue on my side.
2017-06-28 14:44:32 com.mongodb.diagnostics.logging.JULLogger log [WARNING] Got socket exception on connection [connectionId{localValue:5, serverValue:46062}] to cluster0-shard-xxx:27017. All connections to cluster0-shard-xxx:27017 will be closed.
|
2017-06-28 14:44:32 com.mongodb.diagnostics.logging.JULLogger log [INFO] Closed connection [connectionId{localValue:5, serverValue:46062}] to cluster0-shard-xxx:27017 because there was a socket exception raised on another connection from this pool.
|
com.mongodb.MongoSocketWriteException: Exception sending message
|
at com.mongodb.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:465)
|
at com.mongodb.connection.InternalStreamConnection.access$1000(InternalStreamConnection.java:66)
|
at com.mongodb.connection.InternalStreamConnection$2.failed(InternalStreamConnection.java:323)
|
at com.mongodb.connection.netty.NettyStream$3.operationComplete(NettyStream.java:187)
|
at com.mongodb.connection.netty.NettyStream$3.operationComplete(NettyStream.java:183)
|
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
|
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
|
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
|
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
|
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:906)
|
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:791)
|
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1291)
|
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:739)
|
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:731)
|
at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
|
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1090)
|
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1137)
|
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1079)
|
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
|
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
|
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
|
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
|
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
|
at java.lang.Thread.run(Thread.java:745)
|
Caused by: java.nio.channels.ClosedChannelException
|
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
|
|
|
@Jeff Yemin: Great news, thanks for looking into it and the workaround.
|
|
We've confirmed that this is an implementation issue with Atlas free tier. We'll notify you once a fix has been deployed.
Thank you for the bug report!
|
|
We're looking into a possible root cause in the Atlas free tier implementation, but in the meantime you can work around this using the MongoClientOptions.maxConnectionIdleTime property. Setting it to 240000 (4 minutes, as the value is in milliseconds) will force any connection that has been idle for more than 4 minutes to be closed, and should avoid any application-visible exceptions being raised.
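A minimal sketch of that workaround with the synchronous driver (`<uri>` stands for your own connection string):

MongoClientOptions.Builder builder = MongoClientOptions.builder()
        .maxConnectionIdleTime(240000); // 4 minutes, in milliseconds
MongoClient client = new MongoClient(new MongoClientURI("<uri>", builder));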
|
|
@Jeff Yemin: Free tier cluster so far. I encountered this problem at the beginning of testing.
|
|
munsuri, are you connecting to an Atlas free tier cluster or a paid cluster?
|
|
I can get a similar exception using the synchronous driver:
MongoClient client = new MongoClient(new MongoClientURI("<uri>"));

for (;;) {
    try {
        client.getDatabase("admin").runCommand(new Document("ping", 1));
        Thread.sleep(1000 * 60 * 10);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
|
15:33:24.577 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:4, serverValue:19854}] to cluster0-shard-00-00-1endg.mongodb.net:27017
|
15:43:24.613 [main] WARN org.mongodb.driver.connection - Got socket exception on connection [connectionId{localValue:4, serverValue:19854}] to cluster0-shard-00-00-1endg.mongodb.net:27017. All connections to cluster0-shard-00-00-1endg.mongodb.net:27017 will be closed.
|
15:43:24.614 [main] INFO org.mongodb.driver.connection - Closed connection [connectionId{localValue:4, serverValue:19854}] to cluster0-shard-00-00-1endg.mongodb.net:27017 because there was a socket exception raised by this connection.
|
Exception in thread "main" com.mongodb.MongoSocketReadException: Prematurely reached end of stream
|
at com.mongodb.connection.SocketStream.read(SocketStream.java:88)
|
at com.mongodb.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:494)
|
at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:224)
|
at com.mongodb.connection.UsageTrackingInternalConnection.receiveMessage(UsageTrackingInternalConnection.java:96)
|
at com.mongodb.connection.DefaultConnectionPool$PooledConnection.receiveMessage(DefaultConnectionPool.java:440)
|
at com.mongodb.connection.CommandProtocol.execute(CommandProtocol.java:112)
|
at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:168)
|
at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:289)
|
at com.mongodb.connection.DefaultServerConnection.command(DefaultServerConnection.java:176)
|
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:216)
|
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:187)
|
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:179)
|
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:92)
|
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:85)
|
at com.mongodb.operation.CommandReadOperation.execute(CommandReadOperation.java:55)
|
at com.mongodb.Mongo.execute(Mongo.java:836)
|
at com.mongodb.Mongo$2.execute(Mongo.java:823)
|
at com.mongodb.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:137)
|
at com.mongodb.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:131)
|
at com.mongodb.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:121)
|
at JAVA2535.main(JAVA2535.java:27)
|
This is with an Atlas free tier cluster.
|
|
Hi jork_zijlstra,
Apologies for the delay; this is something I'm now actively investigating. From the reports it looks like an integration issue with Netty's SSLEngine, so I'll look into how that works and whether there are any specific failure scenarios causing the issue.
Ross
|
|
Hi ross.lawley,
I added a heartbeat to our application that triggers a MongoDB search returning 0 records. I started by scheduling it every 10 minutes, but the same error still occurred. Eventually I set it to every minute, and then the error seemed to disappear.
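In case it's useful, this is roughly what the heartbeat amounts to (a sketch; `client` is an already-constructed MongoClient, and I use a ping command here in place of the application's search):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.bson.Document;

// Keep-alive heartbeat: run a cheap command on a schedule so pooled
// connections never sit idle long enough for the other side to drop them.
ScheduledExecutorService heartbeat = Executors.newSingleThreadScheduledExecutor();
heartbeat.scheduleAtFixedRate(
        () -> client.getDatabase("admin").runCommand(new Document("ping", 1)),
        1, 1, TimeUnit.MINUTES);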
I also tested against a MongoDB cluster from mLab. The only difference in settings is that I can disable SSL there, which is not an option for MongoDB Atlas. On the non-SSL version I put the heartbeat at 1, 2 and 5 minutes, and the exception doesn't occur at all.
This is not entirely what I was expecting, since I'm also using the NettyStreamFactoryFactory when SSL is disabled.
Enabling SSL seems to be what triggers these exceptions.
Any updates so far?
Regards, Jork
|
|
Hi Ross,
I thought it would be best to post this log in this issue to keep everything together.
Here is my application log showing what Netty is actually doing. I replaced some values in it, but MONGO-SERVER-00 is my primary MongoDB Atlas server.
I see at least these meaningful messages:
2017-06-12 14:44:21,413 [WARN] from org.mongodb.driver.connection in nioEventLoopGroup-3-4 - Got socket exception on connection [connectionId{localValue:4, serverValue:21051675}] to {MONGO-SERVER-00}.mongodb.net:27017. All connections to {MONGO-SERVER-00}.mongodb.net:27017 will be closed.
|
2017-06-12 14:44:21,413 [DEBUG] from org.mongodb.driver.connection in nioEventLoopGroup-3-4 - Invalidating the connection pool
|
and
2017-06-12 14:44:21,415 [INFO] from org.mongodb.driver.connection in nioEventLoopGroup-3-4 - Closed connection [connectionId{localValue:4, serverValue:21051675}] to {MONGO-SERVER-00}.mongodb.net:27017 because there was a socket exception raised on another connection from this pool.
|
I hope this is helpful.
Regards, Jork
|
|
Hi Ross,
I am encountering the same issues with the Scala driver on MongoDB Atlas.
As I am quite new to the Mongo driver and Netty in general, is there a suggested workaround to get past this?
Happy to provide more information on request.
// dependency versions
"io.netty" % "netty-all" % "4.1.12.Final",
"org.mongodb.scala" %% "mongo-scala-driver" % "2.1.0"

// Connector code
import java.util.concurrent.TimeUnit

import com.mongodb.ConnectionString
import com.typesafe.config.ConfigFactory
import org.mongodb.scala.connection._
import org.mongodb.scala.{MongoClient, MongoClientSettings, MongoDatabase}

trait Connector {
  def database: MongoDatabase
}

protected[mongo] object MongoConnector extends Connector {

  private[this] lazy val config = ConfigFactory.load() getConfig "application.mongo"

  private[this] lazy val hosts = config getString "hosts"
  private[this] lazy val databaseName = config getString "database"
  private[this] lazy val username = config getString "username"
  private[this] lazy val password = config getString "password"
  private[this] lazy val options = config getString "options"

  private[this] lazy val usernamePassword = s"$username:$password@"
  private[this] lazy val uri = s"mongodb://$usernamePassword$hosts/$databaseName$options"

  val connectionString = new ConnectionString(uri)

  val settings: MongoClientSettings = MongoClientSettings.builder()
    .clusterSettings(ClusterSettings.builder().applyConnectionString(connectionString).build())
    .connectionPoolSettings(ConnectionPoolSettings.builder().applyConnectionString(connectionString).maxConnectionIdleTime(1, TimeUnit.DAYS).build())
    .serverSettings(ServerSettings.builder().build())
    .credentialList(connectionString.getCredentialList)
    .socketSettings(SocketSettings.builder().applyConnectionString(connectionString).build())
    .sslSettings(SslSettings.builder().enabled(true).build())
    .streamFactoryFactory(NettyStreamFactoryFactory())
    .build()

  private[this] lazy val client: MongoClient = MongoClient(settings)

  lazy val database: MongoDatabase = client.getDatabase(databaseName)
}
|
com.mongodb.MongoSocketWriteException: Exception sending message
|
at com.mongodb.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:465)
|
at com.mongodb.connection.InternalStreamConnection.access$1000(InternalStreamConnection.java:66)
|
at com.mongodb.connection.InternalStreamConnection$2.failed(InternalStreamConnection.java:323)
|
at com.mongodb.connection.netty.NettyStream$3.operationComplete(NettyStream.java:187)
|
at com.mongodb.connection.netty.NettyStream$3.operationComplete(NettyStream.java:183)
|
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
|
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
|
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
|
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
|
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
|
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
|
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462)
|
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
|
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
|
at java.lang.Thread.run(Thread.java:748)
|
Caused by: java.nio.channels.ClosedChannelException: null
|
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
|
|
|
For your reference, on my last try I left it idle for around 35 minutes before manually running a query:
2017-06-09 17:00:07 HelloWorldEmbedded [INFO] doMongoFindWithOptions
|
2017-06-09 17:00:08 com.mongodb.diagnostics.logging.JULLogger log [INFO] Opened connection [connectionId{localValue:4, serverValue:369162}] to clusterxxx.mongodb.net:27017
|
2017-06-09 17:00:10 HelloWorldEmbedded [INFO] Result mongo doMongoFindWithOptions
|
2017-06-09 17:36:52 HelloWorldEmbedded [INFO] doMongoFindWithOptions
|
2017-06-09 17:36:52 com.mongodb.diagnostics.logging.JULLogger log [WARNING] Got socket exception on connection [connectionId{localValue:4, serverValue:369162}] to clustexxxmongodb.net:27017. All connections to clusterxxx.mongodb.net:27017 will be closed.
|
2017-06-09 17:36:52 com.mongodb.diagnostics.logging.JULLogger log [INFO] Closed connection [connectionId{localValue:4, serverValue:369162}] to clusterxxx.mongodb.net:27017 because there was a socket exception raised on another connection from this pool.
|
2017-06-09 17:36:52 com.mongodb.diagnostics.logging.JULLogger log [INFO] Discovered replica set primary clusterxxx.mongodb.net:27017
|
com.mongodb.MongoSocketWriteException: Exception sending message
|
at com.mongodb.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:465)
|
at com.mongodb.connection.InternalStreamConnection.access$1000(InternalStreamConnection.java:66)
|
at com.mongodb.connection.InternalStreamConnection$2.failed(InternalStreamConnection.java:323)
|
at com.mongodb.connection.netty.NettyStream$3.operationComplete(NettyStream.java:187)
|
at com.mongodb.connection.netty.NettyStream$3.operationComplete(NettyStream.java:183)
|
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
|
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
|
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
|
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
|
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:906)
|
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:791)
|
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1291)
|
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:739)
|
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:731)
|
at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
|
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1090)
|
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1137)
|
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1079)
|
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
|
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
|
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
|
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
|
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
|
at java.lang.Thread.run(Thread.java:745)
|
Caused by: java.nio.channels.ClosedChannelException
|
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
|
|
|
Hi Ross,
Here is a small test case. You can trigger the Mongo queries by browsing to http://localhost:8080 or by uncommenting the periodic task. Sometimes it takes a while to get the exception; so far, the longer I leave it idle, the more often it happens.
Please note that you have to populate the static String attributes with your own values.
HelloWorldEmbedded.java
package io.vertx.example;

import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.mongo.MongoClient;
import io.vertx.ext.mongo.FindOptions;

import io.vertx.core.logging.Logger;
import io.vertx.core.logging.LoggerFactory;

// To run: mvn package && java -jar target/maven-simplest-3.4.1-fat.jar

public class HelloWorldEmbedded {

    private static Logger logger = LoggerFactory.getLogger(HelloWorldEmbedded.class.getSimpleName());

    public final static String MONGODBATLAS = "mongodbatlas";
    public final static String COLLECTION = "collection";
    public final static String DBNAME = "dbname";
    public final static String CONNECTION_STRING = "mongodb://xxx:xxx@clusterxxx.mongodb.net:27017," +
            "clusterxxx.mongodb.net:27017,clusterxxx.mongodb.net:27017/dbname?" +
            "streamType=netty&sslInvalidHostNameAllowed=true&ssl=true&replicaSet=Cluster0-shard-0&authSource=admin";
    public final static int DOCSLIMIT = 20;

    public static MongoClient mongo;

    public static void main(String[] args) {
        System.setProperty("java.util.logging.SimpleFormatter.format", "%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %2$s [%4$s] %5$s%6$s%n");
        mongo = setupAtlasMongoDB(Vertx.vertx());

        Vertx.vertx().createHttpServer().requestHandler(req -> {
            req.response().end("Hello World!");
            doMongoFindWithOptions(mongo);
        }).listen(8080);

        /*Vertx.vertx().setPeriodic(60 * 45 * 1000, h -> {
            logger.info("Periodic mongo findWithOptions");
            doMongoFindWithOptions(mongo);
        });*/
    }

    public static void doMongoFindWithOptions(MongoClient mongo) {
        FindOptions options = new FindOptions();
        options.setLimit(DOCSLIMIT);
        //options.setSkip( limit * ( page - 1 ) );

        mongo.findWithOptions(COLLECTION, new JsonObject(), options, results -> {
            if (results.failed())
                results.cause().printStackTrace();
            else
                logger.info("Result mongo doMongoFindWithOptions");
        });
    }

    private static MongoClient setupAtlasMongoDB(Vertx vertx) {
        System.setProperty("org.mongodb.async.type", "netty");

        JsonObject mongoConfig = new JsonObject()
                .put("db_name", DBNAME)
                .put("connection_string", CONNECTION_STRING);

        return MongoClient.createShared(vertx, mongoConfig, MONGODBATLAS);
    }
}
|
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>io.vertx</groupId>
  <artifactId>maven-simplest</artifactId>
  <version>3.4.1</version>

  <properties>
    <!-- the main class -->
    <exec.mainClass>io.vertx.example.HelloWorldEmbedded</exec.mainClass>
  </properties>

  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-core</artifactId>
      <version>${project.version}</version>
    </dependency>

    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-web</artifactId>
      <version>${project.version}</version>
    </dependency>

    <!--
      [INFO] Including org.mongodb:mongodb-driver-async:jar:3.4.1 in the shaded jar.
      [INFO] Including org.mongodb:mongodb-driver-core:jar:3.4.1 in the shaded jar.
      [INFO] Including org.mongodb:bson:jar:3.4.1 in the shaded jar.
    -->
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-mongo-client</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>

  <build>

    <pluginManagement>
      <plugins>
        <!-- We specify the Maven compiler plugin as we need to set it to Java 1.8 -->
        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.1</version>
          <configuration>
            <source>1.8</source>
            <target>1.8</target>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>

    <!--
      You only need the part below if you want to build your application into a fat executable jar.
      This is a jar that contains all the dependencies required to run it, so you can just run it with
      java -jar
    -->
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.3</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <manifestEntries>
                    <Main-Class>${exec.mainClass}</Main-Class>
                  </manifestEntries>
                </transformer>
                <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                  <resource>META-INF/services/io.vertx.core.spi.VerticleFactory</resource>
                </transformer>
              </transformers>
              <artifactSet>
              </artifactSet>
              <outputFile>${project.build.directory}/${project.artifactId}-${project.version}-fat.jar</outputFile>
            </configuration>
          </execution>
        </executions>
      </plugin>

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>1.4.0</version>
        <executions>
          <execution>
            <!-- run the application using the fat jar -->
            <id>run-app</id>
            <goals>
              <goal>exec</goal>
            </goals>
            <configuration>
              <executable>java</executable>
              <arguments>
                <argument>-jar</argument>
                <argument>target/${project.artifactId}-${project.version}-fat.jar</argument>
              </arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>

  </build>

  <profiles>
    <profile>
      <id>staging</id>
      <repositories>
        <repository>
          <id>staging</id>
          <url>https://oss.sonatype.org/content/repositories/snapshots</url>
        </repository>
      </repositories>
    </profile>
  </profiles>

</project>
|
Hope it helps,
Iván
|
|
Hi munsuri,
Thanks for the ticket, do you think you could create a test case that reproduces the issue? It would help us identify the cause.
Ross
|