[SERVER-20962] Improve UX for the new config servers as replica sets Created: 16/Oct/15  Updated: 17/Nov/15  Resolved: 14/Nov/15

Status: Closed
Project: Core Server
Component/s: Sharding
Affects Version/s: None
Fix Version/s: 3.2.0-rc3

Type: Improvement Priority: Major - P3
Reporter: John Page Assignee: Kaloian Manassiev
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Depends
depends on SERVER-20970 Allow single-node sharding config ser... Closed
depends on SERVER-20971 Improve the --configdb server option ... Closed
Backwards Compatibility: Fully Compatible
Sprint: Sharding C (11/20/15)
Participants:

 Description   

Attempting to set up a simple sharded cluster with MongoDB 3.1.9 for training and learning purposes is now far harder and more confusing than it was, and at first appears to fail in many ways. This makes a very bad first impression of MongoDB. Here are some points as to why.

3.1.9 requires the config server to be run as a (single-node) replica set - this itself causes issues, as it's a breaking change, and there are hundreds of getting-started-with-MongoDB tutorials that no longer work.

If you do make this mistake and then add --replSet config to your config server, you get an uninitialised replica set which you can't use - but it's not obvious why. Even when you log on you get a blank prompt, not a config[NOT YET REPLICA] or something similar.
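
For reference, here is a minimal sketch of the sequence that does work, drawn from the transcript in the comments below (27019 is the default --configsvr port):

# Terminal 1: start the config server as a single-node replica set
mongod --configsvr --replSet config --dbpath /data/configtest

# Terminal 2: connect and initiate the set - the prompt stays blank until this is done
mongo --port 27019
> rs.initiate()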

If you try to start mongos with

mongos --configdb localhost:27019 --port 27017

You get

BadValue Must have either 3 node legacy config servers, or a replica set config server
try 'mongos --help' for more information

but mongos --help gives you zero help, as it says only:

Sharding options:
--configdb arg 1 or 3 comma separated config servers

There are no clues that you need

mongos --configdb config/localhost:27019

Nor does it infer port 27019 as it did before, or infer a default replica set name.
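
For anyone hitting this, the invocation that works (per the transcript in the comments below) puts the replica set name given to --replSet before the slash, i.e. <replSetName>/<host1:port1>[,<host2:port2>,...]:

mongos --configdb config/localhost:27019 --port 27017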

The worst part - and this is a BUG, I think:

For approximately a minute after you bring the mongos up, attempts to connect get:

MacPro:~ jlp$ mongo --port 27017
MongoDB shell version: 3.1.9
connecting to: 127.0.0.1:27017/test
2015-10-16T09:47:27.251+0100 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, reason: errno:61 Connection refused
2015-10-16T09:47:27.251+0100 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:216:14
@(connect):1:6

exception: connect failed
MacPro:~ jlp$

during which time people will stop it and try again, as it appears to have failed.

If you wait approximately three minutes, you get the following in the mongos log:

2015-10-16T09:49:07.855+0100 W SHARDING [thread1] pinging failed for distributed lock pinger :: caused by :: findAndModify query predicate didn't match any lock document
2015-10-16T09:49:08.273+0100 I SHARDING [Balancer] about to contact config servers and shards
2015-10-16T09:49:08.273+0100 I NETWORK [mongosMain] waiting for connections on port 27017
2015-10-16T09:49:08.274+0100 I SHARDING [Balancer] config servers and shards contacted successfully
2015-10-16T09:49:08.274+0100 I SHARDING [Balancer] balancer id: MacPro.local:27017 started
2015-10-16T09:49:08.308+0100 I SHARDING [Balancer] distributed lock 'balancer' acquired, ts : 5620ba04600c6c11c7053c0b
2015-10-16T09:49:08.394+0100 I SHARDING [Balancer] distributed lock with ts: 5620ba04600c6c11c7053c0b' unlocked.

and you can then connect.
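
Until that is fixed, a crude workaround is to poll until the mongos actually accepts connections - a sketch, assuming bash and the mongo shell on the PATH (the shell exits non-zero while the connection is refused):

until mongo --port 27017 --quiet --eval "db.adminCommand('ping')"; do
    sleep 2
done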

This is just the kind of UX that would turn me and many others away from MongoDB in that golden first 30 minutes of trying it.



 Comments   
Comment by Kaloian Manassiev [ 11/Nov/15 ]

The two UX issues in the related tickets have been fixed.

We were unable to reproduce the third problem listed, where a connection could not be established to the mongos for a minute. john.page@mongodb.com, please give it a try when RC3 comes out (or with the latest nightly), and if you are still able to reproduce it, file a new ticket.

Comment by Githook User [ 22/Oct/15 ]

Author: Kaloian Manassiev (kaloianm) <kaloian.manassiev@mongodb.com>

Message: SERVER-20962 Improve the shardCollection command logging

Upon completion of the shardCollection sequence, log shardCollection.end
in order to make searching in the logs (and in particular combined logs
from js tests) easier.
Branch: master
https://github.com/mongodb/mongo/commit/6a5514890015703157d5cc7a1d79ea04b4ec45da

Comment by John Page [ 19/Oct/15 ]

Three terminal windows on OS X:

Terminal 1 - MongoD (config, replset of 1)

Last login: Mon Oct 19 11:15:17 on ttys003
MacPro:~ jlp$ mongod --version
db version v3.2.0-rc0
git version: bf28bd20fa507c4d8cc5919dfbbe87b7750ae8b0
OpenSSL version: OpenSSL 0.9.8zg 14 July 2015
allocator: system
modules: enterprise 
build environment:
    distarch: x86_64
    target_arch: x86_64
MacPro:~ jlp$ mkdir /data/configtest
MacPro:~ jlp$ mongod --configsvr --replSet config --dbpath /data/configtest
2015-10-19T11:16:32.348+0100 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] MongoDB starting : pid=17665 port=27019 dbpath=/data/configtest 64-bit host=MacPro.local
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] 
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] db version v3.2.0-rc0
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] git version: bf28bd20fa507c4d8cc5919dfbbe87b7750ae8b0
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 0.9.8zg 14 July 2015
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] allocator: system
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] modules: enterprise 
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] build environment:
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten]     distarch: x86_64
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten]     target_arch: x86_64
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] options: { replication: { replSet: "config" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/configtest" } }
2015-10-19T11:16:32.901+0100 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
2015-10-19T11:16:32.901+0100 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
2015-10-19T11:16:32.901+0100 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2015-10-19T11:16:32.901+0100 I FTDC     [initandlisten] Starting full-time diagnostic data capture with directory '/data/configtest/diagnostic.data'
2015-10-19T11:16:32.940+0100 I NETWORK  [initandlisten] waiting for connections on port 27019
2015-10-19T11:16:42.499+0100 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49216 #1 (1 connection now open)
2015-10-19T11:16:46.736+0100 I COMMAND  [conn1] initiate : no configuration specified. Using a default configuration for the set
2015-10-19T11:16:46.736+0100 I COMMAND  [conn1] created this configuration for initiation : { _id: "config", version: 1, members: [ { _id: 0, host: "MacPro.local:27019" } ] }
2015-10-19T11:16:46.738+0100 I REPL     [conn1] replSetInitiate admin command received from client
2015-10-19T11:16:46.867+0100 I REPL     [conn1] replSetInitiate config object with 1 members parses ok
2015-10-19T11:16:47.986+0100 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "config", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "MacPro.local:27019", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2015-10-19T11:16:47.986+0100 I REPL     [ReplicationExecutor] This node is MacPro.local:27019 in the config
2015-10-19T11:16:47.986+0100 I REPL     [ReplicationExecutor] transition to STARTUP2
2015-10-19T11:16:47.990+0100 I REPL     [conn1] ******
2015-10-19T11:16:47.990+0100 I REPL     [conn1] creating replication oplog of size: 5MB...
2015-10-19T11:16:48.013+0100 I STORAGE  [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
2015-10-19T11:16:48.013+0100 I STORAGE  [conn1] Scanning the oplog to determine where to place markers for when to truncate
2015-10-19T11:16:48.234+0100 I REPL     [conn1] ******
2015-10-19T11:16:48.236+0100 I REPL     [conn1] Starting replication applier threads
2015-10-19T11:16:48.238+0100 I REPL     [ReplicationExecutor] transition to RECOVERING
2015-10-19T11:16:48.239+0100 I COMMAND  [conn1] command local.oplog.rs command: replSetInitiate { replSetInitiate: undefined } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:210 locks:{ Global: { acquireCount: { r: 6, w: 4, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 1386 } }, Database: { acquireCount: { w: 2, W: 2 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 1534ms
2015-10-19T11:16:48.240+0100 I REPL     [ReplicationExecutor] transition to SECONDARY
2015-10-19T11:16:48.240+0100 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
2015-10-19T11:16:48.240+0100 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
2015-10-19T11:16:48.266+0100 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
2015-10-19T11:16:48.266+0100 I REPL     [ReplicationExecutor] transition to PRIMARY
2015-10-19T11:16:49.249+0100 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2015-10-19T11:17:08.329+0100 I NETWORK  [conn1] end connection 127.0.0.1:49216 (0 connections now open)
2015-10-19T11:17:34.558+0100 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49293 #2 (1 connection now open)
2015-10-19T11:17:34.560+0100 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49294 #3 (2 connections now open)
2015-10-19T11:20:05.666+0100 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49473 #4 (3 connections now open)
2015-10-19T11:20:05.666+0100 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49474 #5 (4 connections now open)
2015-10-19T11:20:05.846+0100 I INDEX    [conn5] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2015-10-19T11:20:05.846+0100 I INDEX    [conn5] 	 building index using bulk method
2015-10-19T11:20:05.853+0100 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
2015-10-19T11:20:05.873+0100 I INDEX    [conn5] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2015-10-19T11:20:05.873+0100 I INDEX    [conn5] 	 building index using bulk method
2015-10-19T11:20:05.885+0100 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
2015-10-19T11:20:05.916+0100 I INDEX    [conn5] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2015-10-19T11:20:05.916+0100 I INDEX    [conn5] 	 building index using bulk method
2015-10-19T11:20:05.928+0100 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
2015-10-19T11:20:05.985+0100 I INDEX    [conn5] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2015-10-19T11:20:05.985+0100 I INDEX    [conn5] 	 building index using bulk method
2015-10-19T11:20:05.991+0100 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
2015-10-19T11:20:06.030+0100 I INDEX    [conn5] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
2015-10-19T11:20:06.030+0100 I INDEX    [conn5] 	 building index using bulk method
2015-10-19T11:20:06.043+0100 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
2015-10-19T11:20:06.076+0100 I INDEX    [conn5] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
2015-10-19T11:20:06.076+0100 I INDEX    [conn5] 	 building index using bulk method
2015-10-19T11:20:06.086+0100 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
2015-10-19T11:20:06.111+0100 I INDEX    [conn5] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
2015-10-19T11:20:06.111+0100 I INDEX    [conn5] 	 building index using bulk method
2015-10-19T11:20:06.121+0100 I INDEX    [conn5] build index done.  scanned 1 total records. 0 secs
2015-10-19T11:20:06.159+0100 I INDEX    [conn5] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
2015-10-19T11:20:06.159+0100 I INDEX    [conn5] 	 building index using bulk method
2015-10-19T11:20:06.165+0100 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
2015-10-19T11:20:37.051+0100 I COMMAND  [conn5] command config.$cmd command: findAndModify { findAndModify: "lockpings", query: { _id: "MacPro.local:27017:1445249854:-672485777" }, update: { $set: { ping: new Date(1445250035703) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1445250035703) } } ntoreturn:1 ntoskip:0 keyUpdates:1 writeConflicts:0 numYields:0 reslen:405 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 1348ms
2015-10-19T11:20:37.058+0100 I COMMAND  [conn4] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "MacPro.local:27017" }, u: { $set: { _id: "MacPro.local:27017", ping: new Date(1445250036389), up: 30, waiting: false, mongoVersion: "3.2.0-rc0" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:360 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 667ms

Terminal 2 - Mongos:

Last login: Mon Oct 19 11:15:22 on ttys004
MacPro:~ jlp$ mongos --configdb config/localhost:27019
2015-10-19T11:17:34.490+0100 W SHARDING [main] running with less than 3 config servers should be done only for testing purposes and is not recommended for production
2015-10-19T11:17:34.541+0100 I SHARDING [mongosMain] MongoS version 3.2.0-rc0 starting: pid=17673 port=27017 64-bit host=MacPro.local (--help for usage)
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain] db version v3.2.0-rc0
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain] git version: bf28bd20fa507c4d8cc5919dfbbe87b7750ae8b0
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain] OpenSSL version: OpenSSL 0.9.8zg 14 July 2015
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain] allocator: system
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain] modules: enterprise 
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain] build environment:
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain]     distarch: x86_64
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain]     target_arch: x86_64
2015-10-19T11:17:34.542+0100 I CONTROL  [mongosMain] options: { sharding: { configDB: "config/localhost:27019" } }
2015-10-19T11:17:34.542+0100 I SHARDING [mongosMain] Updating config server connection string to: config/localhost:27019
2015-10-19T11:17:34.542+0100 I NETWORK  [mongosMain] Starting new replica set monitor for config/localhost:27019
2015-10-19T11:17:34.542+0100 I NETWORK  [ReplicaSetMonitorWatcher] starting
2015-10-19T11:17:34.556+0100 I SHARDING [thread1] creating distributed lock ping thread for process MacPro.local:27017:1445249854:-672485777 (sleeping for 30000ms)
2015-10-19T11:17:34.559+0100 I NETWORK  [replSetDistLockPinger] changing hosts to config/MacPro.local:27019 from config/localhost:27019
2015-10-19T11:17:34.559+0100 I SHARDING [replSetDistLockPinger] Updating config server connection string to: config/MacPro.local:27019
2015-10-19T11:20:05.667+0100 I ASIO     [NetworkInterfaceASIO] Successfully connected to MacPro.local:27019
2015-10-19T11:20:05.667+0100 I ASIO     [NetworkInterfaceASIO] Successfully connected to MacPro.local:27019
2015-10-19T11:20:05.699+0100 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: findAndModify query predicate didn't match any lock document
2015-10-19T11:20:06.169+0100 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2015-10-19T11:20:06.169+0100 I SHARDING [Balancer] about to contact config servers and shards
2015-10-19T11:20:06.170+0100 I SHARDING [Balancer] config servers and shards contacted successfully
2015-10-19T11:20:06.170+0100 I SHARDING [Balancer] balancer id: MacPro.local:27017 started
2015-10-19T11:20:06.179+0100 I NETWORK  [mongosMain] waiting for connections on port 27017
2015-10-19T11:20:06.225+0100 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 5624c3d69dc0529e378c0aeb
2015-10-19T11:20:06.296+0100 I SHARDING [Balancer] distributed lock with ts: 5624c3d69dc0529e378c0aeb' unlocked.
2015-10-19T11:20:16.328+0100 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 5624c3e09dc0529e378c0aed
2015-10-19T11:20:16.349+0100 I SHARDING [Balancer] distributed lock with ts: 5624c3e09dc0529e378c0aed' unlocked.
2015-10-19T11:20:26.373+0100 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 5624c3ea9dc0529e378c0aef
2015-10-19T11:20:26.384+0100 I SHARDING [Balancer] distributed lock with ts: 5624c3ea9dc0529e378c0aef' unlocked.
2015-10-19T11:20:37.068+0100 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 5624c3f59dc0529e378c0af1
2015-10-19T11:20:37.080+0100 I SHARDING [Balancer] distributed lock with ts: 5624c3f59dc0529e378c0af1' unlocked.
2015-10-19T11:20:41.012+0100 I NETWORK  [mongosMain] connection accepted from 127.0.0.1:49530 #1 (1 connection now open)
2015-10-19T11:20:47.220+0100 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 5624c3ff9dc0529e378c0af3
2015-10-19T11:20:47.234+0100 I SHARDING [Balancer] distributed lock with ts: 5624c3ff9dc0529e378c0af3' unlocked.
2015-10-19T11:20:59.542+0100 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 5624c40b9dc0529e378c0af5
2015-10-19T11:20:59.557+0100 I SHARDING [Balancer] distributed lock with ts: 5624c40b9dc0529e378c0af5' unlocked.

Terminal 3 - Mongo:

Last login: Sun Oct 18 12:14:14 on ttys003
MacPro:~ jlp$ mongo --port 27019
MongoDB shell version: 3.2.0-rc0
connecting to: 127.0.0.1:27019/test
Server has startup warnings: 
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] 
2015-10-19T11:16:32.865+0100 I CONTROL  [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
MongoDB Enterprise > rs.initiate()
{
	"info2" : "no configuration specified. Using a default configuration for the set",
	"me" : "MacPro.local:27019",
	"ok" : 1
}
MongoDB Enterprise config:SECONDARY> 
MongoDB Enterprise config:PRIMARY> exit
bye
MacPro:~ jlp$ mongo
MongoDB shell version: 3.2.0-rc0
connecting to: test
2015-10-19T11:17:38.744+0100 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, reason: errno:61 Connection refused
2015-10-19T11:17:38.749+0100 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:224:14
@(connect):1:6
 
exception: connect failed
MacPro:~ jlp$ date
Mon 19 Oct 2015 11:17:43 BST
MacPro:~ jlp$ date
Mon 19 Oct 2015 11:18:34 BST
MacPro:~ jlp$ mongo
MongoDB shell version: 3.2.0-rc0
connecting to: test
2015-10-19T11:18:37.507+0100 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, reason: errno:61 Connection refused
2015-10-19T11:18:37.510+0100 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:224:14
@(connect):1:6
 
exception: connect failed
MacPro:~ jlp$ date
Mon 19 Oct 2015 11:20:39 BST
MacPro:~ jlp$ mongo
MongoDB shell version: 3.2.0-rc0
connecting to: test
MongoDB Enterprise mongos> 

Comment by Kaloian Manassiev [ 16/Oct/15 ]

john.page - I am unable to reproduce the last part of your report, where mongos gets stuck at startup and refuses connections. Are you able to reproduce it or could you attach the complete logs from the config server and the mongos instance?
