[SERVER-37119] try to connect version 3.6.4 with client version 4.0.2 Created: 13/Sep/18  Updated: 16/Nov/21  Resolved: 18/Sep/18

Status: Closed
Project: Core Server
Component/s: Internal Client
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: lee mingyu Assignee: Nick Brewer
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
duplicates SERVER-34746 Segmentation fault when shard is star... Closed
Operating System: ALL
Steps To Reproduce:

----------------
reference
----------------
https://docs.mongodb.com/master/tutorial/install-mongodb-on-red-hat/

----------------
download
----------------
in https://repo.mongodb.org/yum/redhat/6/mongodb-org/4.0/x86_64/RPMS/
in https://repo.mongodb.org/yum/redhat/6Server/mongodb-org/4.0/x86_64/RPMS/
mongodb-org-shell-4.0.2-1.el6.x86_64.rpm
mongodb-org-shell-4.0.2-1.el6.x86_64.rpm

----------------
install
----------------
rpm -ivh mongodb-org-shell-4.0.2-1.el6.x86_64.rpm

--------------------------------
try to connect
--------------------------------
try to connect : /usr/bin/mongo $HOST:$PORT/admin -u dba -p'xxx' --quiet --eval "db.serverStatus().opcounters" (target mongo: version 3.6.4)

--------------------------------
target mongod down
--------------------------------
mongod.log

2018-09-13T21:46:13.118+0900 I NETWORK [conn1691] received client metadata from xxxxx conn1691: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.2" }, os: { type: "Linux", name: "CentOS release 6.5 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-642.6.2.el6.x86_64" } }
2018-09-13T21:46:13.141+0900 I ACCESS [conn1691] Successfully authenticated as principal dba on admin
2018-09-13T21:46:13.141+0900 F - [conn1691] Invalid access at address: 0
2018-09-13T21:46:13.158+0900 F - [conn1691] Got signal: 11 (Segmentation fault).

0x562d48601b71 0x562d48600d89 0x562d486013f6 0x7f8a6bb975e0 0x562d47f09403 0x562d47024f4c 0x562d47028a24 0x562d47029777 0x562d47035faa 0x562d47031957 0x562d47034d91 0x562d47f36092 0x562d470307bf 0x562d47032d05 0x562d470335fb 0x562d470319dd 0x562d47034d91 0x562d47f365f5 0x562d484cb194 0x7f8a6bb8fe25 0x7f8a6b8bd34d
----- BEGIN BACKTRACE -----

{"backtrace":[{"b":"562D46423000","o":"21DEB71","s":"_ZN5mongo15printStackTraceERSo"},{"b":"562D46423000","o":"21DDD89"},{"b":"562D46423000","o":"21DE3F6"},{"b":"7F8A6BB88000","o":"F5E0"},{"b":"562D46423000","o":"1AE6403","s":"_ZN5mongo30initializeOperationSessionInfoEPNS_16OperationContextERKNS_7BSONObjEbbb"},{"b":"562D46423000","o":"C01F4C"},{"b":"562D46423000","o":"C05A24"},{"b":"562D46423000","o":"C06777","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"562D46423000","o":"C12FAA","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"562D46423000","o":"C0E957","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"562D46423000","o":"C11D91"},{"b":"562D46423000","o":"1B13092","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"562D46423000","o":"C0D7BF","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"562D46423000","o":"C0FD05","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"562D46423000","o":"C105FB","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"562D46423000","o":"C0E9DD","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"562D46423000","o":"C11D91"},{"b":"562D46423000","o":"1B135F5"},{"b":"562D46423000","o":"20A8194"},{"b":"7F8A6BB88000","o":"7E25"},{"b":"7F8A6B7C5000","o":"F834D","s":"clone"}],"processInfo":{ "mongodbVersion" : "3.6.4", "gitVersion" : "d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-693.21.1.el7.x86_64", "version" : "#1 SMP Wed Mar 7 19:03:37 UTC 2018", "machine" : "x86_64" }, "somap" : [ { "b" : "562D46423000", "elfType" : 3, "buildId" : "9E8992AF64DDDA5CD452F1A1FFBB558210B8AD34" }, { "b" : "7FFFF5EA1000", "elfType" : 3, "buildId" : "228ADFE0D8C0852BF24F80B24803DA9E25F5B21E" }, { "b" : "7F8A6C6C8000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "FF4E72F4E574E143330FB3C66DB51613B0EC65EA" }, { "b" : "7F8A6C4C0000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "6D322588B36D2617C03C0F3B93677E62FCFFDA81" }, { "b" : "7F8A6C2BC000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1E42EBFB272D37B726F457D6FE3C33D2B094BB69" }, { "b" : "7F8A6BFBA000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "808BD35686C193F218A5AAAC6194C49214CFF379" }, { "b" : "7F8A6BDA4000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3, "buildId" : "C344A7E6783B19A5C763AC24746EC6BAD2607F28" }, { "b" : "7F8A6BB88000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "A48D21B2578A8381FBD8857802EAA660504248DC" }, { "b" : "7F8A6B7C5000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "95FF02A4BEBABC573C7827A66D447F7BABDDAA44" }, { "b" : "7F8A6C8E2000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "22FA66DA7D14C88BF36C69454A357E5F1DEFAE4E" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x562d48601b71]
mongod(+0x21DDD89) [0x562d48600d89]
mongod(+0x21DE3F6) [0x562d486013f6]
libpthread.so.0(+0xF5E0) [0x7f8a6bb975e0]
mongod(_ZN5mongo30initializeOperationSessionInfoEPNS_16OperationContextERKNS_7BSONObjEbbb+0x293) [0x562d47f09403]
mongod(+0xC01F4C) [0x562d47024f4c]
mongod(+0xC05A24) [0x562d47028a24]
mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x2B7) [0x562d47029777]
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xBA) [0x562d47035faa]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x97) [0x562d47031957]
mongod(+0xC11D91) [0x562d47034d91]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x1A2) [0x562d47f36092]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x15F) [0x562d470307bf]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0xAF5) [0x562d47032d05]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x23B) [0x562d470335fb]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x11D) [0x562d470319dd]
mongod(+0xC11D91) [0x562d47034d91]
mongod(+0x1B135F5) [0x562d47f365f5]
mongod(+0x20A8194) [0x562d484cb194]
libpthread.so.0(+0x7E25) [0x7f8a6bb8fe25]
libc.so.6(clone+0x6D) [0x7f8a6b8bd34d]
----- END BACKTRACE -----

 

Participants:

 Description   

I tried to connect to mongod with the mongo shell.

The connection succeeded, but mongod went down almost immediately.

 

 



 Comments   
Comment by lee mingyu [ 19/Sep/18 ]

Thank you for your help.

Comment by Nick Brewer [ 18/Sep/18 ]

ee400 I believe the issue you're describing is a duplicate of: SERVER-34746

As that ticket notes, starting a mongod with --shardsvr directly, outside of a sharded cluster, is not the intended use case for that option. However, the underlying bug that causes the crash has been fixed in version 3.6.5. While the usage you've described is not the intended one, if this is something you need, I suggest upgrading your 3.6 nodes to version 3.6.8.
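As a sketch of the reproduction path SERVER-34746 describes (the dbpath, port, and log path below are illustrative assumptions, not values from this ticket):

```shell
# Start a standalone 3.6.4 mongod with the shard-server role, outside any
# sharded cluster -- the unsupported configuration that exposed the bug.
mongod --shardsvr --dbpath /tmp/shardsvr-repro --port 27018 \
       --fork --logpath /tmp/shardsvr-repro/mongod.log

# Connecting with a 4.0.2 shell crashed a 3.6.4 server in this state;
# 3.6.5 and later handle the same connection cleanly.
mongo localhost:27018/admin --eval "db.serverStatus().opcounters"
```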

-Nick

Comment by lee mingyu [ 18/Sep/18 ]

I also have the same option set on a mongod 3.6.5 server,

but that server does not go down when I connect to it with the 4.0.2 shell.

Comment by lee mingyu [ 18/Sep/18 ]

I also have the same option set on a mongod 3.4.4 server,

but that server does not go down when I connect to it with the 4.0.2 shell.

Comment by lee mingyu [ 18/Sep/18 ]

Even though this is a replica set, not a sharded cluster, I used this option:

"sharding": { "clusterRole": "shardsvr" }

After I removed the option, restarted, and retested, I was no longer able to reproduce the crash.

I have experienced some of this trouble before (with the sharding XXXXX option in a replica set).
I would like mongod to log a warning message when it is not properly configured.
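In mongod.conf terms, the change described above amounts to deleting the sharding block on a replica-set-only member. A minimal sketch, with values taken from the getCmdLineOpts output in this ticket (treat the file contents as illustrative, not the reporter's actual file):

```yaml
# mongod.conf (excerpt) -- replica-set member that should NOT carry a shard role
replication:
  oplogSizeMB: 10000
  replSetName: datalake_replica

# Remove this block unless the node really is a shard in a sharded cluster;
# clusterRole: shardsvr on a plain replica set is what exposed SERVER-34746.
# sharding:
#   clusterRole: shardsvr
```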

Comment by lee mingyu [ 18/Sep/18 ]

this is output

{
    "argv": [
        "/data/MongoDB/bin/mongod",
        "-f",
        "/data/datalake_replica_20012/conf/mongod.conf"
    ],
    "parsed": {
        "config": "/data/datalake_replica_20012/conf/mongod.conf",
        "net": {
            "bindIp": "0.0.0.0",
            "port": 20012,
            "unixDomainSocket": { "enabled": true, "pathPrefix": "/data/datalake_replica_20012/tmp" }
        },
        "operationProfiling": { "mode": "slowOp", "slowOpThresholdMs": 1000 },
        "processManagement": { "fork": true, "pidFilePath": "/data/datalake_replica_20012/tmp/mongod.pid" },
        "replication": { "oplogSizeMB": 10000, "replSetName": "datalake_replica" },
        "security": { "authorization": "enabled", "javascriptEnabled": true, "keyFile": "/data/datalake_replica_20012/tmp/auth.key" },
        "sharding": { "clusterRole": "shardsvr" },
        "storage": {
            "dbPath": "/data/datalake_replica_20012/data",
            "directoryPerDB": true,
            "engine": "wiredTiger",
            "journal": { "commitIntervalMs": 100, "enabled": true },
            "syncPeriodSecs": 60,
            "wiredTiger": {
                "collectionConfig": { "blockCompressor": "snappy" },
                "engineConfig": { "cacheSizeGB": 20, "journalCompressor": "snappy" },
                "indexConfig": { "prefixCompression": true }
            }
        },
        "systemLog": { "destination": "file", "logAppend": false, "path": "/data/datalake_replica_20012/logs/mongod.log" }
    },
    "ok": 1
}

 

I also executed:

db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{
    "featureCompatibilityVersion": { "version": "3.4" },
    "ok": 1
}

 

 

Comment by Nick Brewer [ 14/Sep/18 ]

ee400 6 / 6Server is a Red Hat convention; it is explained in greater detail here (under the "Red Hat Software Repositories" heading).

I haven't been able to recreate this in my testing with a 3.6.4 server and a 4.0.2 shell:

[4.0.2]# bin/mongo 127.0.0.1:27017/admin -u root -p abc123 --quiet  --eval "db.serverStatus().opcounters"
{
	"insert" : 0,
	"query" : 1,
	"update" : 0,
	"delete" : 0,
	"getmore" : 0,
	"command" : 8
}

I'll need the getCmdLineOpts output to get a more accurate reproduction.

-Nick

Comment by lee mingyu [ 14/Sep/18 ]

To be honest,

I manage a variety of versions:

MongoDB 2.2.3
MongoDB 2.6
MongoDB 3.0
MongoDB 3.2
MongoDB 3.4
MongoDB 3.4.11
MongoDB 3.4.14
MongoDB 3.4.4
MongoDB 3.6.4
MongoDB 3.6.5

I tested the connection "/usr/bin/mongo HOST:PORT/admin -u dba -p'XXX' --quiet --eval "db.serverStatus().opcounters"" with the 4.0.2 shell against all of the versions above 3.0.
mongod goes down only on 3.6.4.

 

Comment by lee mingyu [ 14/Sep/18 ]

I have not tested that yet; I did not think it was worth testing.

Because I manage a variety of MongoDB versions, I cannot always match the shell version exactly to the server version.
I would like to know where I can find a certified compatibility matrix across MongoDB versions.

I have two questions:
1) I downloaded the mongo shell from
      https://repo.mongodb.org/yum/redhat/6/mongodb-org/4.0/x86_64/RPMS/ or https://repo.mongodb.org/yum/redhat/6Server/mongodb-org/4.0/x86_64/RPMS/

      What is the difference between /6/ and /6Server/?
      What does the 6 mean? (Is it the OS version?)

2) You can reproduce the same error if you test it yourself (connect to a 3.6.4 mongod with a 4.0.2 mongo shell).

 

Comment by Nick Brewer [ 13/Sep/18 ]

ee400 Thanks for your report. I'm curious to know whether you encounter this issue when using the same (3.6.4) shell version. Additionally, it would be useful to see the output of:

db.adminCommand( { getCmdLineOpts: 1  } )

Thanks,
-Nick

Generated at Thu Feb 08 04:45:01 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.