2020-06-06T10:53:15.123+0800 I NETWORK [listener] connection accepted from 10.10.*.*ip:39446 #23 (7 connections now open)
2020-06-06T10:53:15.123+0800 I NETWORK [conn23] received client metadata from 10.10.*.*ip:39446 conn23: { driver: { name: "MongoDB Internal Client", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:53:15.186+0800 I ACCESS [conn23] Successfully authenticated as principal __system on local from client 10.10.*.*ip:39446
2020-06-06T10:53:15.187+0800 I NETWORK [conn23] end connection 10.10.*.*ip:39446 (6 connections now open)
2020-06-06T10:53:24.624+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46374 #24 (7 connections now open)
2020-06-06T10:53:24.624+0800 I NETWORK [conn24] received client metadata from 10.*.*.63ip:46374 conn24: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17" }
2020-06-06T10:53:24.624+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46376 #25 (8 connections now open)
2020-06-06T10:53:24.625+0800 I NETWORK [conn25] received client metadata from 10.*.*.63ip:46376 conn25: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17", application: { name: "mongorestore" } }
2020-06-06T10:53:24.625+0800 I SHARDING [conn25] Marking collection admin.system.users as collection version: <unsharded>
2020-06-06T10:53:24.641+0800 I ACCESS [conn25] Successfully authenticated as principal root on admin from client 10.*.*.63ip:46376
2020-06-06T10:53:24.714+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46378 #26 (9 connections now open)
2020-06-06T10:53:24.715+0800 I NETWORK [conn26] received client metadata from 10.*.*.63ip:46378 conn26: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17", application: { name: "mongorestore" } }
2020-06-06T10:53:24.743+0800 I ACCESS [conn26] Successfully authenticated as principal root on admin from client 10.*.*.63ip:46378
2020-06-06T10:53:24.826+0800 E STORAGE [conn26] WiredTiger error (22) [1591412004:826136][14303:0x7faab6cb3700], WT_SESSION.timestamp_transaction: __wt_txn_set_commit_timestamp, 683: commit timestamp (1590732316, 2) is less than the oldest timestamp (1591411996, 1): Invalid argument Raw: [1591412004:826136][14303:0x7faab6cb3700], WT_SESSION.timestamp_transaction: __wt_txn_set_commit_timestamp, 683: commit timestamp (1590732316, 2) is less than the oldest timestamp (1591411996, 1): Invalid argument
2020-06-06T10:53:24.826+0800 F - [conn26] Fatal assertion 39001 BadValue: timestamp_transaction 22: Invalid argument at src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp 1323
2020-06-06T10:53:24.826+0800 F - [conn26] ***aborting after fassert() failure
2020-06-06T10:53:24.855+0800 F - [conn26] Got signal: 6 (Aborted).
 0x559cebab59f1 0x559cebab500c 0x559cebab5096 0x7faacccff890 0x7faacc93ae97 0x7faacc93c801 0x559ce9f0d3a7 0x559ce9c468fd 0x559cea00b324 0x559cea7d8cb6 0x559cea7dacbc 0x559cea6d040c 0x559cea6d07f0 0x559cea6d0ea2 0x559cea6c2c7f 0x559cea6c0ded 0x559cea3848e8 0x559cea386b54 0x559cea3878ea 0x559cea37546c 0x559cea3811fc 0x559cea37e4df 0x559cea38016c 0x559ceb1db432 0x559cea37b3ad 0x559cea37cc23 0x559cea37d7d6 0x559cea37e43b 0x559cea38016c 0x559ceb1db89b 0x559ceb8311b5 0x559ceb831214 0x7faacccf46db 0x7faacca1d88f
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"559CE91F9000","o":"28BC9F1","s":"_ZN5mongo15printStackTraceERSo"},{"b":"559CE91F9000","o":"28BC00C"},{"b":"559CE91F9000","o":"28BC096"},{"b":"7FAACCCED000","o":"12890"},{"b":"7FAACC8FC000","o":"3EE97","s":"gsignal"},{"b":"7FAACC8FC000","o":"40801","s":"abort"},{"b":"559CE91F9000","o":"D143A7","s":"_ZN5mongo42fassertFailedWithStatusNoTraceWithLocationEiRKNS_6StatusEPKcj"},{"b":"559CE91F9000","o":"A4D8FD"},{"b":"559CE91F9000","o":"E12324","s":"_ZN5mongo21WiredTigerRecordStore13insertRecordsEPNS_16OperationContextEPSt6vectorINS_6RecordESaIS4_EERKS3_INS_9TimestampESaIS8_EE"},{"b":"559CE91F9000","o":"15DFCB6","s":"_ZN5mongo14CollectionImpl16_insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_15InsertStatementESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugE"},{"b":"559CE91F9000","o":"15E1CBC","s":"_ZN5mongo14CollectionImpl15insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_15InsertStatementESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugEb"},{"b":"559CE91F9000","o":"14D740C"},{"b":"559CE91F9000","o":"14D77F0"},{"b":"559CE91F9000","o":"14D7EA2","s":"_ZN5mongo14performInsertsEPNS_16OperationContextERKNS_9write_ops6InsertEb"},{"b":"559CE91F9000","o":"14C9C7F"},{"b":"559CE91F9000","o":"14C7DED"},{"b":"559CE91F9000","o":"118B8E8"},{"b":"559CE91F9000","o":"118DB54"},{"b":"559CE91F9000","o":"118E8EA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE"},{"b":"559CE91F9000","o":"117C46C","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"559CE91F9000","o":"11881FC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"559CE91F9000","o":"11854DF","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"559CE91F9000","o":"118716C"},{"b":"559CE91F9000","o":"1FE2432","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsEN
S0_23ServiceExecutorTaskNameE"},{"b":"559CE91F9000","o":"11823AD","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"559CE91F9000","o":"1183C23","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"559CE91F9000","o":"11847D6","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"559CE91F9000","o":"118543B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"559CE91F9000","o":"118716C"},{"b":"559CE91F9000","o":"1FE289B"},{"b":"559CE91F9000","o":"26381B5"},{"b":"559CE91F9000","o":"2638214"},{"b":"7FAACCCED000","o":"76DB"},{"b":"7FAACC8FC000","o":"12188F","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.7", "gitVersion" : "51d9fe12b5d19720e72dcd7db0f2f17dd9a19212", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "5.3.0-1019-aws", "version" : "#21~18.04.1-Ubuntu SMP Mon May 11 12:33:03 UTC 2020", "machine" : "x86_64" }, "somap" : [ { "b" : "559CE91F9000", "elfType" : 3, "buildId" : "1A92C545D9CCEA15DE44DF79F143CD0B8F3213EE" }, { "b" : "7FFCA1B9F000", "path" : "linux-vdso.so.1", "elfType" : 3, "buildId" : "71160CE48D1E54E4B4B15BCC115EE418D74D17BE" }, { "b" : "7FAACE241000", "path" : "/usr/lib/x86_64-linux-gnu/libcurl.so.4", "elfType" : 3, "buildId" : "1C6BC2C0699CE0F7E848CA0B267E0CF07553F6AB" }, { "b" : "7FAACE026000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "390E9CC4C215314B6D8ADE6D6E28F8518418039C" }, { "b" : "7FAACDB5B000", "path" : "/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1", "elfType" : 3, "buildId" : "812108064C0E61DF52271B3147248999619E7AFF" }, { "b" : "7FAACD8CE000", "path" : "/usr/lib/x86_64-linux-gnu/libssl.so.1.1", "elfType" : 3, "buildId" : "881B3F4FDA206DF3909B141DC7410A08C2DD4B90" }, { "b" : "7FAACD6CA000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : 
"25AD56E902E23B490A9CCDB08A9744D89CB95BCC" }, { "b" : "7FAACD4C2000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "9826FBDF57ED7D6965131074CB3C08B1009C1CD8" }, { "b" : "7FAACD124000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "A33761AB8FB485311B3C85BF4253099D7CABE653" }, { "b" : "7FAACCF0C000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "679F3AE11120EC7C483BC9295345D836F5C104F7" }, { "b" : "7FAACCCED000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "28C6AADE70B2D40D1F0F3D0A1A0CAD1AB816448F" }, { "b" : "7FAACC8FC000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "B417C0BA7CC5CF06D1D1BED6652CEDB9253C60D0" }, { "b" : "7FAACE4C0000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "64DF1B961228382FE18684249ED800AB1DCEAAD4" }, { "b" : "7FAACC6D7000", "path" : "/usr/lib/x86_64-linux-gnu/libnghttp2.so.14", "elfType" : 3, "buildId" : "4F00E5207693FDC249DA42EC6472ACA6A7B929AE" }, { "b" : "7FAACC4BA000", "path" : "/usr/lib/x86_64-linux-gnu/libidn2.so.0", "elfType" : 3, "buildId" : "EE6E9462BA2491F4EE8C4E52C3323274A9366614" }, { "b" : "7FAACC29E000", "path" : "/usr/lib/x86_64-linux-gnu/librtmp.so.1", "elfType" : 3, "buildId" : "69465D8AA6B19086ABF2455A703F9168BF82A69F" }, { "b" : "7FAACC090000", "path" : "/usr/lib/x86_64-linux-gnu/libpsl.so.5", "elfType" : 3, "buildId" : "CDAF1F1946846941F9D06414EC8C812D131A168E" }, { "b" : "7FAACBE45000", "path" : "/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "00F419F64B0E70D8C5EEF7050369AA40B2A6E090" }, { "b" : "7FAACBBF3000", "path" : "/usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2", "elfType" : 3, "buildId" : "F406CC6CE3F33FC19238B7AC4A37D831F1BF0F06" }, { "b" : "7FAACB9E5000", "path" : "/usr/lib/x86_64-linux-gnu/liblber-2.4.so.2", "elfType" : 3, "buildId" : "8331348F98447F4D00BF7469EB6E2C27318B3EAC" }, { "b" : "7FAACB7C8000", "path" : 
"/lib/x86_64-linux-gnu/libz.so.1", "elfType" : 3, "buildId" : "EF3E006DFE3132A41D4D4DC0E407D6EA658E11C4" }, { "b" : "7FAACB44A000", "path" : "/usr/lib/x86_64-linux-gnu/libunistring.so.2", "elfType" : 3, "buildId" : "0E2784298E7D3F4D894FE130ACEFA77C3E624F72" }, { "b" : "7FAACB0E4000", "path" : "/usr/lib/x86_64-linux-gnu/libgnutls.so.30", "elfType" : 3, "buildId" : "1954013F9BBF9FF3A328A8E516481A2BBD8658EB" }, { "b" : "7FAACAEB0000", "path" : "/usr/lib/x86_64-linux-gnu/libhogweed.so.4", "elfType" : 3, "buildId" : "842BDF0B0EAAB82E19F1EABFC38769F4040FBE31" }, { "b" : "7FAACAC7A000", "path" : "/usr/lib/x86_64-linux-gnu/libnettle.so.6", "elfType" : 3, "buildId" : "C20D4B3BA13FCDCC3BF6857689BA9FC70BE3F6A5" }, { "b" : "7FAACA9F9000", "path" : "/usr/lib/x86_64-linux-gnu/libgmp.so.10", "elfType" : 3, "buildId" : "D40EA9B5EC5BC46799E4A412319617BD38BE9341" }, { "b" : "7FAACA723000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5.so.3", "elfType" : 3, "buildId" : "69FBCF425EE6DF03DE93B82FBC2FC33790E68A96" }, { "b" : "7FAACA4F1000", "path" : "/usr/lib/x86_64-linux-gnu/libk5crypto.so.3", "elfType" : 3, "buildId" : "F400D5D643A7F9696DF0E6148FA99BEE6C1BDDF7" }, { "b" : "7FAACA2ED000", "path" : "/lib/x86_64-linux-gnu/libcom_err.so.2", "elfType" : 3, "buildId" : "17107881DF65C66B4C6D38CAB37C285FA44663BD" }, { "b" : "7FAACA0E2000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5support.so.0", "elfType" : 3, "buildId" : "D78D71E8E016A534281B25B97CD7E5E9DB5FE00A" }, { "b" : "7FAAC9EC7000", "path" : "/usr/lib/x86_64-linux-gnu/libsasl2.so.2", "elfType" : 3, "buildId" : "6E98533B96F674F77C1BD83AA3565D974D9C4372" }, { "b" : "7FAAC9C86000", "path" : "/usr/lib/x86_64-linux-gnu/libgssapi.so.3", "elfType" : 3, "buildId" : "A1A98DB481968073636BBAECB561A3EA8ED198AE" }, { "b" : "7FAAC9957000", "path" : "/usr/lib/x86_64-linux-gnu/libp11-kit.so.0", "elfType" : 3, "buildId" : "8DBD451EA5651283905E16FA7DFA9908688893A3" }, { "b" : "7FAAC9744000", "path" : "/usr/lib/x86_64-linux-gnu/libtasn1.so.6", "elfType" 
: 3, "buildId" : "6036B89A3BB671B32E01464C0C82BFA016186352" }, { "b" : "7FAAC9540000", "path" : "/lib/x86_64-linux-gnu/libkeyutils.so.1", "elfType" : 3, "buildId" : "F463E107B099910463BC32E837C73D341A52C27B" }, { "b" : "7FAAC9337000", "path" : "/usr/lib/x86_64-linux-gnu/libheimntlm.so.0", "elfType" : 3, "buildId" : "C2376C5B831991591F1A67B976758185F86896D8" }, { "b" : "7FAAC90AA000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5.so.26", "elfType" : 3, "buildId" : "69BDEE5FA0FEEDF317308BE850F78761861D520A" }, { "b" : "7FAAC8E08000", "path" : "/usr/lib/x86_64-linux-gnu/libasn1.so.8", "elfType" : 3, "buildId" : "315D74995AAA32DE4D15BA25F335066988B1B230" }, { "b" : "7FAAC8BD2000", "path" : "/usr/lib/x86_64-linux-gnu/libhcrypto.so.4", "elfType" : 3, "buildId" : "6673972A1C24A89EBAFBAE696188A4CB26C6DDEB" }, { "b" : "7FAAC89BC000", "path" : "/usr/lib/x86_64-linux-gnu/libroken.so.18", "elfType" : 3, "buildId" : "430827C33259C12248CF44B91A9A9821114376F5" }, { "b" : "7FAAC87B4000", "path" : "/usr/lib/x86_64-linux-gnu/libffi.so.6", "elfType" : 3, "buildId" : "3555B5F599C9787DFDDBF9E8DF6F706B9044D985" }, { "b" : "7FAAC858B000", "path" : "/usr/lib/x86_64-linux-gnu/libwind.so.0", "elfType" : 3, "buildId" : "93A0931B1C2818F0EA224CE6FE5E31E84A9B55BB" }, { "b" : "7FAAC837C000", "path" : "/usr/lib/x86_64-linux-gnu/libheimbase.so.1", "elfType" : 3, "buildId" : "669D4CCE42FA4382796EFFCF0C16F459F4382C4C" }, { "b" : "7FAAC8132000", "path" : "/usr/lib/x86_64-linux-gnu/libhx509.so.5", "elfType" : 3, "buildId" : "4B80C543356EE0AF9039EFE7C9EA1CC1F74C426A" }, { "b" : "7FAAC7E29000", "path" : "/usr/lib/x86_64-linux-gnu/libsqlite3.so.0", "elfType" : 3, "buildId" : "66E2050B24B18B3F95154392560A17994D1690F2" }, { "b" : "7FAAC7BF1000", "path" : "/lib/x86_64-linux-gnu/libcrypt.so.1", "elfType" : 3, "buildId" : "810686AF0D5FD350A4FB1CC4B5AFF44A05C102CB" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x559cebab59f1]
 mongod(+0x28BC00C) [0x559cebab500c]
 mongod(+0x28BC096) [0x559cebab5096]
 libpthread.so.0(+0x12890) [0x7faacccff890]
 libc.so.6(gsignal+0xC7) [0x7faacc93ae97]
 libc.so.6(abort+0x141) [0x7faacc93c801]
 mongod(_ZN5mongo42fassertFailedWithStatusNoTraceWithLocationEiRKNS_6StatusEPKcj+0x0) [0x559ce9f0d3a7]
 mongod(+0xA4D8FD) [0x559ce9c468fd]
 mongod(_ZN5mongo21WiredTigerRecordStore13insertRecordsEPNS_16OperationContextEPSt6vectorINS_6RecordESaIS4_EERKS3_INS_9TimestampESaIS8_EE+0x34) [0x559cea00b324]
 mongod(_ZN5mongo14CollectionImpl16_insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_15InsertStatementESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugE+0x226) [0x559cea7d8cb6]
 mongod(_ZN5mongo14CollectionImpl15insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_15InsertStatementESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugEb+0x1EC) [0x559cea7dacbc]
 mongod(+0x14D740C) [0x559cea6d040c]
 mongod(+0x14D77F0) [0x559cea6d07f0]
 mongod(_ZN5mongo14performInsertsEPNS_16OperationContextERKNS_9write_ops6InsertEb+0x5F2) [0x559cea6d0ea2]
 mongod(+0x14C9C7F) [0x559cea6c2c7f]
 mongod(+0x14C7DED) [0x559cea6c0ded]
 mongod(+0x118B8E8) [0x559cea3848e8]
 mongod(+0x118DB54) [0x559cea386b54]
 mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x41A) [0x559cea3878ea]
 mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x559cea37546c]
 mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x559cea3811fc]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x559cea37e4df]
 mongod(+0x118716C) [0x559cea38016c]
 mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x559ceb1db432]
 mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x559cea37b3ad]
 mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x753) [0x559cea37cc23]
 mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x316) [0x559cea37d7d6]
 mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x559cea37e43b]
 mongod(+0x118716C) [0x559cea38016c]
 mongod(+0x1FE289B) [0x559ceb1db89b]
 mongod(+0x26381B5) [0x559ceb8311b5]
 mongod(+0x2638214) [0x559ceb831214]
 libpthread.so.0(+0x76DB) [0x7faacccf46db]
 libc.so.6(clone+0x3F) [0x7faacca1d88f]
----- END BACKTRACE -----
2020-06-06T10:54:51.824+0800 I CONTROL [main] ***** SERVER RESTARTED *****
2020-06-06T10:54:51.825+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-06-06T10:54:51.835+0800 W ASIO [main] No TransportLayer configured during NetworkInterface startup
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten] MongoDB starting : pid=14500 port=27017 dbpath=/fmApplication/mongo-copytrade3/data 64-bit host=pro-copytrade3-mongo-0-63
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten] db version v4.2.7
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten] git version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1  11 Sep 2018
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten] allocator: tcmalloc
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten] modules: none
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten] build environment:
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten]     distmod: ubuntu1804
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten]     distarch: x86_64
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten]     target_arch: x86_64
2020-06-06T10:54:51.907+0800 I CONTROL [initandlisten] options: { config: "/fmApplication/mongo-copytrade3/conf/mongod.conf", net: { bindIp: "*", port: 27017 }, processManagement: { fork: true, pidFilePath: 
"/fmApplication/mongo-copytrade3/mongod.pid", timeZoneInfo: "/usr/share/zoneinfo" }, replication: { replSetName: "mongo-copytrade3" }, security: { authorization: "enabled", clusterAuthMode: "keyFile", keyFile: "/fmApplication/mongo-copytrade3/repl_set.key" }, storage: { dbPath: "/fmApplication/mongo-copytrade3/data", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/fmApplication/mongo-copytrade3/logs/mongod.log" } }
2020-06-06T10:54:51.907+0800 W STORAGE [initandlisten] Detected unclean shutdown - /fmApplication/mongo-copytrade3/data/mongod.lock is not empty.
2020-06-06T10:54:51.907+0800 I STORAGE [initandlisten] Detected data files in /fmApplication/mongo-copytrade3/data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2020-06-06T10:54:51.907+0800 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2020-06-06T10:54:51.907+0800 I STORAGE [initandlisten]
2020-06-06T10:54:51.907+0800 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-06-06T10:54:51.907+0800 I STORAGE [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2020-06-06T10:54:51.907+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3363M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2020-06-06T10:54:52.438+0800 I STORAGE [initandlisten] WiredTiger message [1591412092:438730][14500:0x7f5ca7344b00], txn-recover: Recovering log 6 through 7
2020-06-06T10:54:52.484+0800 I STORAGE [initandlisten] WiredTiger message [1591412092:484924][14500:0x7f5ca7344b00], txn-recover: Recovering log 7 through 7
2020-06-06T10:54:52.563+0800 I STORAGE [initandlisten] WiredTiger message [1591412092:563364][14500:0x7f5ca7344b00], txn-recover: Main recovery loop: starting at 6/23808 to 7/256
2020-06-06T10:54:52.563+0800 I STORAGE [initandlisten] WiredTiger message [1591412092:563711][14500:0x7f5ca7344b00], txn-recover: Recovering log 6 through 7
2020-06-06T10:54:52.619+0800 I STORAGE [initandlisten] WiredTiger message [1591412092:619400][14500:0x7f5ca7344b00], file:collection-10-2993614265812229004.wt, txn-recover: Recovering log 7 through 7
2020-06-06T10:54:52.664+0800 I STORAGE [initandlisten] WiredTiger message [1591412092:664350][14500:0x7f5ca7344b00], file:collection-10-2993614265812229004.wt, txn-recover: Set global recovery timestamp: (1591411951, 1)
2020-06-06T10:54:52.677+0800 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(1591411951, 1)
2020-06-06T10:54:52.682+0800 I STORAGE [initandlisten] Starting OplogTruncaterThread local.oplog.rs
2020-06-06T10:54:52.682+0800 I STORAGE [initandlisten] The size storer reports that the oplog contains 149918 records totaling to 441427594 bytes
2020-06-06T10:54:52.682+0800 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation
2020-06-06T10:54:52.682+0800 I STORAGE [initandlisten] Sampling from the oplog between Jun  6 00:11:20:1 and Jun  6 10:53:21:1 to determine where to place markers for truncation
2020-06-06T10:54:52.682+0800 I STORAGE [initandlisten] Taking 89 samples and assuming that each section of oplog contains approximately 16841 records totaling to 49587655 bytes
2020-06-06T10:54:52.787+0800 I STORAGE [initandlisten] Placing a marker at optime Jun  6 05:19:15:1496
2020-06-06T10:54:52.787+0800 I STORAGE [initandlisten] Placing a marker at optime Jun  6 05:19:19:138
2020-06-06T10:54:52.787+0800 I STORAGE [initandlisten] Placing a marker at optime Jun  6 05:19:23:49
2020-06-06T10:54:52.788+0800 I STORAGE [initandlisten] Placing a marker at optime Jun  6 06:01:24:1727
2020-06-06T10:54:52.788+0800 I STORAGE [initandlisten] Placing a marker at optime Jun  6 06:01:30:3662
2020-06-06T10:54:52.788+0800 I STORAGE [initandlisten] Placing a marker at optime Jun  6 06:01:33:2025
2020-06-06T10:54:52.788+0800 I STORAGE [initandlisten] Placing a marker at optime Jun  6 08:49:57:7759
2020-06-06T10:54:52.788+0800 I STORAGE [initandlisten] Placing a marker at optime Jun  6 08:49:59:3014
2020-06-06T10:54:52.788+0800 I STORAGE [initandlisten] WiredTiger record store oplog processing took 105ms
2020-06-06T10:54:52.790+0800 I STORAGE [initandlisten] Timestamp monitor starting
2020-06-06T10:54:52.792+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-06T10:54:52.792+0800 I CONTROL [initandlisten]
2020-06-06T10:54:52.792+0800 I CONTROL [initandlisten]
2020-06-06T10:54:52.792+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 30826 processes, 655350 files. Number of processes should be at least 327675 : 0.5 times number of files.
2020-06-06T10:54:52.796+0800 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2020-06-06T10:54:52.800+0800 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
2020-06-06T10:54:52.800+0800 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2020-06-06T10:54:52.800+0800 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2020-06-06T10:54:52.802+0800 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2020-06-06T10:54:52.802+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/fmApplication/mongo-copytrade3/data/diagnostic.data'
2020-06-06T10:54:52.803+0800 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
2020-06-06T10:54:52.803+0800 I SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
2020-06-06T10:54:52.807+0800 I REPL [initandlisten] Rollback ID is 2
2020-06-06T10:54:52.809+0800 I REPL [initandlisten] Recovering from stable timestamp: Timestamp(1591411951, 1) (top of oplog: { ts: Timestamp(1591412001, 1), t: 8 }, appliedThrough: { ts: Timestamp(0, 0), t: -1 }, TruncateAfter: Timestamp(0, 0))
2020-06-06T10:54:52.809+0800 I REPL [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1591411951, 1)
2020-06-06T10:54:52.809+0800 I REPL [initandlisten] Replaying stored operations from Timestamp(1591411951, 1) (inclusive) to Timestamp(1591412001, 1) (inclusive).
2020-06-06T10:54:52.809+0800 I SHARDING [initandlisten] Marking collection local.oplog.rs as collection version: <unsharded>
2020-06-06T10:54:52.810+0800 I REPL [initandlisten] Applied 5 operations in 1 batches. Last operation applied with optime: { ts: Timestamp(1591412001, 1), t: 8 }
2020-06-06T10:54:52.811+0800 I SHARDING [initandlisten] Marking collection config.transactions as collection version: <unsharded>
2020-06-06T10:54:52.813+0800 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2020-06-06T10:54:52.813+0800 I SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
2020-06-06T10:54:52.813+0800 I CONTROL [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
2020-06-06T10:54:52.813+0800 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
2020-06-06T10:54:52.813+0800 I NETWORK [listener] Listening on 0.0.0.0
2020-06-06T10:54:52.813+0800 I NETWORK [listener] waiting for connections on port 27017
2020-06-06T10:54:52.919+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45282 #3 (1 connection now open)
2020-06-06T10:54:52.919+0800 I NETWORK [conn3] received client metadata from 10.*****.ip:45282 conn3: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:54:52.932+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45284 #4 (2 connections now open)
2020-06-06T10:54:52.932+0800 I NETWORK [conn4] received client metadata from 10.*****.ip:45284 conn4: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:54:52.948+0800 I REPL [replexec-0]
2020-06-06T10:54:52.948+0800 I REPL [replexec-0] ** WARNING: This replica set has a Primary-Secondary-Arbiter architecture, but readConcern:majority is enabled
2020-06-06T10:54:52.948+0800 I REPL [replexec-0] **          for this node. This is not a recommended configuration. Please see
2020-06-06T10:54:52.948+0800 I REPL [replexec-0] **          https://dochub.mongodb.org/core/psa-disable-rc-majority
2020-06-06T10:54:52.948+0800 I REPL [replexec-0]
2020-06-06T10:54:52.948+0800 I REPL [replexec-0] New replica set config in use: { _id: "mongo-copytrade3", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "mongo-copytrade3-pri.followme.space:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 10.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongo-copytrade3-sec.followme.space:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 5.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongo-****************:27017", arbiterOnly: true, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5eda6ea849ff46fdfdcfcfdd') } }
2020-06-06T10:54:52.948+0800 I REPL [replexec-0] This node is mongo-copytrade3-pri.followme.space:27017 in the config
2020-06-06T10:54:52.948+0800 I REPL [replexec-0] transition to STARTUP2 from STARTUP
2020-06-06T10:54:52.948+0800 I REPL [replexec-0] Starting replication storage threads
2020-06-06T10:54:52.948+0800 I CONNPOOL [Replication] Connecting to mongo-****************:27017
2020-06-06T10:54:52.949+0800 I CONNPOOL [Replication] Connecting to mongo-copytrade3-sec.followme.space:27017
2020-06-06T10:54:52.951+0800 I REPL [replexec-0] transition to RECOVERING from STARTUP2
2020-06-06T10:54:52.951+0800 I REPL [replexec-0] Starting replication fetcher thread
2020-06-06T10:54:52.951+0800 I REPL [replexec-0] Starting replication applier thread
2020-06-06T10:54:52.951+0800 I REPL [replexec-0] Starting replication reporter thread
2020-06-06T10:54:52.951+0800 I REPL [rsSync-0] Starting oplog application
2020-06-06T10:54:52.951+0800 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
2020-06-06T10:54:52.951+0800 I REPL [rsSync-0] Resetting sync source to empty, which was :27017
2020-06-06T10:54:52.952+0800 I REPL [rsBackgroundSync] waiting for 4 pings from other members before syncing
2020-06-06T10:54:53.012+0800 I ACCESS [conn3] Successfully authenticated as principal __system on local from client 10.*****.ip:45282
2020-06-06T10:54:53.017+0800 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
2020-06-06T10:54:53.041+0800 I ACCESS [conn4] Successfully authenticated as principal __system on local from client 10.*****.ip:45284
2020-06-06T10:54:53.042+0800 I REPL [replexec-1] Member mongo-****************:27017 is now in state ARBITER
2020-06-06T10:54:53.043+0800 I REPL [replexec-0] Member mongo-copytrade3-sec.followme.space:27017 is now in state PRIMARY
2020-06-06T10:54:53.043+0800 I ELECTION [replexec-0] Scheduling priority takeover at 2020-06-06T10:55:04.441+0800
2020-06-06T10:54:53.103+0800 I NETWORK [listener] connection accepted from 10.10.*.*ip:39986 #7 (3 connections now open)
2020-06-06T10:54:53.104+0800 I NETWORK [conn7] received client metadata from 10.10.*.*ip:39986 conn7: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:54:53.120+0800 I NETWORK [listener] connection accepted from 10.10.*.*ip:39988 #8 (4 connections now open)
2020-06-06T10:54:53.120+0800 I NETWORK [conn8] received client metadata from 10.10.*.*ip:39988 conn8: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:54:53.203+0800 I SHARDING [monitoring-keys-for-HMAC] Marking collection admin.system.keys as collection version: <unsharded>
2020-06-06T10:54:53.206+0800 I ACCESS [conn7] Successfully authenticated as principal __system on local from client 10.10.*.*ip:39986
2020-06-06T10:54:53.223+0800 I ACCESS [conn8] Successfully authenticated as principal __system on local from client 10.10.*.*ip:39988
2020-06-06T10:54:53.952+0800 I REPL [rsBackgroundSync] sync source candidate: mongo-copytrade3-sec.followme.space:27017
2020-06-06T10:54:53.952+0800 I CONNPOOL [RS] Connecting to mongo-copytrade3-sec.followme.space:27017
2020-06-06T10:54:53.966+0800 I REPL [rsBackgroundSync] Changed sync source from empty to mongo-copytrade3-sec.followme.space:27017
2020-06-06T10:54:53.968+0800 I SHARDING [rsSync-0] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
2020-06-06T10:54:55.299+0800 I NETWORK [listener] connection accepted from 172.30.0.5:61259 #11 (5 connections now open)
2020-06-06T10:54:55.300+0800 I NETWORK [conn11] received client metadata from 172.30.0.5:61259 conn11: { application: { name: "Navicat" }, driver: { name: "mongoc", version: "1.14.0" }, os: { type: "Darwin", name: "macOS", version: "19.4.0", architecture: "x86_64" }, platform: "cfg=0x00d6a0e9 posix=200112 stdc=201112 CC=clang 7.0.0 (clang-700.1.76) CFLAGS="" LDFLAGS=""" }
2020-06-06T10:55:04.441+0800 I REPL [replexec-0] Canceling priority takeover callback
2020-06-06T10:55:04.441+0800 I ELECTION [replexec-0] Starting an election for a priority takeover
2020-06-06T10:55:04.441+0800 I ELECTION [replexec-0] conducting a dry run election to see if we could be elected. current term: 9
2020-06-06T10:55:04.441+0800 I REPL [replexec-0] Scheduling remote command request for vote request: RemoteCommand 31 -- target:mongo-copytrade3-sec.followme.space:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: "mongo-copytrade3", dryRun: true, term: 9, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp(1591412095, 1), t: 9 } }
2020-06-06T10:55:04.441+0800 I REPL [replexec-0] Scheduling remote command request for vote request: RemoteCommand 32 -- target:mongo-****************:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: "mongo-copytrade3", dryRun: true, term: 9, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp(1591412095, 1), t: 9 } }
2020-06-06T10:55:04.441+0800 I ELECTION [replexec-1] VoteRequester(term 9 dry run) received a yes vote from mongo-copytrade3-sec.followme.space:27017; response message: { term: 9, voteGranted: true, reason: "", ok: 1.0, $clusterTime: { clusterTime: Timestamp(1591412095, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1591412095, 1) }
2020-06-06T10:55:04.441+0800 I ELECTION [replexec-0] dry election run succeeded, running for election in term 10
2020-06-06T10:55:04.442+0800 I CONNPOOL [Replication] Ending connection to host mongo-****************:27017 due to bad connection status: CallbackCanceled: Callback was canceled; 0 connections to that host remain open
2020-06-06T10:55:04.442+0800 I CONNPOOL [Replication] Connecting to mongo-****************:27017
2020-06-06T10:55:04.442+0800 I REPL [replexec-0] Scheduling remote command request for vote request: RemoteCommand 33 -- target:mongo-copytrade3-sec.followme.space:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: "mongo-copytrade3", dryRun: false, term: 10, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp(1591412095, 1), t: 9 } }
2020-06-06T10:55:04.442+0800 I REPL [replexec-0] Scheduling remote command request for vote request: RemoteCommand 34 -- target:mongo-****************:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: "mongo-copytrade3", dryRun: false, term: 10, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp(1591412095, 1), t: 9 } }
2020-06-06T10:55:04.444+0800 I ELECTION [replexec-1] VoteRequester(term 10) received a yes vote from mongo-copytrade3-sec.followme.space:27017; response message: { term: 10, voteGranted: true, reason: "", ok: 1.0, $clusterTime: { clusterTime: Timestamp(1591412095, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1591412095, 1) }
2020-06-06T10:55:04.444+0800 I ELECTION [replexec-1] election succeeded, assuming primary role in term 10
2020-06-06T10:55:04.444+0800 I REPL [replexec-1] transition to PRIMARY from SECONDARY
2020-06-06T10:55:04.444+0800 I REPL [replexec-1] Resetting sync source to empty, which was mongo-copytrade3-sec.followme.space:27017
2020-06-06T10:55:04.444+0800 I REPL [replexec-1] Entering primary catch-up mode.
2020-06-06T10:55:04.505+0800 I REPL [replexec-1] Member mongo-copytrade3-sec.followme.space:27017 is now in state SECONDARY
2020-06-06T10:55:04.568+0800 I REPL [replexec-2] Caught up to the latest optime known via heartbeats after becoming primary. Target optime: { ts: Timestamp(1591412095, 1), t: 9 }. My Last Applied: { ts: Timestamp(1591412095, 1), t: 9 }
2020-06-06T10:55:04.568+0800 I REPL [replexec-2] Exited primary catch-up mode.
2020-06-06T10:55:04.568+0800 I REPL [replexec-2] Stopping replication producer
2020-06-06T10:55:04.568+0800 I REPL [rsBackgroundSync] Replication producer stopped after oplog fetcher finished returning a batch from our sync source. Abandoning this batch of oplog entries and re-evaluating our sync source.
2020-06-06T10:55:04.568+0800 I REPL [ReplBatcher] Oplog buffer has been drained in term 10
2020-06-06T10:55:04.569+0800 I CONNPOOL [RS] Ending connection to host mongo-copytrade3-sec.followme.space:27017 due to bad connection status: CallbackCanceled: Callback was canceled; 1 connections to that host remain open
2020-06-06T10:55:04.569+0800 I REPL [RstlKillOpThread] Starting to kill user operations
2020-06-06T10:55:04.569+0800 I REPL [RstlKillOpThread] Stopped killing user operations
2020-06-06T10:55:04.569+0800 I REPL [RstlKillOpThread] State transition ops metrics: { lastStateTransition: "stepUp", userOpsKilled: 0, userOpsRunning: 0 }
2020-06-06T10:55:04.569+0800 I REPL [rsSync-0] transition to primary complete; database writes are now permitted
2020-06-06T10:55:05.003+0800 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongo-copytrade3-sec.followme.space:27017: InvalidSyncSource: Sync source was cleared. Was mongo-copytrade3-sec.followme.space:27017
2020-06-06T10:55:05.446+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45288 #14 (6 connections now open)
2020-06-06T10:55:05.446+0800 I NETWORK [conn14] received client metadata from 10.*****.ip:45288 conn14: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:55:05.509+0800 I ACCESS [conn14] Successfully authenticated as principal __system on local from client 10.*****.ip:45288
2020-06-06T10:56:04.568+0800 I CONNPOOL [Replication] Ending idle connection to host mongo-****************:27017 because the pool meets constraints; 1 connections to that host remain open
2020-06-06T10:56:38.797+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45330 #15 (7 connections now open)
2020-06-06T10:56:38.797+0800 I NETWORK [conn15] received client metadata from 10.*****.ip:45330 conn15: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:56:38.860+0800 I ACCESS [conn15] Successfully authenticated as principal __system on local from client 10.*****.ip:45330
2020-06-06T10:56:38.862+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45332 #16 (8 connections now open)
2020-06-06T10:56:38.862+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45334 #17 (9 connections now open)
2020-06-06T10:56:38.862+0800 I NETWORK [conn16] received client metadata from 10.*****.ip:45332 conn16: { driver: { name: "MongoDB Internal Client", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:56:38.862+0800 I NETWORK [conn17] received client metadata from 10.*****.ip:45334 conn17: { driver: { name: "MongoDB Internal Client", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:56:38.980+0800 I ACCESS [conn17] Successfully authenticated as principal __system on local from client 10.*****.ip:45334
2020-06-06T10:56:38.980+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.*****.ip:45332
2020-06-06T10:56:39.044+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.*****.ip:45332
2020-06-06T10:56:39.114+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.*****.ip:45332
2020-06-06T10:56:39.177+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.*****.ip:45332
2020-06-06T10:58:15.123+0800 I NETWORK [listener] connection accepted from 10.10.*.*ip:40028 #18 (10 connections now open)
2020-06-06T10:58:15.123+0800 I NETWORK [conn18] received client metadata from 10.10.*.*ip:40028 conn18: { driver: { name: "MongoDB Internal Client", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:58:15.186+0800 I ACCESS [conn18] Successfully authenticated as principal __system on local from client 10.10.*.*ip:40028
2020-06-06T10:58:15.187+0800 I NETWORK [conn18] end connection 10.10.*.*ip:40028 (9 connections now open)
2020-06-06T10:58:56.822+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46446 #19 (10 connections now open)
2020-06-06T10:58:56.822+0800 I NETWORK [conn19] received client metadata from 10.*.*.63ip:46446 conn19: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17" }
2020-06-06T10:58:56.823+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46448 #20 (11 connections now open)
2020-06-06T10:58:56.823+0800 I NETWORK [conn20] received client metadata from 10.*.*.63ip:46448 conn20: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17", application: { name: "mongorestore" } }
2020-06-06T10:58:56.823+0800 I SHARDING [conn20] Marking collection admin.system.users as collection version:
2020-06-06T10:58:56.839+0800 I ACCESS [conn20] Successfully authenticated as principal root on admin from client 10.*.*.63ip:46448
2020-06-06T10:58:56.892+0800 I SHARDING [conn20] Marking collection trade-engine.sync as collection version:
2020-06-06T10:58:57.654+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45354 #21 (12 connections now open)
2020-06-06T10:58:57.655+0800 I NETWORK [conn21] received client metadata from 10.*****.ip:45354 conn21: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T10:58:57.717+0800 I ACCESS [conn21] Successfully authenticated as principal __system on local from client 10.*****.ip:45354
2020-06-06T10:58:57.792+0800 I COMMAND [conn20] command trade-engine.sync appName: "mongorestore" command: insert { insert: "sync", ordered: false, writeConcern: { w: "majority" }, lsid: { id: UUID("fe70be1e-2c89-49ff-b6af-6fd7260c47e7") }, $clusterTime: { clusterTime: Timestamp(1591412337, 6451), signature: { hash: BinData(0, 5FA25FAF79B1D4EE21078E4212542AAECCA1DB44), keyId: 6834897099566350339 } }, $db: "trade-engine" } ninserted:1000 keysInserted:1000 numYields:0 reslen:230 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 21 } }, ReplicationStateTransition: { acquireCount: { w: 21 } }, Global: { acquireCount: { w: 21 } }, Database: { acquireCount: { w: 21 } }, Collection: { acquireCount: { w: 21 } }, Mutex: { acquireCount: { r: 1021 } } } flowControl:{ acquireCount: 21, timeAcquiringMicros: 11 } storage:{} protocol:op_msg 111ms
2020-06-06T10:58:59.788+0800 I NETWORK [conn19] end connection 10.*.*.63ip:46446 (11 connections now open)
2020-06-06T10:58:59.788+0800 I NETWORK [conn20] end connection 10.*.*.63ip:46448 (10 connections now open)
2020-06-06T10:59:38.429+0800 I NETWORK [listener] connection accepted from 172.30.0.5:61360 #22 (11 connections now open)
2020-06-06T10:59:38.483+0800 I NETWORK [conn22] received client metadata from 172.30.0.5:61360 conn22: { application: { name: "Navicat" }, driver: { name: "mongoc", version: "1.14.0" }, os: { type: "Darwin", name: "macOS", version: "19.4.0", architecture: "x86_64" }, platform: "cfg=0x00d6a0e9 posix=200112 stdc=201112 CC=clang 7.0.0 (clang-700.1.76) CFLAGS="" LDFLAGS=""" }
2020-06-06T10:59:38.709+0800 I ACCESS [conn22] Successfully authenticated as principal root on admin from client 172.30.0.5:61360
2020-06-06T10:59:42.411+0800 I COMMAND [conn22] dropDatabase trade-engine - starting
2020-06-06T10:59:42.411+0800 I COMMAND [conn22] dropDatabase trade-engine - dropping collection: trade-engine.sync
2020-06-06T10:59:42.411+0800 I STORAGE [conn22] dropCollection: trade-engine.sync (04bd64e4-8cc5-42f0-8cd9-6958fdd306b6) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
2020-06-06T10:59:42.411+0800 I STORAGE [conn22] Finishing collection drop for trade-engine.sync (04bd64e4-8cc5-42f0-8cd9-6958fdd306b6).
2020-06-06T10:59:42.411+0800 I STORAGE [conn22] Deferring table drop for index '_id_' on collection 'trade-engine.sync.$_id_ (04bd64e4-8cc5-42f0-8cd9-6958fdd306b6)'. Ident: 'index-1-9156903440974637499', commit timestamp: 'Timestamp(1591412382, 1)'
2020-06-06T10:59:42.411+0800 I STORAGE [conn22] Deferring table drop for collection 'trade-engine.sync' (04bd64e4-8cc5-42f0-8cd9-6958fdd306b6). Ident: collection-0-9156903440974637499, commit timestamp: Timestamp(1591412382, 1)
2020-06-06T10:59:42.411+0800 I COMMAND [conn22] dropDatabase trade-engine waiting for { ts: Timestamp(1591412382, 1), t: 10 } to be replicated at { w: "majority", wtimeout: 600000 }. Dropping 1 collection(s), with last collection drop at { ts: Timestamp(0, 0), t: -1 }
2020-06-06T10:59:42.438+0800 I COMMAND [conn22] dropDatabase trade-engine - successfully dropped 1 collection(s) (most recent drop optime: { ts: Timestamp(1591412382, 1), t: 10 }) after 26ms. dropping database
2020-06-06T10:59:42.438+0800 I COMMAND [conn22] dropDatabase trade-engine - dropped 1 collection(s)
2020-06-06T10:59:42.438+0800 I COMMAND [conn22] dropDatabase trade-engine - finished
2020-06-06T10:59:53.790+0800 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1591412382, 2)
2020-06-06T10:59:53.790+0800 I STORAGE [TimestampMonitor] Completing drop for ident index-1-9156903440974637499 (ns: trade-engine.sync.$_id_) with drop timestamp Timestamp(1591412382, 1)
2020-06-06T10:59:53.791+0800 I STORAGE [TimestampMonitor] Completing drop for ident collection-0-9156903440974637499 (ns: trade-engine.sync) with drop timestamp Timestamp(1591412382, 1)
2020-06-06T10:59:54.568+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46462 #23 (12 connections now open)
2020-06-06T10:59:54.568+0800 I NETWORK [conn23] received client metadata from 10.*.*.63ip:46462 conn23: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17" }
2020-06-06T10:59:54.569+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46464 #24 (13 connections now open)
2020-06-06T10:59:54.569+0800 I NETWORK [conn24] received client metadata from 10.*.*.63ip:46464 conn24: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17", application: { name: "mongorestore" } }
2020-06-06T10:59:54.584+0800 I ACCESS [conn24] Successfully authenticated as principal root on admin from client 10.*.*.63ip:46464
2020-06-06T10:59:54.587+0800 I STORAGE [conn24] createCollection: trade-engine.sync with generated UUID: c442ab99-6b7b-4953-91dd-186e0799ed3f and options: {}
2020-06-06T10:59:54.620+0800 I INDEX [conn24] index build: done building index _id_ on ns trade-engine.sync
2020-06-06T10:59:57.720+0800 I COMMAND [conn24] command trade-engine.sync appName: "mongorestore" command: insert { insert: "sync", ordered: false, writeConcern: { w: "majority" }, lsid: { id: UUID("336c7fdc-f6ce-4c87-a779-65cf29e4656f") }, $clusterTime: { clusterTime: Timestamp(1591412397, 8000), signature: { hash: BinData(0, 112AC208C04E10AB21194E6386BB58A3082E725C), keyId: 6834897099566350339 } }, $db: "trade-engine" } ninserted:1000 keysInserted:1000 numYields:0 reslen:230 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 19 } }, ReplicationStateTransition: { acquireCount: { w: 19 } }, Global: { acquireCount: { w: 19 } }, Database: { acquireCount: { w: 19 } }, Collection: { acquireCount: { w: 19 } }, Mutex: { acquireCount: { r: 1019 } } } flowControl:{ acquireCount: 19, timeAcquiringMicros: 13 } storage:{} protocol:op_msg 123ms
2020-06-06T10:59:57.899+0800 I COMMAND [conn24] command trade-engine.sync appName: "mongorestore" command: insert { insert: "sync", ordered: false, writeConcern: { w: "majority" }, lsid: { id: UUID("336c7fdc-f6ce-4c87-a779-65cf29e4656f") }, $clusterTime: { clusterTime: Timestamp(1591412397, 9000), signature: { hash: BinData(0, 112AC208C04E10AB21194E6386BB58A3082E725C), keyId: 6834897099566350339 } }, $db: "trade-engine" } ninserted:1000 keysInserted:1000 numYields:0 reslen:230 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 20 } }, ReplicationStateTransition: { acquireCount: { w: 20 } }, Global: { acquireCount: { w: 20 } }, Database: { acquireCount: { w: 20 } }, Collection: { acquireCount: { w: 20 } }, Mutex: { acquireCount: { r: 1020 } } } flowControl:{ acquireCount: 20, timeAcquiringMicros: 7 } storage:{} protocol:op_msg 103ms
2020-06-06T10:59:58.194+0800 I INDEX [conn24] index build: starting on trade-engine.sync properties: { v: 2, unique: true, key: { Account: 1.0, BrokerID: 1.0 }, name: "Account_1_BrokerID_1", ns: "trade-engine.sync", background: true } using method: Hybrid
2020-06-06T10:59:58.194+0800 I INDEX [conn24] build may temporarily use up to 200 megabytes of RAM
2020-06-06T10:59:58.251+0800 I INDEX [conn24] index build: collection scan done. scanned 48688 total records in 0 seconds
2020-06-06T10:59:58.309+0800 I INDEX [conn24] index build: inserted 48688 keys from external sorter into index in 0 seconds
2020-06-06T10:59:58.315+0800 I INDEX [conn24] index build: done building index Account_1_BrokerID_1 on ns trade-engine.sync
2020-06-06T10:59:58.319+0800 I COMMAND [conn24] command trade-engine.sync appName: "mongorestore" command: createIndexes { createIndexes: "sync", indexes: [ { key: { Account: 1.0, BrokerID: 1.0 }, unique: true, name: "Account_1_BrokerID_1", ns: "trade-engine.sync", background: true } ], lsid: { id: UUID("336c7fdc-f6ce-4c87-a779-65cf29e4656f") }, $clusterTime: { clusterTime: Timestamp(1591412398, 1688), signature: { hash: BinData(0, 2779110615100C9C52703DEC2D190B86C94BE6FA), keyId: 6834897099566350339 } }, $db: "trade-engine", $readPreference: { mode: "primaryPreferred" } } numYields:380 reslen:239 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 382 } }, ReplicationStateTransition: { acquireCount: { w: 383 } }, Global: { acquireCount: { r: 1, w: 382 } }, Database: { acquireCount: { r: 1, w: 382 } }, Collection: { acquireCount: { r: 382, w: 1, R: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 381, timeAcquiringMicros: 42 } storage:{} protocol:op_msg 138ms
2020-06-06T10:59:58.323+0800 I SHARDING [conn24] Marking collection admin.tempusers as collection version:
2020-06-06T10:59:58.323+0800 I STORAGE [conn24] createCollection: admin.tempusers with generated UUID: 99c989f1-351c-413e-8321-cb3c17c62897 and options: {}
2020-06-06T10:59:58.331+0800 I INDEX [conn24] index build: done building index _id_ on ns admin.tempusers
2020-06-06T10:59:58.371+0800 W ACCESS [conn24] Could not insert user root@admin in _mergeAuthzCollections command: Location51003: User "root@admin" already exists
2020-06-06T10:59:58.372+0800 W ACCESS [conn24] Could not insert user test_read@admin in _mergeAuthzCollections command: Location51003: User "test_read@admin" already exists
2020-06-06T10:59:58.372+0800 I COMMAND [conn24] CMD: drop admin.tempusers
2020-06-06T10:59:58.372+0800 I STORAGE [conn24] dropCollection: admin.tempusers (99c989f1-351c-413e-8321-cb3c17c62897) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
2020-06-06T10:59:58.372+0800 I STORAGE [conn24] Finishing collection drop for admin.tempusers (99c989f1-351c-413e-8321-cb3c17c62897).
2020-06-06T10:59:58.373+0800 I STORAGE [conn24] Deferring table drop for index '_id_' on collection 'admin.tempusers.$_id_ (99c989f1-351c-413e-8321-cb3c17c62897)'. Ident: 'index-6--4200199280753805755', commit timestamp: 'Timestamp(1591412398, 1696)'
2020-06-06T10:59:58.373+0800 I STORAGE [conn24] Deferring table drop for collection 'admin.tempusers' (99c989f1-351c-413e-8321-cb3c17c62897). Ident: collection-5--4200199280753805755, commit timestamp: Timestamp(1591412398, 1696)
2020-06-06T10:59:58.379+0800 I NETWORK [conn24] end connection 10.*.*.63ip:46464 (12 connections now open)
2020-06-06T10:59:58.379+0800 I NETWORK [conn23] end connection 10.*.*.63ip:46462 (11 connections now open)
2020-06-06T11:00:04.569+0800 I CONNPOOL [RS] Dropping all pooled connections to mongo-copytrade3-sec.followme.space:27017 due to ShutdownInProgress: Pool for mongo-copytrade3-sec.followme.space:27017 has expired.
2020-06-06T11:00:54.795+0800 I STORAGE [TimestampMonitor] Removing drop-pending idents with drop timestamps before timestamp Timestamp(1591412444, 1)
2020-06-06T11:00:54.795+0800 I STORAGE [TimestampMonitor] Completing drop for ident index-6--4200199280753805755 (ns: admin.tempusers.$_id_) with drop timestamp Timestamp(1591412398, 1696)
2020-06-06T11:00:54.796+0800 I STORAGE [TimestampMonitor] Completing drop for ident collection-5--4200199280753805755 (ns: admin.tempusers) with drop timestamp Timestamp(1591412398, 1696)
2020-06-06T11:00:58.170+0800 I NETWORK [conn4] end connection 10.*****.ip:45284 (10 connections now open)
2020-06-06T11:01:38.901+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.*****.ip:45332
2020-06-06T11:01:38.901+0800 I ACCESS [conn17] Successfully authenticated as principal __system on local from client 10.*****.ip:45334
2020-06-06T11:01:38.964+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.*****.ip:45332
2020-06-06T11:01:39.030+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.*****.ip:45332
2020-06-06T11:01:39.093+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.*****.ip:45332
2020-06-06T11:02:36.661+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46490 #25 (11 connections now open)
2020-06-06T11:02:36.662+0800 I NETWORK [conn25] received client metadata from 10.*.*.63ip:46490 conn25: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17" }
2020-06-06T11:02:36.662+0800 I NETWORK [listener] connection accepted from 10.*.*.63ip:46492 #26 (12 connections now open)
2020-06-06T11:02:36.662+0800 I NETWORK [conn26] received client metadata from 10.*.*.63ip:46492 conn26: { driver: { name: "mongo-go-driver", version: "v1.2.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.17", application: { name: "mongorestore" } }
2020-06-06T11:02:36.677+0800 I ACCESS [conn26] Successfully authenticated as principal root on admin from client 10.*.*.63ip:46492
2020-06-06T11:02:36.770+0800 E STORAGE [conn26] WiredTiger error (22) [1591412556:770705][14500:0x7f5c8ad3c700], WT_SESSION.timestamp_transaction: __wt_txn_set_commit_timestamp, 683: commit timestamp (1590732316, 2) is less than the oldest timestamp (1591412554, 1): Invalid argument Raw: [1591412556:770705][14500:0x7f5c8ad3c700], WT_SESSION.timestamp_transaction: __wt_txn_set_commit_timestamp, 683: commit timestamp (1590732316, 2) is less than the oldest timestamp (1591412554, 1): Invalid argument
2020-06-06T11:02:36.770+0800 F - [conn26] Fatal assertion 39001 BadValue: timestamp_transaction 22: Invalid argument at src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp 1323
2020-06-06T11:02:36.770+0800 F - [conn26] ***aborting after fassert() failure
2020-06-06T11:02:36.793+0800 F - [conn26] Got signal: 6 (Aborted).
0x5561aa1779f1 0x5561aa17700c 0x5561aa177096 0x7f5ca5979890 0x7f5ca55b4e97 0x7f5ca55b6801 0x5561a85cf3a7 0x5561a83088fd 0x5561a86cd324 0x5561a8e9acb6 0x5561a8e9ccbc 0x5561a8d9240c 0x5561a8d927f0 0x5561a8d92ea2 0x5561a8d84c7f 0x5561a8d82ded 0x5561a8a468e8 0x5561a8a48b54 0x5561a8a498ea 0x5561a8a3746c 0x5561a8a431fc 0x5561a8a404df 0x5561a8a4216c 0x5561a989d432 0x5561a8a3d3ad 0x5561a8a3ec23 0x5561a8a3f7d6 0x5561a8a4043b 0x5561a8a4216c 0x5561a989d89b 0x5561a9ef31b5 0x5561a9ef3214 0x7f5ca596e6db 0x7f5ca569788f ----- BEGIN BACKTRACE ----- {"backtrace":[{"b":"5561A78BB000","o":"28BC9F1","s":"_ZN5mongo15printStackTraceERSo"},{"b":"5561A78BB000","o":"28BC00C"},{"b":"5561A78BB000","o":"28BC096"},{"b":"7F5CA5967000","o":"12890"},{"b":"7F5CA5576000","o":"3EE97","s":"gsignal"},{"b":"7F5CA5576000","o":"40801","s":"abort"},{"b":"5561A78BB000","o":"D143A7","s":"_ZN5mongo42fassertFailedWithStatusNoTraceWithLocationEiRKNS_6StatusEPKcj"},{"b":"5561A78BB000","o":"A4D8FD"},{"b":"5561A78BB000","o":"E12324","s":"_ZN5mongo21WiredTigerRecordStore13insertRecordsEPNS_16OperationContextEPSt6vectorINS_6RecordESaIS4_EERKS3_INS_9TimestampESaIS8_EE"},{"b":"5561A78BB000","o":"15DFCB6","s":"_ZN5mongo14CollectionImpl16_insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_15InsertStatementESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugE"},{"b":"5561A78BB000","o":"15E1CBC","s":"_ZN5mongo14CollectionImpl15insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_15InsertStatementESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugEb"},{"b":"5561A78BB000","o":"14D740C"},{"b":"5561A78BB000","o":"14D77F0"},{"b":"5561A78BB000","o":"14D7EA2","s":"_ZN5mongo14performInsertsEPNS_16OperationContextERKNS_9write_ops6InsertEb"},{"b":"5561A78BB000","o":"14C9C7F"},{"b":"5561A78BB000","o":"14C7DED"},{"b":"5561A78BB000","o":"118B8E8"},{"b":"5561A78BB000","o":"118DB54"},{"b":"5561A78BB000","o":"118E8EA","s":"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5Ho
oksE"},{"b":"5561A78BB000","o":"117C46C","s":"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE"},{"b":"5561A78BB000","o":"11881FC","s":"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE"},{"b":"5561A78BB000","o":"11854DF","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"5561A78BB000","o":"118716C"},{"b":"5561A78BB000","o":"1FE2432","s":"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE"},{"b":"5561A78BB000","o":"11823AD","s":"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE"},{"b":"5561A78BB000","o":"1183C23","s":"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE"},{"b":"5561A78BB000","o":"11847D6","s":"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE"},{"b":"5561A78BB000","o":"118543B","s":"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE"},{"b":"5561A78BB000","o":"118716C"},{"b":"5561A78BB000","o":"1FE289B"},{"b":"5561A78BB000","o":"26381B5"},{"b":"5561A78BB000","o":"2638214"},{"b":"7F5CA5967000","o":"76DB"},{"b":"7F5CA5576000","o":"12188F","s":"clone"}],"processInfo":{ "mongodbVersion" : "4.2.7", "gitVersion" : "51d9fe12b5d19720e72dcd7db0f2f17dd9a19212", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "5.3.0-1019-aws", "version" : "#21~18.04.1-Ubuntu SMP Mon May 11 12:33:03 UTC 2020", "machine" : "x86_64" }, "somap" : [ { "b" : "5561A78BB000", "elfType" : 3, "buildId" : "1A92C545D9CCEA15DE44DF79F143CD0B8F3213EE" }, { "b" : "7FFE6C3F0000", "path" : "linux-vdso.so.1", "elfType" : 3, "buildId" : "71160CE48D1E54E4B4B15BCC115EE418D74D17BE" }, { "b" : "7F5CA6EBB000", "path" : "/usr/lib/x86_64-linux-gnu/libcurl.so.4", "elfType" : 3, "buildId" : "1C6BC2C0699CE0F7E848CA0B267E0CF07553F6AB" }, { "b" : "7F5CA6CA0000", 
"path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "390E9CC4C215314B6D8ADE6D6E28F8518418039C" }, { "b" : "7F5CA67D5000", "path" : "/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1", "elfType" : 3, "buildId" : "812108064C0E61DF52271B3147248999619E7AFF" }, { "b" : "7F5CA6548000", "path" : "/usr/lib/x86_64-linux-gnu/libssl.so.1.1", "elfType" : 3, "buildId" : "881B3F4FDA206DF3909B141DC7410A08C2DD4B90" }, { "b" : "7F5CA6344000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "25AD56E902E23B490A9CCDB08A9744D89CB95BCC" }, { "b" : "7F5CA613C000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "9826FBDF57ED7D6965131074CB3C08B1009C1CD8" }, { "b" : "7F5CA5D9E000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "A33761AB8FB485311B3C85BF4253099D7CABE653" }, { "b" : "7F5CA5B86000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "679F3AE11120EC7C483BC9295345D836F5C104F7" }, { "b" : "7F5CA5967000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "28C6AADE70B2D40D1F0F3D0A1A0CAD1AB816448F" }, { "b" : "7F5CA5576000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "B417C0BA7CC5CF06D1D1BED6652CEDB9253C60D0" }, { "b" : "7F5CA713A000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "64DF1B961228382FE18684249ED800AB1DCEAAD4" }, { "b" : "7F5CA5351000", "path" : "/usr/lib/x86_64-linux-gnu/libnghttp2.so.14", "elfType" : 3, "buildId" : "4F00E5207693FDC249DA42EC6472ACA6A7B929AE" }, { "b" : "7F5CA5134000", "path" : "/usr/lib/x86_64-linux-gnu/libidn2.so.0", "elfType" : 3, "buildId" : "EE6E9462BA2491F4EE8C4E52C3323274A9366614" }, { "b" : "7F5CA4F18000", "path" : "/usr/lib/x86_64-linux-gnu/librtmp.so.1", "elfType" : 3, "buildId" : "69465D8AA6B19086ABF2455A703F9168BF82A69F" }, { "b" : "7F5CA4D0A000", "path" : "/usr/lib/x86_64-linux-gnu/libpsl.so.5", "elfType" : 3, "buildId" : 
"CDAF1F1946846941F9D06414EC8C812D131A168E" }, { "b" : "7F5CA4ABF000", "path" : "/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "00F419F64B0E70D8C5EEF7050369AA40B2A6E090" }, { "b" : "7F5CA486D000", "path" : "/usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2", "elfType" : 3, "buildId" : "F406CC6CE3F33FC19238B7AC4A37D831F1BF0F06" }, { "b" : "7F5CA465F000", "path" : "/usr/lib/x86_64-linux-gnu/liblber-2.4.so.2", "elfType" : 3, "buildId" : "8331348F98447F4D00BF7469EB6E2C27318B3EAC" }, { "b" : "7F5CA4442000", "path" : "/lib/x86_64-linux-gnu/libz.so.1", "elfType" : 3, "buildId" : "EF3E006DFE3132A41D4D4DC0E407D6EA658E11C4" }, { "b" : "7F5CA40C4000", "path" : "/usr/lib/x86_64-linux-gnu/libunistring.so.2", "elfType" : 3, "buildId" : "0E2784298E7D3F4D894FE130ACEFA77C3E624F72" }, { "b" : "7F5CA3D5E000", "path" : "/usr/lib/x86_64-linux-gnu/libgnutls.so.30", "elfType" : 3, "buildId" : "1954013F9BBF9FF3A328A8E516481A2BBD8658EB" }, { "b" : "7F5CA3B2A000", "path" : "/usr/lib/x86_64-linux-gnu/libhogweed.so.4", "elfType" : 3, "buildId" : "842BDF0B0EAAB82E19F1EABFC38769F4040FBE31" }, { "b" : "7F5CA38F4000", "path" : "/usr/lib/x86_64-linux-gnu/libnettle.so.6", "elfType" : 3, "buildId" : "C20D4B3BA13FCDCC3BF6857689BA9FC70BE3F6A5" }, { "b" : "7F5CA3673000", "path" : "/usr/lib/x86_64-linux-gnu/libgmp.so.10", "elfType" : 3, "buildId" : "D40EA9B5EC5BC46799E4A412319617BD38BE9341" }, { "b" : "7F5CA339D000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5.so.3", "elfType" : 3, "buildId" : "69FBCF425EE6DF03DE93B82FBC2FC33790E68A96" }, { "b" : "7F5CA316B000", "path" : "/usr/lib/x86_64-linux-gnu/libk5crypto.so.3", "elfType" : 3, "buildId" : "F400D5D643A7F9696DF0E6148FA99BEE6C1BDDF7" }, { "b" : "7F5CA2F67000", "path" : "/lib/x86_64-linux-gnu/libcom_err.so.2", "elfType" : 3, "buildId" : "17107881DF65C66B4C6D38CAB37C285FA44663BD" }, { "b" : "7F5CA2D5C000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5support.so.0", "elfType" : 3, "buildId" : "D78D71E8E016A534281B25B97CD7E5E9DB5FE00A" 
}, { "b" : "7F5CA2B41000", "path" : "/usr/lib/x86_64-linux-gnu/libsasl2.so.2", "elfType" : 3, "buildId" : "6E98533B96F674F77C1BD83AA3565D974D9C4372" }, { "b" : "7F5CA2900000", "path" : "/usr/lib/x86_64-linux-gnu/libgssapi.so.3", "elfType" : 3, "buildId" : "A1A98DB481968073636BBAECB561A3EA8ED198AE" }, { "b" : "7F5CA25D1000", "path" : "/usr/lib/x86_64-linux-gnu/libp11-kit.so.0", "elfType" : 3, "buildId" : "8DBD451EA5651283905E16FA7DFA9908688893A3" }, { "b" : "7F5CA23BE000", "path" : "/usr/lib/x86_64-linux-gnu/libtasn1.so.6", "elfType" : 3, "buildId" : "6036B89A3BB671B32E01464C0C82BFA016186352" }, { "b" : "7F5CA21BA000", "path" : "/lib/x86_64-linux-gnu/libkeyutils.so.1", "elfType" : 3, "buildId" : "F463E107B099910463BC32E837C73D341A52C27B" }, { "b" : "7F5CA1FB1000", "path" : "/usr/lib/x86_64-linux-gnu/libheimntlm.so.0", "elfType" : 3, "buildId" : "C2376C5B831991591F1A67B976758185F86896D8" }, { "b" : "7F5CA1D24000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5.so.26", "elfType" : 3, "buildId" : "69BDEE5FA0FEEDF317308BE850F78761861D520A" }, { "b" : "7F5CA1A82000", "path" : "/usr/lib/x86_64-linux-gnu/libasn1.so.8", "elfType" : 3, "buildId" : "315D74995AAA32DE4D15BA25F335066988B1B230" }, { "b" : "7F5CA184C000", "path" : "/usr/lib/x86_64-linux-gnu/libhcrypto.so.4", "elfType" : 3, "buildId" : "6673972A1C24A89EBAFBAE696188A4CB26C6DDEB" }, { "b" : "7F5CA1636000", "path" : "/usr/lib/x86_64-linux-gnu/libroken.so.18", "elfType" : 3, "buildId" : "430827C33259C12248CF44B91A9A9821114376F5" }, { "b" : "7F5CA142E000", "path" : "/usr/lib/x86_64-linux-gnu/libffi.so.6", "elfType" : 3, "buildId" : "3555B5F599C9787DFDDBF9E8DF6F706B9044D985" }, { "b" : "7F5CA1205000", "path" : "/usr/lib/x86_64-linux-gnu/libwind.so.0", "elfType" : 3, "buildId" : "93A0931B1C2818F0EA224CE6FE5E31E84A9B55BB" }, { "b" : "7F5CA0FF6000", "path" : "/usr/lib/x86_64-linux-gnu/libheimbase.so.1", "elfType" : 3, "buildId" : "669D4CCE42FA4382796EFFCF0C16F459F4382C4C" }, { "b" : "7F5CA0DAC000", "path" : 
"/usr/lib/x86_64-linux-gnu/libhx509.so.5", "elfType" : 3, "buildId" : "4B80C543356EE0AF9039EFE7C9EA1CC1F74C426A" }, { "b" : "7F5CA0AA3000", "path" : "/usr/lib/x86_64-linux-gnu/libsqlite3.so.0", "elfType" : 3, "buildId" : "66E2050B24B18B3F95154392560A17994D1690F2" }, { "b" : "7F5CA086B000", "path" : "/lib/x86_64-linux-gnu/libcrypt.so.1", "elfType" : 3, "buildId" : "810686AF0D5FD350A4FB1CC4B5AFF44A05C102CB" } ] }} mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x5561aa1779f1] mongod(+0x28BC00C) [0x5561aa17700c] mongod(+0x28BC096) [0x5561aa177096] libpthread.so.0(+0x12890) [0x7f5ca5979890] libc.so.6(gsignal+0xC7) [0x7f5ca55b4e97] libc.so.6(abort+0x141) [0x7f5ca55b6801] mongod(_ZN5mongo42fassertFailedWithStatusNoTraceWithLocationEiRKNS_6StatusEPKcj+0x0) [0x5561a85cf3a7] mongod(+0xA4D8FD) [0x5561a83088fd] mongod(_ZN5mongo21WiredTigerRecordStore13insertRecordsEPNS_16OperationContextEPSt6vectorINS_6RecordESaIS4_EERKS3_INS_9TimestampESaIS8_EE+0x34) [0x5561a86cd324] mongod(_ZN5mongo14CollectionImpl16_insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_15InsertStatementESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugE+0x226) [0x5561a8e9acb6] mongod(_ZN5mongo14CollectionImpl15insertDocumentsEPNS_16OperationContextEN9__gnu_cxx17__normal_iteratorIPKNS_15InsertStatementESt6vectorIS5_SaIS5_EEEESB_PNS_7OpDebugEb+0x1EC) [0x5561a8e9ccbc] mongod(+0x14D740C) [0x5561a8d9240c] mongod(+0x14D77F0) [0x5561a8d927f0] mongod(_ZN5mongo14performInsertsEPNS_16OperationContextERKNS_9write_ops6InsertEb+0x5F2) [0x5561a8d92ea2] mongod(+0x14C9C7F) [0x5561a8d84c7f] mongod(+0x14C7DED) [0x5561a8d82ded] mongod(+0x118B8E8) [0x5561a8a468e8] mongod(+0x118DB54) [0x5561a8a48b54] mongod(_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageERKNS0_5HooksE+0x41A) [0x5561a8a498ea] mongod(_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3C) [0x5561a8a3746c] 
mongod(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x5561a8a431fc]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x5561a8a404df]
mongod(+0x118716C) [0x5561a8a4216c]
mongod(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x5561a989d432]
mongod(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x5561a8a3d3ad]
mongod(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x753) [0x5561a8a3ec23]
mongod(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x316) [0x5561a8a3f7d6]
mongod(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x5561a8a4043b]
mongod(+0x118716C) [0x5561a8a4216c]
mongod(+0x1FE289B) [0x5561a989d89b]
mongod(+0x26381B5) [0x5561a9ef31b5]
mongod(+0x2638214) [0x5561a9ef3214]
libpthread.so.0(+0x76DB) [0x7f5ca596e6db]
libc.so.6(clone+0x3F) [0x7f5ca569788f]
----- END BACKTRACE -----
2020-06-06T11:02:52.632+0800 I CONTROL [main] ***** SERVER RESTARTED *****
2020-06-06T11:02:52.635+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-06-06T11:02:52.642+0800 W ASIO [main] No TransportLayer configured during NetworkInterface startup
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] MongoDB starting : pid=14668 port=27017 dbpath=/fmApplication/mongo-copytrade3/data 64-bit host=pro-copytrade3-mongo-0-63
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] db version v4.2.7
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] git version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] allocator: tcmalloc
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] modules: none
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] build environment:
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] distmod: ubuntu1804
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] distarch: x86_64
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] target_arch: x86_64
2020-06-06T11:02:52.715+0800 I CONTROL [initandlisten] options: { config: "/fmApplication/mongo-copytrade3/conf/mongod.conf", net: { bindIp: "*", port: 27017 }, processManagement: { fork: true, pidFilePath: "/fmApplication/mongo-copytrade3/mongod.pid", timeZoneInfo: "/usr/share/zoneinfo" }, replication: { replSetName: "mongo-copytrade3" }, security: { authorization: "enabled", clusterAuthMode: "keyFile", keyFile: "/fmApplication/mongo-copytrade3/repl_set.key" }, storage: { dbPath: "/fmApplication/mongo-copytrade3/data", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/fmApplication/mongo-copytrade3/logs/mongod.log" } }
2020-06-06T11:02:52.715+0800 W STORAGE [initandlisten] Detected unclean shutdown - /fmApplication/mongo-copytrade3/data/mongod.lock is not empty.
2020-06-06T11:02:52.715+0800 I STORAGE [initandlisten] Detected data files in /fmApplication/mongo-copytrade3/data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2020-06-06T11:02:52.715+0800 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2020-06-06T11:02:52.715+0800 I STORAGE [initandlisten]
2020-06-06T11:02:52.715+0800 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-06-06T11:02:52.715+0800 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2020-06-06T11:02:52.715+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3363M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2020-06-06T11:02:53.248+0800 I STORAGE [initandlisten] WiredTiger message [1591412573:247992][14668:0x7f98247e5b00], txn-recover: Recovering log 8 through 9
2020-06-06T11:02:53.308+0800 I STORAGE [initandlisten] WiredTiger message [1591412573:308045][14668:0x7f98247e5b00], txn-recover: Recovering log 9 through 9
2020-06-06T11:02:53.386+0800 I STORAGE [initandlisten] WiredTiger message [1591412573:386851][14668:0x7f98247e5b00], txn-recover: Main recovery loop: starting at 8/28316288 to 9/256
2020-06-06T11:02:53.387+0800 I STORAGE [initandlisten] WiredTiger message [1591412573:387204][14668:0x7f98247e5b00], txn-recover: Recovering log 8 through 9
2020-06-06T11:02:53.428+0800 I STORAGE [initandlisten] WiredTiger message [1591412573:428862][14668:0x7f98247e5b00], file:collection-10-2993614265812229004.wt, txn-recover: Recovering log 9 through 9
2020-06-06T11:02:53.473+0800 I STORAGE [initandlisten] WiredTiger message [1591412573:473178][14668:0x7f98247e5b00], file:collection-10-2993614265812229004.wt, txn-recover: Set global recovery timestamp: (1591412504, 1)
2020-06-06T11:02:53.488+0800 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(1591412504, 1)
2020-06-06T11:02:53.492+0800 I STORAGE [initandlisten] Starting OplogTruncaterThread local.oplog.rs
2020-06-06T11:02:53.492+0800 I STORAGE [initandlisten] The size storer reports that the oplog contains 239950 records totaling to 698896165 bytes
2020-06-06T11:02:53.492+0800 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation
2020-06-06T11:02:53.492+0800 I STORAGE [initandlisten] Sampling from the oplog between Jun 6 00:11:20:1 and Jun 6 11:02:34:1 to determine where to place markers for truncation
2020-06-06T11:02:53.492+0800 I STORAGE [initandlisten] Taking 140 samples and assuming that each section of oplog contains approximately 17025 records totaling to 49588277 bytes
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 05:19:15:1924
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 05:19:18:2274
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 05:19:22:2200
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 06:01:25:601
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 06:01:30:3987
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 08:49:57:928
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 08:49:58:897
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 08:49:59:15121
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 10:58:57:1944
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 10:58:57:9400
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 10:58:59:13717
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 10:59:55:3513
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 10:59:56:1608
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] Placing a marker at optime Jun 6 10:59:57:8732
2020-06-06T11:02:53.753+0800 I STORAGE [initandlisten] WiredTiger record store oplog processing took 260ms
2020-06-06T11:02:53.755+0800 I STORAGE [initandlisten] Timestamp monitor starting
2020-06-06T11:02:53.757+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-06-06T11:02:53.757+0800 I CONTROL [initandlisten]
2020-06-06T11:02:53.757+0800 I CONTROL [initandlisten]
2020-06-06T11:02:53.757+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 30826 processes, 655350 files. Number of processes should be at least 327675 : 0.5 times number of files.
2020-06-06T11:02:53.762+0800 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2020-06-06T11:02:53.767+0800 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
2020-06-06T11:02:53.767+0800 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2020-06-06T11:02:53.767+0800 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2020-06-06T11:02:53.769+0800 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2020-06-06T11:02:53.769+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/fmApplication/mongo-copytrade3/data/diagnostic.data'
2020-06-06T11:02:53.770+0800 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
2020-06-06T11:02:53.770+0800 I SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
2020-06-06T11:02:53.773+0800 I REPL [initandlisten] Rollback ID is 2
2020-06-06T11:02:53.775+0800 I REPL [initandlisten] Recovering from stable timestamp: Timestamp(1591412504, 1) (top of oplog: { ts: Timestamp(1591412554, 1), t: 10 }, appliedThrough: { ts: Timestamp(0, 0), t: -1 }, TruncateAfter: Timestamp(0, 0))
2020-06-06T11:02:53.775+0800 I REPL [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1591412504, 1)
2020-06-06T11:02:53.775+0800 I REPL [initandlisten] Replaying stored operations from Timestamp(1591412504, 1) (inclusive) to Timestamp(1591412554, 1) (inclusive).
2020-06-06T11:02:53.775+0800 I SHARDING [initandlisten] Marking collection local.oplog.rs as collection version: <unsharded>
2020-06-06T11:02:53.777+0800 I REPL [initandlisten] Applied 5 operations in 1 batches. Last operation applied with optime: { ts: Timestamp(1591412554, 1), t: 10 }
2020-06-06T11:02:53.777+0800 I SHARDING [initandlisten] Marking collection config.transactions as collection version: <unsharded>
2020-06-06T11:02:53.780+0800 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2020-06-06T11:02:53.780+0800 I SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
2020-06-06T11:02:53.780+0800 I CONTROL [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
2020-06-06T11:02:53.780+0800 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
2020-06-06T11:02:53.780+0800 I NETWORK [listener] Listening on 0.0.0.0
2020-06-06T11:02:53.780+0800 I NETWORK [listener] waiting for connections on port 27017
2020-06-06T11:02:53.790+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45630 #2 (1 connection now open)
2020-06-06T11:02:53.797+0800 I NETWORK [conn2] received client metadata from 10.*****.ip:45630 conn2: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T11:02:53.808+0800 I ACCESS [conn2] Successfully authenticated as principal __system on local from client 10.*****.ip:45630
2020-06-06T11:02:53.913+0800 I REPL [replexec-0]
2020-06-06T11:02:53.913+0800 I REPL [replexec-0] ** WARNING: This replica set has a Primary-Secondary-Arbiter architecture, but readConcern:majority is enabled
2020-06-06T11:02:53.913+0800 I REPL [replexec-0] ** for this node. This is not a recommended configuration. Please see
2020-06-06T11:02:53.913+0800 I REPL [replexec-0] ** https://dochub.mongodb.org/core/psa-disable-rc-majority
2020-06-06T11:02:53.913+0800 I REPL [replexec-0]
2020-06-06T11:02:53.913+0800 I REPL [replexec-0] New replica set config in use: { _id: "mongo-copytrade3", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "mongo-copytrade3-pri.followme.space:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 10.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongo-copytrade3-sec.followme.space:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 5.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongo-****************:27017", arbiterOnly: true, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5eda6ea849ff46fdfdcfcfdd') } }
2020-06-06T11:02:53.913+0800 I REPL [replexec-0] This node is mongo-copytrade3-pri.followme.space:27017 in the config
2020-06-06T11:02:53.913+0800 I REPL [replexec-0] transition to STARTUP2 from STARTUP
2020-06-06T11:02:53.928+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45632 #4 (2 connections now open)
2020-06-06T11:02:53.928+0800 I REPL [replexec-0] Starting replication storage threads
2020-06-06T11:02:53.928+0800 I CONNPOOL [Replication] Connecting to mongo-copytrade3-sec.followme.space:27017
2020-06-06T11:02:53.928+0800 I CONNPOOL [Replication] Connecting to mongo-****************:27017
2020-06-06T11:02:53.928+0800 I NETWORK [conn4] received client metadata from 10.*****.ip:45632 conn4: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T11:02:53.930+0800 I REPL [replexec-0] transition to RECOVERING from STARTUP2
2020-06-06T11:02:53.930+0800 I REPL [replexec-0] Starting replication fetcher thread
2020-06-06T11:02:53.930+0800 I REPL [replexec-0] Starting replication applier thread
2020-06-06T11:02:53.930+0800 I REPL [replexec-0] Starting replication reporter thread
2020-06-06T11:02:53.930+0800 I REPL [rsSync-0] Starting oplog application
2020-06-06T11:02:53.930+0800 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
2020-06-06T11:02:53.931+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45634 #7 (3 connections now open)
2020-06-06T11:02:53.932+0800 I REPL [rsSync-0] Resetting sync source to empty, which was :27017
2020-06-06T11:02:53.932+0800 I REPL [rsBackgroundSync] waiting for 4 pings from other members before syncing
2020-06-06T11:02:53.932+0800 I NETWORK [conn7] received client metadata from 10.*****.ip:45634 conn7: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T11:02:53.947+0800 I ACCESS [conn4] Successfully authenticated as principal __system on local from client 10.*****.ip:45632
2020-06-06T11:02:53.950+0800 I ACCESS [conn7] Successfully authenticated as principal __system on local from client 10.*****.ip:45634
2020-06-06T11:02:54.006+0800 I REPL [replexec-1] Member mongo-copytrade3-sec.followme.space:27017 is now in state PRIMARY
2020-06-06T11:02:54.006+0800 I ELECTION [replexec-1] Scheduling priority takeover at 2020-06-06T11:03:04.567+0800
2020-06-06T11:02:54.007+0800 I REPL [replexec-1] Member mongo-****************:27017 is now in state ARBITER
2020-06-06T11:02:54.018+0800 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
2020-06-06T11:02:54.171+0800 I SHARDING [monitoring-keys-for-HMAC] Marking collection admin.system.keys as collection version: <unsharded>
2020-06-06T11:02:54.299+0800 I NETWORK [listener] connection accepted from 10.10.*.*ip:40164 #8 (4 connections now open)
2020-06-06T11:02:54.300+0800 I NETWORK [conn8] received client metadata from 10.10.*.*ip:40164 conn8: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T11:02:54.312+0800 I ACCESS [conn8] Successfully authenticated as principal __system on local from client 10.10.*.*ip:40164
2020-06-06T11:02:54.932+0800 I REPL [rsBackgroundSync] sync source candidate: mongo-copytrade3-sec.followme.space:27017
2020-06-06T11:02:54.932+0800 I CONNPOOL [RS] Connecting to mongo-copytrade3-sec.followme.space:27017
2020-06-06T11:02:54.946+0800 I REPL [rsBackgroundSync] Changed sync source from empty to mongo-copytrade3-sec.followme.space:27017
2020-06-06T11:02:54.948+0800 I SHARDING [rsSync-0] Marking collection local.replset.oplogTruncateAfterPoint as collection version: <unsharded>
2020-06-06T11:02:57.254+0800 I NETWORK [listener] connection accepted from 172.30.0.5:61421 #11 (5 connections now open)
2020-06-06T11:02:57.255+0800 I NETWORK [conn11] received client metadata from 172.30.0.5:61421 conn11: { application: { name: "Navicat" }, driver: { name: "mongoc", version: "1.14.0" }, os: { type: "Darwin", name: "macOS", version: "19.4.0", architecture: "x86_64" }, platform: "cfg=0x00d6a0e9 posix=200112 stdc=201112 CC=clang 7.0.0 (clang-700.1.76) CFLAGS="" LDFLAGS=""" }
2020-06-06T11:03:04.567+0800 I REPL [replexec-1] Canceling priority takeover callback
2020-06-06T11:03:04.567+0800 I ELECTION [replexec-1] Starting an election for a priority takeover
2020-06-06T11:03:04.567+0800 I ELECTION [replexec-1] conducting a dry run election to see if we could be elected. current term: 11
2020-06-06T11:03:04.567+0800 I REPL [replexec-1] Scheduling remote command request for vote request: RemoteCommand 29 -- target:mongo-copytrade3-sec.followme.space:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: "mongo-copytrade3", dryRun: true, term: 11, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp(1591412576, 1), t: 11 } }
2020-06-06T11:03:04.567+0800 I REPL [replexec-1] Scheduling remote command request for vote request: RemoteCommand 30 -- target:mongo-****************:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: "mongo-copytrade3", dryRun: true, term: 11, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp(1591412576, 1), t: 11 } }
2020-06-06T11:03:04.567+0800 I ELECTION [replexec-2] VoteRequester(term 11 dry run) received a yes vote from mongo-copytrade3-sec.followme.space:27017; response message: { term: 11, voteGranted: true, reason: "", ok: 1.0, $clusterTime: { clusterTime: Timestamp(1591412576, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1591412576, 1) }
2020-06-06T11:03:04.567+0800 I ELECTION [replexec-2] dry election run succeeded, running for election in term 12
2020-06-06T11:03:04.568+0800 I CONNPOOL [Replication] Ending connection to host mongo-****************:27017 due to bad connection status: CallbackCanceled: Callback was canceled; 0 connections to that host remain open
2020-06-06T11:03:04.568+0800 I CONNPOOL [Replication] Connecting to mongo-****************:27017
2020-06-06T11:03:04.568+0800 I REPL [replexec-2] Scheduling remote command request for vote request: RemoteCommand 31 -- target:mongo-copytrade3-sec.followme.space:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: "mongo-copytrade3", dryRun: false, term: 12, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp(1591412576, 1), t: 11 } }
2020-06-06T11:03:04.568+0800 I REPL [replexec-2] Scheduling remote command request for vote request: RemoteCommand 32 -- target:mongo-****************:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: "mongo-copytrade3", dryRun: false, term: 12, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp(1591412576, 1), t: 11 } }
2020-06-06T11:03:04.570+0800 I ELECTION [replexec-0] VoteRequester(term 12) received a yes vote from mongo-copytrade3-sec.followme.space:27017; response message: { term: 12, voteGranted: true, reason: "", ok: 1.0, $clusterTime: { clusterTime: Timestamp(1591412576, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1591412576, 1) }
2020-06-06T11:03:04.570+0800 I ELECTION [replexec-3] election succeeded, assuming primary role in term 12
2020-06-06T11:03:04.570+0800 I REPL [replexec-3] transition to PRIMARY from SECONDARY
2020-06-06T11:03:04.570+0800 I REPL [replexec-3] Resetting sync source to empty, which was mongo-copytrade3-sec.followme.space:27017
2020-06-06T11:03:04.570+0800 I REPL [replexec-3] Entering primary catch-up mode.
2020-06-06T11:03:04.631+0800 I REPL [replexec-3] Member mongo-copytrade3-sec.followme.space:27017 is now in state SECONDARY
2020-06-06T11:03:04.694+0800 I REPL [replexec-4] Caught up to the latest optime known via heartbeats after becoming primary. Target optime: { ts: Timestamp(1591412576, 1), t: 11 }. My Last Applied: { ts: Timestamp(1591412576, 1), t: 11 }
2020-06-06T11:03:04.694+0800 I REPL [replexec-4] Exited primary catch-up mode.
2020-06-06T11:03:04.694+0800 I REPL [replexec-4] Stopping replication producer
2020-06-06T11:03:04.694+0800 I CONNPOOL [replexec-4] Ending connection to host mongo-copytrade3-sec.followme.space:27017 due to bad connection status: CallbackCanceled: Callback was canceled; 1 connections to that host remain open
2020-06-06T11:03:04.694+0800 I REPL [rsBackgroundSync] Replication producer stopped after oplog fetcher finished returning a batch from our sync source. Abandoning this batch of oplog entries and re-evaluating our sync source.
2020-06-06T11:03:04.694+0800 I REPL [ReplBatcher] Oplog buffer has been drained in term 12
2020-06-06T11:03:04.695+0800 I REPL [RstlKillOpThread] Starting to kill user operations
2020-06-06T11:03:04.695+0800 I REPL [RstlKillOpThread] Stopped killing user operations
2020-06-06T11:03:04.695+0800 I REPL [RstlKillOpThread] State transition ops metrics: { lastStateTransition: "stepUp", userOpsKilled: 0, userOpsRunning: 0 }
2020-06-06T11:03:04.695+0800 I REPL [rsSync-0] transition to primary complete; database writes are now permitted
2020-06-06T11:03:06.123+0800 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to mongo-copytrade3-sec.followme.space:27017: InvalidSyncSource: Sync source was cleared. Was mongo-copytrade3-sec.followme.space:27017
2020-06-06T11:03:06.572+0800 I NETWORK [listener] connection accepted from 10.*****.ip:45638 #14 (6 connections now open)
2020-06-06T11:03:06.572+0800 I NETWORK [conn14] received client metadata from 10.*****.ip:45638 conn14: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T11:03:06.584+0800 I ACCESS [conn14] Successfully authenticated as principal __system on local from client 10.*****.ip:45638
2020-06-06T11:03:15.122+0800 I NETWORK [listener] connection accepted from 10.10.*.*ip:40172 #15 (7 connections now open)
2020-06-06T11:03:15.122+0800 I NETWORK [conn15] received client metadata from 10.10.*.*ip:40172 conn15: { driver: { name: "NetworkInterfaceTL", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T11:03:15.145+0800 I ACCESS [conn15] Successfully authenticated as principal __system on local from client 10.10.*.*ip:40172
2020-06-06T11:03:15.147+0800 I NETWORK [listener] connection accepted from 10.10.*.*ip:40174 #16 (8 connections now open)
2020-06-06T11:03:15.147+0800 I NETWORK [conn16] received client metadata from 10.10.*.*ip:40174 conn16: { driver: { name: "MongoDB Internal Client", version: "4.2.7" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "18.04" } }
2020-06-06T11:03:15.211+0800 I ACCESS [conn16] Successfully authenticated as principal __system on local from client 10.10.*.*ip:40174
2020-06-06T11:03:15.211+0800 I NETWORK [conn16] end connection 10.10.*.*ip:40174 (7 connections now open)
2020-06-06T11:04:04.694+0800 I CONNPOOL [Replication] Ending idle connection to host mongo-****************:27017 because the pool meets constraints; 1 connections to that host remain open