  Core Server / SERVER-74345

mongodb-org-server 4.4.19, 5.0.15, 6.0.5 not starting after upgrading from older version (Debian, RPM Packages)

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical - P2
    • Fix Version/s: 7.0.0-rc0, 4.4.20, 5.0.16, 6.0.6
    • Affects Version/s: 4.4.19, 5.0.15, 6.0.5
    • Component/s: None
    • Labels: None
    • Environment:
      Fedora 37, x64 (libvirt VM) & arm64 (Parallels VM)
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL
    • Steps to Reproduce:
      • Install mongodb 5.0.15 via RPM on Fedora/RHEL/CentOS
      • Try to start with `systemctl start mongod`
      • Systemd unit will fail to start

      Issue Status as of Mar 2, 2023

      ISSUE DESCRIPTION AND AFFECTED VERSIONS
      This issue in MongoDB 4.4.19, 5.0.15, 6.0.5 causes existing installations of MongoDB using Debian and RPM packages to fail to start up after upgrading to one of these versions, if "fork: true" is set in their "mongod.conf" file. Please note that this issue does not affect users who installed MongoDB using the tarball (tgz) release.
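      A quick way to tell whether an installation is in the affected configuration is to check the config file for an enabled "fork" setting. The sketch below uses a sample file for illustration; on a real host, point CONF at /etc/mongod.conf instead:

      ```shell
      # Sketch: detect whether a mongod config enables forking.
      # CONF here is a sample path for illustration; use /etc/mongod.conf on a real host.
      CONF=/tmp/mongod-sample.conf
      printf 'processManagement:\n   fork: true\n' > "$CONF"

      if grep -Eq '^[[:space:]]*fork:[[:space:]]*true' "$CONF"; then
          echo "fork is enabled in $CONF"      # affected by this issue on 4.4.19/5.0.15/6.0.5
      else
          echo "fork is not enabled in $CONF"
      fi
      ```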

      The issue is caused by backporting SERVER-38232 to the affected versions. The changes interact unexpectedly with services installed by Debian and RPM packages, requiring users to modify the configuration to align with these changes.

      SERVER-74845 fixes the issue.

      IMPACT
      For users who have already installed MongoDB using Debian and RPM packages and have upgraded to MongoDB versions 4.4.19, 5.0.15, or 6.0.5, the primary impact of this issue is that mongod is unable to start up.

      Here is an example error message indicating that mongod was killed by systemd (PID 1) during startup:

      {"t":{"$date":"2023-02-24T12:23:00.000+01:00"},"s":"I",  "c":"CONTROL",  "id":23378,   "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":1,"uid":0}}
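      To confirm this symptom in an existing log, you can search for that log id (23378) together with a sending pid of 1, which indicates systemd delivered the signal. A minimal sketch, using a sample log file in place of the real /var/log/mongodb/mongod.log:

      ```shell
      # Sketch: look for the "killed by PID 1" shutdown entry in a mongod log.
      # LOG is a sample path for illustration; use your configured systemLog.path on a real host.
      LOG=/tmp/mongod-sample.log
      cat > "$LOG" <<'EOF'
      {"t":{"$date":"2023-02-24T12:23:00.000+01:00"},"s":"I","c":"CONTROL","id":23378,"ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":1,"uid":0}}
      EOF

      # Log id 23378 with attr.pid 1 means PID 1 (systemd) sent the kill signal.
      if grep -q '"id":23378' "$LOG" && grep -q '"pid":1' "$LOG"; then
          echo "mongod was killed by PID 1 -- matches the symptom in this ticket"
      fi
      ```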

      We have reverted the incompatible systemd changes in versions 4.4.20, 5.0.16, and 6.0.6. To stay updated on the progress of the change reversion, please track SERVER-74506 and be sure to review the release notes before upgrading.

      REMEDIATION
      The recommended remediation is to remove the "fork: true" setting from the MongoDB configuration file (i.e., "/etc/mongod.conf"). To do so, execute the following commands (as root if necessary):

      systemctl stop mongod.service
      sed -i.bak '/fork: true/d' /etc/mongod.conf
      systemctl start mongod.service 
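      The `sed -i.bak` invocation deletes the "fork: true" line in place and leaves the original behind as "/etc/mongod.conf.bak". If you want to rehearse it before touching the live config, the sketch below runs the same command against a sample copy (paths here are illustrative):

      ```shell
      # Sketch: rehearse the remediation sed on a sample file before editing
      # /etc/mongod.conf itself.
      SAMPLE=/tmp/mongod-remediation-test.conf
      printf 'processManagement:\n   fork: true\n   pidFilePath: "/var/run/mongodb/mongod.pid"\n' > "$SAMPLE"

      sed -i.bak '/fork: true/d' "$SAMPLE"     # same command as in the remediation

      grep -c 'fork: true' "$SAMPLE" || true   # prints 0: the line is gone
      ls "$SAMPLE.bak"                         # backup of the original is kept
      ```

      Note that only the "fork: true" line is removed; the rest of the processManagement block (e.g. pidFilePath) is left untouched.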

      If you encountered this issue and applied the remediation above, note that further action may be required after upgrading to a version that includes the reversion. Carefully review the release notes before proceeding with an upgrade.


      Original Content

      Hello, we are using the RPM repository for RHEL 8 (https://repo.mongodb.org/yum/redhat/8/mongodb-org/5.0/$basearch/) to install the mongodb-org-server package in our development environments running Fedora 37. However, starting with today's release of 5.0.15, these boxes began failing because the mongod server could not be started, on both arm64 and x64 CPUs. Rolling back to 5.0.14 fixed the issue immediately.

      Our config is very basic and has been working since 3.6:

      net:
         bindIp: 0.0.0.0
         port: 27017
      processManagement:
         fork: true
         pidFilePath: "/var/run/mongodb/mongod.pid"
      systemLog:
         destination: file
         path: "/var/log/mongodb/mongod.log"
         logAppend: true
      storage:
         dbPath: "/var/lib/mongo"
         journal:
            enabled: true
      

      Journalctl output looks like this:

      Feb 24 08:33:43 hostname systemd[1]: Started mongod.service - MongoDB Database Server.
      Feb 24 08:33:43 hostname mongod[12829]: about to fork child process, waiting until server is ready for connections.
      Feb 24 08:33:43 hostname mongod[12831]: forked process: 12831
      Feb 24 08:33:44 hostname mongod[12829]: child process started successfully, parent exiting
      Feb 24 08:33:44 hostname systemd[1]: mongod.service: Deactivated successfully.
      Feb 24 08:33:44 hostname systemd[1]: Started mongod.service - MongoDB Database Server.
      Feb 24 08:33:44 hostname mongod[12874]: about to fork child process, waiting until server is ready for connections.
      Feb 24 08:33:44 hostname mongod[12876]: forked process: 12876
      Feb 24 08:33:45 hostname mongod[12874]: child process started successfully, parent exiting
      Feb 24 08:33:45 hostname systemd[1]: mongod.service: Deactivated successfully.
      Feb 24 08:33:45 hostname systemd[1]: Started mongod.service - MongoDB Database Server.
      Feb 24 08:33:45 hostname mongod[12945]: about to fork child process, waiting until server is ready for connections.
      Feb 24 08:33:45 hostname mongod[12947]: forked process: 12947
      Feb 24 08:33:46 hostname mongod[12945]: child process started successfully, parent exiting
      Feb 24 08:33:46 hostname systemd[1]: mongod.service: Deactivated successfully.
      Feb 24 08:33:46 hostname systemd[1]: Started mongod.service - MongoDB Database Server.
      Feb 24 08:33:46 hostname mongod[12995]: about to fork child process, waiting until server is ready for connections.
      Feb 24 08:33:46 hostname mongod[12997]: forked process: 12997
      Feb 24 08:33:47 hostname mongod[12995]: child process started successfully, parent exiting
      Feb 24 08:33:47 hostname systemd[1]: mongod.service: Deactivated successfully.
      

      And this is a typical restart loop in the mongod logs:

      {"t":{"$date":"2023-02-24T10:29:59.585+01:00"},"s":"I",  "c":"NETWORK",  "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
      {"t":{"$date":"2023-02-24T10:29:59.585+01:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
      {"t":{"$date":"2023-02-24T10:29:59.586+01:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
      {"t":{"$date":"2023-02-24T10:29:59.586+01:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
      {"t":{"$date":"2023-02-24T10:29:59.587+01:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
      {"t":{"$date":"2023-02-24T10:29:59.587+01:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}}
      {"t":{"$date":"2023-02-24T10:29:59.587+01:00"},"s":"I",  "c":"REPL",     "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}}
      {"t":{"$date":"2023-02-24T10:29:59.587+01:00"},"s":"I",  "c":"CONTROL",  "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
      {"t":{"$date":"2023-02-24T10:29:59.587+01:00"},"s":"I",  "c":"CONTROL",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":12347,"port":27017,"dbPath":"/var/lib/mongo","architecture":"64-bit","host":"philipp-devbox"}}
      {"t":{"$date":"2023-02-24T10:29:59.587+01:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.15","gitVersion":"935639beed3d0c19c2551c93854b831107c0b118","openSSLVersion":"OpenSSL 1.1.1q  FIPS 5 Jul 2022","modules":[],"allocator":"tcmalloc","environment":{"distmod":"rhel80","distarch":"x86_64","target_arch":"x86_64"}}}}
      {"t":{"$date":"2023-02-24T10:29:59.587+01:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Fedora release 37 (Thirty Seven)","version":"Kernel 6.1.11-200.fc37.x86_64"}}}
      {"t":{"$date":"2023-02-24T10:29:59.587+01:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/etc/mongod.conf","net":{"bindIp":"0.0.0.0","port":27017},"processManagement":{"fork":true,"pidFilePath":"/run/mongodb/mongod.pid"},"storage":{"dbPath":"/var/lib/mongo","journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"path":"/var/log/mongodb/mongod.log"}}}}
      {"t":{"$date":"2023-02-24T10:29:59.588+01:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/var/lib/mongo","storageEngine":"wiredTiger"}}
      {"t":{"$date":"2023-02-24T10:29:59.588+01:00"},"s":"I",  "c":"STORAGE",  "id":22297,   "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
      {"t":{"$date":"2023-02-24T10:29:59.588+01:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=7485M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
      {"t":{"$date":"2023-02-24T10:29:59.701+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1677230999:701444][12347:0x7f14b0409b40], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 15 through 16"}}
      {"t":{"$date":"2023-02-24T10:29:59.718+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1677230999:718142][12347:0x7f14b0409b40], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 16 through 16"}}
      {"t":{"$date":"2023-02-24T10:29:59.748+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1677230999:748663][12347:0x7f14b0409b40], txn-recover: [WT_VERB_RECOVERY_ALL] Main recovery loop: starting at 15/5504 to 16/256"}}
      {"t":{"$date":"2023-02-24T10:29:59.781+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1677230999:781829][12347:0x7f14b0409b40], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 15 through 16"}}
      {"t":{"$date":"2023-02-24T10:29:59.807+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1677230999:807176][12347:0x7f14b0409b40], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 16 through 16"}}
      {"t":{"$date":"2023-02-24T10:29:59.824+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1677230999:824406][12347:0x7f14b0409b40], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)"}}
      {"t":{"$date":"2023-02-24T10:29:59.824+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1677230999:824426][12347:0x7f14b0409b40], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 0)"}}
      {"t":{"$date":"2023-02-24T10:29:59.828+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1677230999:828185][12347:0x7f14b0409b40], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 141"}}
      {"t":{"$date":"2023-02-24T10:29:59.842+01:00"},"s":"I",  "c":"STORAGE",  "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":254}}
      {"t":{"$date":"2023-02-24T10:29:59.842+01:00"},"s":"I",  "c":"RECOVERY", "id":23987,   "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
      {"t":{"$date":"2023-02-24T10:29:59.844+01:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
      {"t":{"$date":"2023-02-24T10:29:59.851+01:00"},"s":"W",  "c":"CONTROL",  "id":22120,   "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
      {"t":{"$date":"2023-02-24T10:29:59.853+01:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}}
      {"t":{"$date":"2023-02-24T10:29:59.853+01:00"},"s":"I",  "c":"STORAGE",  "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
      {"t":{"$date":"2023-02-24T10:29:59.854+01:00"},"s":"I",  "c":"CONTROL",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
      {"t":{"$date":"2023-02-24T10:29:59.854+01:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/var/lib/mongo/diagnostic.data"}}
      {"t":{"$date":"2023-02-24T10:29:59.855+01:00"},"s":"I",  "c":"REPL",     "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}}
      {"t":{"$date":"2023-02-24T10:29:59.856+01:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
      {"t":{"$date":"2023-02-24T10:29:59.856+01:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
      {"t":{"$date":"2023-02-24T10:29:59.856+01:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"CONTROL",  "id":23377,   "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"CONTROL",  "id":23378,   "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":1,"uid":0}}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"CONTROL",  "id":23381,   "ctx":"SignalHandler","msg":"will terminate after current cmd ends"}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"REPL",     "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"REPL",     "id":4794602, "ctx":"SignalHandler","msg":"Attempting to enter quiesce mode"}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"COMMAND",  "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"CONTROL",  "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"NETWORK",  "id":20562,   "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"}
      {"t":{"$date":"2023-02-24T10:29:59.858+01:00"},"s":"I",  "c":"NETWORK",  "id":23017,   "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"CONTROL",  "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"CONTROL",  "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"REPL",     "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"REPL",     "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"-",        "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"-",        "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":3}}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"TENANT_M", "id":5093807, "ctx":"SignalHandler","msg":"Shutting down all TenantMigrationAccessBlockers on global shutdown"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"COMMAND",  "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"REPL",     "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"INDEX",    "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"REPL",     "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"REPL",     "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"COMMAND",  "id":4784923, "ctx":"SignalHandler","msg":"Shutting down the ServiceEntryPoint"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"CONTROL",  "id":4784925, "ctx":"SignalHandler","msg":"Shutting down free monitoring"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"CONTROL",  "id":20609,   "ctx":"SignalHandler","msg":"Shutting down free monitoring"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"CONTROL",  "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"CONTROL",  "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"INDEX",    "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"INDEX",    "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"CONTROL",  "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"CONTROL",  "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":22320,   "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":22321,   "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":22322,   "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":22323,   "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":20282,   "ctx":"SignalHandler","msg":"Deregistering all the collections"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":22261,   "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":22317,   "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":22318,   "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
      {"t":{"$date":"2023-02-24T10:29:59.859+01:00"},"s":"I",  "c":"STORAGE",  "id":22319,   "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
      {"t":{"$date":"2023-02-24T10:29:59.861+01:00"},"s":"I",  "c":"STORAGE",  "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
      {"t":{"$date":"2023-02-24T10:29:59.861+01:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1677230999:861901][12347:0x7f14b04086c0], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 4, snapshot max: 4 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 141"}}
      {"t":{"$date":"2023-02-24T10:29:59.887+01:00"},"s":"I",  "c":"STORAGE",  "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":26}}
      {"t":{"$date":"2023-02-24T10:29:59.887+01:00"},"s":"I",  "c":"STORAGE",  "id":22279,   "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
      {"t":{"$date":"2023-02-24T10:29:59.887+01:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
      {"t":{"$date":"2023-02-24T10:29:59.887+01:00"},"s":"I",  "c":"FTDC",     "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"}
      {"t":{"$date":"2023-02-24T10:29:59.887+01:00"},"s":"I",  "c":"FTDC",     "id":20626,   "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
      {"t":{"$date":"2023-02-24T10:29:59.887+01:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"SignalHandler","msg":"Now exiting"}
      {"t":{"$date":"2023-02-24T10:29:59.887+01:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
      

      I hope somebody can find the culprit!

            Assignee: Chris Kelly (chris.kelly@mongodb.com)
            Reporter: Philipp Trulson (philipp@trulson.de)
            Votes: 3
            Watchers: 31
