Core Server / SERVER-42050

problem upgrading from 3.6.10 to 4.0.9

    • Type: Bug
    • Resolution: Incomplete
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: Admin
    • Environment: CentOS 7.4
    • Operating System: ALL

      I was trying to upgrade a MongoDB replica set from version 3.2.12 to 4.0.9. I managed to upgrade all of the servers, one at a time, to 3.6.10, but had problems upgrading most of them to 4.0.9.
      The procedure I followed for the upgrade to 3.6.10 was to upgrade each secondary, fail over, upgrade the final server, and then change the compatibility level. Between each upgrade I made sure that all the replicas were working fine, and I checked that the compatibility level was correct after each step (sketched below).
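      A minimal sketch of the compatibility-level check and change, using the standard featureCompatibilityVersion commands (not my exact session):

      # check the current featureCompatibilityVersion on a member
      mongo --eval 'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })'
      # once every member runs 3.6.10, raise the compatibility level on the primary
      mongo --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })'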
      While trying to upgrade the second server in the replica set, I got this error message:

      2019-07-02T13:46:15.104+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
      2019-07-02T13:46:15.104+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
      2019-07-02T13:46:15.104+0000 I CONTROL [initandlisten]
      2019-07-02T13:46:15.104+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
      2019-07-02T13:46:15.104+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
      2019-07-02T13:46:15.104+0000 I CONTROL [initandlisten]
      2019-07-02T13:46:15.301+0000 F CONTROL [initandlisten] ** IMPORTANT: UPGRADE PROBLEM: The data files need to be fully upgraded to version 3.6 before attempting an upgrade to 4.0; see http://dochub.mongodb.org/core/4.0-upgrade-fcv for more details.
      2019-07-02T13:46:15.312+0000 I NETWORK [initandlisten] shutdown: going to close listening sockets...
      2019-07-02T13:46:15.312+0000 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock
      2019-07-02T13:46:15.312+0000 I REPL [initandlisten] shutting down replication subsystems
      2019-07-02T13:46:15.312+0000 W REPL [initandlisten] ReplicationCoordinatorImpl::shutdown() called before startup() finished. Shutting down without cleaning up the replication system
      2019-07-02T13:46:15.312+0000 I STORAGE [WTOplogJournalThread] oplog journal thread loop shutting down
      2019-07-02T13:46:15.313+0000 I STORAGE [initandlisten] WiredTigerKVEngine shutting down
      2019-07-02T13:46:15.314+0000 I STORAGE [initandlisten] Shutting down session sweeper thread
      2019-07-02T13:46:15.314+0000 I STORAGE [initandlisten] Finished shutting down session sweeper thread
      2019-07-02T13:46:15.329+0000 I STORAGE [initandlisten] Downgrading WiredTiger datafiles.
      2019-07-02T13:46:15.473+0000 I STORAGE [initandlisten] WiredTiger message [1562075175:473466][4603:0x7fe667aadb80], txn-recover: Main recovery loop: starting at 38514/118528 to 38515/256
      2019-07-02T13:46:15.566+0000 I STORAGE [initandlisten] WiredTiger message [1562075175:566631][4603:0x7fe667aadb80], txn-recover: Recovering log 38514 through 38515
      2019-07-02T13:46:15.622+0000 I STORAGE [initandlisten] WiredTiger message [1562075175:622052][4603:0x7fe667aadb80], txn-recover: Recovering log 38515 through 38515
      2019-07-02T13:46:15.668+0000 I STORAGE [initandlisten] WiredTiger message [1562075175:668505][4603:0x7fe667aadb80], txn-recover: Set global recovery timestamp: 0
      2019-07-02T13:46:15.910+0000 I STORAGE [initandlisten] shutdown: removing fs lock...
      2019-07-02T13:46:15.910+0000 I CONTROL [initandlisten] now exiting
      2019-07-02T13:46:15.910+0000 I CONTROL [initandlisten] shutting down with code:62
      2019-07-02T13:46:15.942+0000 I CONTROL [main] ***** SERVER RESTARTED *****
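      As a side note, the transparent_hugepage warnings at the top of the log are unrelated to the failure; they clear once both THP settings are 'never', e.g. (as root):

      echo never > /sys/kernel/mm/transparent_hugepage/enabled
      echo never > /sys/kernel/mm/transparent_hugepage/defrag

      The actual failure is the F CONTROL "UPGRADE PROBLEM" line, after which the node shuts down with code 62.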

      I thought that the problem was with syncing, so I downgraded the node to 3.6.10 and made sure that there was no replication lag, but when I tried to install 4.0.9 again I got the same error.
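      A rough sketch of the lag check I mean, using the built-in shell helpers against the primary:

      # how far each secondary is behind the primary
      mongo --eval 'rs.printSlaveReplicationInfo()'
      # full member states and optimes for a closer look
      mongo --eval 'rs.status()'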

      I tried to start mongod with --repair, but it failed twice, so my only option is to run an initial sync on those problematic servers, one at a time (roughly as sketched below).
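      For completeness, this is roughly what I mean by the two options; the dbPath (/var/lib/mongo) is just the CentOS package default and may differ:

      # repair attempt (with the mongod process stopped)
      mongod --dbpath /var/lib/mongo --repair

      # forced initial sync: move the data files aside and let the member resync from the replica set
      systemctl stop mongod
      mv /var/lib/mongo /var/lib/mongo.bak
      mkdir /var/lib/mongo && chown mongod:mongod /var/lib/mongo
      systemctl start mongod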

      I would appreciate some help.
      Thanks

            Assignee:
            Danny Hatcher (daniel.hatcher@mongodb.com) (Inactive)
            Reporter:
            Lior Altarescu (lioral)
            Votes:
            0
            Watchers:
            5
