  Core Server / SERVER-32133

After a relaunch of mongod the server displays an empty database

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.2.6
    • Component/s: WiredTiger
    • Labels:
    • Environment:
      Windows

      After a relaunch of mongod 3.2.6, the server displayed an empty database.
      We have a copy of the data files from after this launch (with the database already empty).
      Although the database appears empty, the space it occupies on disk has not changed.
      Further work with the database behaves correctly relative to its empty state.
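
      For reference, a minimal way to double-check this kind of symptom from the Windows command line (port and dbpath as in the log below; the exact commands are illustrative, not what we originally ran):

      rem list databases and the collections of the "case" database as the server reports them
      mongo.exe --port 14077 --eval "printjson(db.adminCommand({ listDatabases: 1 }))"
      mongo.exe --port 14077 case --eval "printjson(db.getCollectionNames())"
      rem compare with the on-disk size of the dbpath, which is unchanged
      dir Data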

      Perhaps the previous session did not shut down correctly; its log ends after "WiredTigerKVEngine shutting down", without the usual "dbexit: rc: 0" line:

      2017-11-28T10:47:42.260+0300 I CONTROL  [initandlisten] MongoDB starting : pid=552 port=14077 dbpath=Data 64-bit host=Evgenii
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] db version v3.2.6
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] git version: 05552b562c7a0b3143a729aaa0838e558dc49b25
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] allocator: tcmalloc
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] modules: none
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] build environment:
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten]     distmod: 2008plus-ssl
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten]     distarch: x86_64
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten]     target_arch: x86_64
      2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] options: { net: { port: 14077 }, storage: { dbPath: "Data", engine: "wiredTiger" }, systemLog: { destination: "file", path: "Data/Logs/logs.2017-11-28T10-47-42" } }
      2017-11-28T10:47:42.263+0300 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
      2017-11-28T10:47:42.939+0300 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
      2017-11-28T10:47:42.939+0300 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory 'Data/diagnostic.data'
      2017-11-28T10:47:42.940+0300 I NETWORK  [initandlisten] waiting for connections on port 14077
      2017-11-28T10:47:43.179+0300 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:51051 #1 (1 connection now open)
      2017-11-28T11:06:19.886+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d18f7f6488f059de0d159') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 102ms
      2017-11-28T11:06:19.886+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d18f7f6488f059de0d159') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 103ms
      2017-11-28T11:06:20.104+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d18f7f6488f059de0d159') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 113ms
      2017-11-28T11:06:20.104+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d18f7f6488f059de0d159') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 113ms
      2017-11-28T11:07:08.330+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d1928f6488f059de0d15a') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 112ms
      2017-11-28T11:07:08.330+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d1928f6488f059de0d15a') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 112ms
      2017-11-28T11:07:08.698+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d1928f6488f059de0d15a') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 108ms
      2017-11-28T11:07:08.698+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d1928f6488f059de0d15a') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 108ms
      2017-11-28T11:07:09.682+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d1928f6488f059de0d15a') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 107ms
      2017-11-28T11:07:09.682+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d1928f6488f059de0d15a') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 107ms
      2017-11-28T11:08:27.528+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d1979f6488f059de0d15c') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 103ms
      2017-11-28T11:08:27.528+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d1979f6488f059de0d15c') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 103ms
      2017-11-28T11:09:33.050+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d19b9f6488f059de0d15d') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 197ms
      2017-11-28T11:09:33.050+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d19b9f6488f059de0d15d') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 198ms
      2017-11-28T11:16:23.332+0300 I COMMAND  [conn1] terminating, shutdown command received
      2017-11-28T11:16:23.332+0300 I FTDC     [conn1] Shutting down full-time diagnostic data capture
      2017-11-28T11:16:23.334+0300 I CONTROL  [conn1] now exiting
      2017-11-28T11:16:23.334+0300 I NETWORK  [conn1] shutdown: going to close listening sockets...
      2017-11-28T11:16:23.334+0300 I NETWORK  [conn1] closing listening socket: 560
      2017-11-28T11:16:23.334+0300 I NETWORK  [conn1] shutdown: going to flush diaglog...
      2017-11-28T11:16:23.334+0300 I NETWORK  [conn1] shutdown: going to close sockets...
      2017-11-28T11:16:23.334+0300 I STORAGE  [conn1] WiredTigerKVEngine shutting down
      

      Running mongod with the --repair option did not recover the data (the repair ran to completion, but only case.settings and local.startup_log were found):

      C:\REPOS\LIBRARIES\mongo-cxx-1.1.2\Win64\exec>mongod.exe --dbpath "C:\Users\Zoloto\AppData\Roaming\P-Art\Database" --port 14077 --storageEngine wiredTiger --repair
      2017-12-01T14:00:16.286+0300 I CONTROL  [initandlisten] MongoDB starting : pid=15172 port=14077 dbpath=C:\Users\Zoloto\AppData\Roaming\P-Art\Database 64-bit host=ZOLOTOV_PC
      2017-12-01T14:00:16.287+0300 I CONTROL  [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
      2017-12-01T14:00:16.291+0300 I CONTROL  [initandlisten] db version v3.2.6
      2017-12-01T14:00:16.294+0300 I CONTROL  [initandlisten] git version: 05552b562c7a0b3143a729aaa0838e558dc49b25
      2017-12-01T14:00:16.294+0300 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015
      2017-12-01T14:00:16.295+0300 I CONTROL  [initandlisten] allocator: tcmalloc
      2017-12-01T14:00:16.295+0300 I CONTROL  [initandlisten] modules: none
      2017-12-01T14:00:16.296+0300 I CONTROL  [initandlisten] build environment:
      2017-12-01T14:00:16.296+0300 I CONTROL  [initandlisten]     distmod: 2008plus-ssl
      2017-12-01T14:00:16.296+0300 I CONTROL  [initandlisten]     distarch: x86_64
      2017-12-01T14:00:16.296+0300 I CONTROL  [initandlisten]     target_arch: x86_64
      2017-12-01T14:00:16.297+0300 I CONTROL  [initandlisten] options: { net: { port: 14077 }, repair: true, storage: { dbPath: "C:\Users\Zoloto\AppData\Roaming\P-Art\Database", engine: "wiredTiger" } }
      2017-12-01T14:00:16.299+0300 I STORAGE  [initandlisten] Detected WT journal files.  Running recovery from last checkpoint.
      2017-12-01T14:00:16.304+0300 I STORAGE  [initandlisten] journal to nojournal transition config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
      2017-12-01T14:00:16.354+0300 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),
      2017-12-01T14:00:16.385+0300 I STORAGE  [initandlisten] Repairing size cache
      2017-12-01T14:00:16.387+0300 I STORAGE  [initandlisten] Verify succeeded on uri table:sizeStorer. Not salvaging.
      2017-12-01T14:00:16.390+0300 I STORAGE  [initandlisten] Repairing catalog metadata
      2017-12-01T14:00:16.394+0300 I STORAGE  [initandlisten] Verify succeeded on uri table:_mdb_catalog. Not salvaging.
      2017-12-01T14:00:16.409+0300 I STORAGE  [initandlisten] repairDatabase case
      2017-12-01T14:00:16.409+0300 I STORAGE  [initandlisten] Repairing collection case.settings
      2017-12-01T14:00:16.414+0300 I STORAGE  [initandlisten] Verify succeeded on uri table:collection-2--1109448600568661957. Not salvaging.
      2017-12-01T14:00:16.430+0300 I INDEX    [initandlisten] build index on: case.settings properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "case.settings" }
      2017-12-01T14:00:16.430+0300 I INDEX    [initandlisten]          building index using bulk method
      2017-12-01T14:00:16.446+0300 I STORAGE  [initandlisten] repairDatabase local
      2017-12-01T14:00:16.446+0300 I STORAGE  [initandlisten] Repairing collection local.startup_log
      2017-12-01T14:00:16.451+0300 I STORAGE  [initandlisten] Verify succeeded on uri table:collection-0--1109448600568661957. Not salvaging.
      2017-12-01T14:00:16.472+0300 I INDEX    [initandlisten] build index on: local.startup_log properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }
      2017-12-01T14:00:16.472+0300 I INDEX    [initandlisten]          building index using bulk method
      2017-12-01T14:00:16.491+0300 I STORAGE  [initandlisten] finished checking dbs
      2017-12-01T14:00:16.491+0300 I CONTROL  [initandlisten] now exiting
      2017-12-01T14:00:16.492+0300 I NETWORK  [initandlisten] shutdown: going to close listening sockets...
      2017-12-01T14:00:16.495+0300 I NETWORK  [initandlisten] shutdown: going to flush diaglog...
      2017-12-01T14:00:16.495+0300 I NETWORK  [initandlisten] shutdown: going to close sockets...
      2017-12-01T14:00:16.504+0300 I STORAGE  [initandlisten] WiredTigerKVEngine shutting down
      2017-12-01T14:00:16.530+0300 I STORAGE  [initandlisten] shutdown: removing fs lock...
      2017-12-01T14:00:16.531+0300 I CONTROL  [initandlisten] dbexit:  rc: 0
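
      Since --repair rewrites the data files in place, the attempt can be repeated safely against a copy of the dbpath. A sketch (the destination path is illustrative):

      rem copy the whole dbpath, then repair the copy rather than the original files
      robocopy "C:\Users\Zoloto\AppData\Roaming\P-Art\Database" "C:\Temp\Database-repair-copy" /E
      mongod.exe --dbpath "C:\Temp\Database-repair-copy" --port 14077 --storageEngine wiredTiger --repair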
      

      To restore the data, we built WiredTiger 2.9.3 (with Snappy 1.1.1.8).
      On advice, we tried to run wt with the salvage command against each *.wt file.
      This also did not succeed.
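
      The exact script is not attached; a batch-file sketch of that kind of per-file loop (flags and URIs illustrative) is:

      rem run wt salvage on every .wt file in the dbpath and echo a header per file
      cd /d "C:\Users\Zoloto\AppData\Roaming\P-Art\Database"
      for %%f in (*.wt) do (
          echo ***** %%f *****
          wt.exe -v salvage file:%%f
          echo.
          echo.
      )

      The per-file output was: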

      ***** collection-0--1109448600568661957.wt *****
      
      
      
      ***** collection-0--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-0--4563383296172894413.wt: No such file or directory
      
      
      ***** collection-0--7728738330246642581.wt *****
      wt.exe: session.salvage: file:collection-0--7728738330246642581.wt: No such file or directory
      
      
      ***** collection-0-1204684021656353242.wt *****
      wt.exe: session.salvage: file:collection-0-1204684021656353242.wt: No such file or directory
      
      
      ***** collection-0-160758265150124333.wt *****
      wt.exe: session.salvage: file:collection-0-160758265150124333.wt: No such file or directory
      
      
      ***** collection-0-2369900529422110509.wt *****
      wt.exe: session.salvage: file:collection-0-2369900529422110509.wt: No such file or directory
      
      
      ***** collection-0-8225139244052626311.wt *****
      wt.exe: session.salvage: file:collection-0-8225139244052626311.wt: No such file or directory
      
      
      ***** collection-11--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-11--4563383296172894413.wt: No such file or directory
      
      
      ***** collection-14--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-14--4563383296172894413.wt: No such file or directory
      
      
      ***** collection-16--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-16--4563383296172894413.wt: No such file or directory
      
      
      ***** collection-18--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-18--4563383296172894413.wt: No such file or directory
      
      
      ***** collection-2--1109448600568661957.wt *****
      
      
      
      ***** collection-2--4563383296172894413.wt *****
      
      
      
      ***** collection-20--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-20--4563383296172894413.wt: No such file or directory
      
      
      ***** collection-22--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-22--4563383296172894413.wt: No such file or directory
      
      
      ***** collection-4--4563383296172894413.wt *****
              WT_SESSION.salvage 1900
      
      
      ***** collection-6--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-6--4563383296172894413.wt: No such file or directory
      
      
      ***** collection-8--4563383296172894413.wt *****
      wt.exe: session.salvage: file:collection-8--4563383296172894413.wt: No such file or directory
      
      
      ***** index-0-3763645043796422516.wt *****
      
      
      
      ***** index-1--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-1--4563383296172894413.wt: No such file or directory
      
      
      ***** index-1--7728738330246642581.wt *****
      wt.exe: session.salvage: file:index-1--7728738330246642581.wt: No such file or directory
      
      
      ***** index-1-1204684021656353242.wt *****
      wt.exe: session.salvage: file:index-1-1204684021656353242.wt: No such file or directory
      
      
      ***** index-1-160758265150124333.wt *****
      wt.exe: session.salvage: file:index-1-160758265150124333.wt: No such file or directory
      
      
      ***** index-1-2369900529422110509.wt *****
      wt.exe: session.salvage: file:index-1-2369900529422110509.wt: No such file or directory
      
      
      ***** index-1-3763645043796422516.wt *****
      
      
      
      ***** index-1-8225139244052626311.wt *****
      wt.exe: session.salvage: file:index-1-8225139244052626311.wt: No such file or directory
      
      
      ***** index-10--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-10--4563383296172894413.wt: No such file or directory
      
      
      ***** index-12--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-12--4563383296172894413.wt: No such file or directory
      
      
      ***** index-13--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-13--4563383296172894413.wt: No such file or directory
      
      
      ***** index-15--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-15--4563383296172894413.wt: No such file or directory
      
      
      ***** index-17--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-17--4563383296172894413.wt: No such file or directory
      
      
      ***** index-19--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-19--4563383296172894413.wt: No such file or directory
      
      
      ***** index-21--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-21--4563383296172894413.wt: No such file or directory
      
      
      ***** index-23--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-23--4563383296172894413.wt: No such file or directory
      
      
      ***** index-3--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-3--4563383296172894413.wt: No such file or directory
      
      
      ***** index-5--4563383296172894413.wt *****
      
      
      
      ***** index-7--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-7--4563383296172894413.wt: No such file or directory
      
      
      ***** index-9--4563383296172894413.wt *****
      wt.exe: session.salvage: file:index-9--4563383296172894413.wt: No such file or directory
      
      
      ***** sizeStorer.wt *****
      
      
      
      ***** WiredTiger.wt *****
      wt.exe: session.salvage: file:WiredTiger.wt: Resource device
      
      
      ***** WiredTigerLAS.wt *****
      wt.exe: session.salvage: file:WiredTigerLAS.wt: Resource device
      
      
      ***** _mdb_catalog.wt *****
      
      

      Is there any way to restore the data?
      It seems that only some journal data was corrupted, while the data itself remained untouched.
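
      One way to check that hypothesis with the same locally built wt.exe (the -R flag requests log recovery on open; invocation is a sketch) would be to list which tables the surviving WiredTiger metadata still references:

      rem list the objects recorded in WiredTiger.wt; missing collection-* entries would explain the empty database
      wt.exe -R -h "C:\Users\Zoloto\AppData\Roaming\P-Art\Database" list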

      We can send a copy of the data, but we would prefer not to publish it publicly.

      Attachments:
        1. logs.2017-11-28T10-47-42
          9 kB
        2. logs.rar
          18 kB
        3. mongo-repair_trace
          4 kB
        4. repair-SERVER-32133.tar.gz
          2 kB
        5. WiredTiger.turtle
          0.9 kB
        6. WiredTiger.wt
          52 kB
        7. wt-salvage_trace
          5 kB

            Assignee: Mark Agarunov
            Reporter: Azriel Zolotov Pavel
            Votes: 0
            Watchers: 5
