[SERVER-32133] After one relaunch of mongod the server displays an empty database Created: 01/Dec/17  Updated: 27/Jul/18  Resolved: 05/Dec/17

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: 3.2.6
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Zolotov Pavel Assignee: Mark Agarunov
Resolution: Done Votes: 0
Labels: envh, rns, rpu, trcf, wtc
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Windows


Attachments: WiredTiger.turtle, WiredTiger.wt, logs.2017-11-28T10-47-42, logs.rar, mongo-repair_trace (HTML), repair-SERVER-32133.tar.gz, wt-salvage_trace (HTML)
Operating System: Windows
Participants:

 Description   

After one relaunch of mongod 3.2.6, the server displayed an empty database.
We have a copy of the data taken after this launch (with the database already empty).
Although the database looks empty, the size it occupies on disk has not changed.
Further work with the database behaves correctly, relative to its empty state.
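For example, the on-disk size can be checked with a plain directory listing (a minimal check; "Data" is our dbpath):

C:\>dir /s "Data"

The summary at the end of the output reports the total bytes used under dbpath, and this total had not changed.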

Perhaps the previous session was not shut down correctly. Its log:

2017-11-28T10:47:42.260+0300 I CONTROL  [initandlisten] MongoDB starting : pid=552 port=14077 dbpath=Data 64-bit host=Evgenii
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] db version v3.2.6
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] git version: 05552b562c7a0b3143a729aaa0838e558dc49b25
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] allocator: tcmalloc
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] modules: none
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] build environment:
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten]     distmod: 2008plus-ssl
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten]     distarch: x86_64
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten]     target_arch: x86_64
2017-11-28T10:47:42.261+0300 I CONTROL  [initandlisten] options: { net: { port: 14077 }, storage: { dbPath: "Data", engine: "wiredTiger" }, systemLog: { destination: "file", path: "Data/Logs/logs.2017-11-28T10-47-42" } }
2017-11-28T10:47:42.263+0300 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-11-28T10:47:42.939+0300 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2017-11-28T10:47:42.939+0300 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory 'Data/diagnostic.data'
2017-11-28T10:47:42.940+0300 I NETWORK  [initandlisten] waiting for connections on port 14077
2017-11-28T10:47:43.179+0300 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:51051 #1 (1 connection now open)
2017-11-28T11:06:19.886+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d18f7f6488f059de0d159') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 102ms
2017-11-28T11:06:19.886+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d18f7f6488f059de0d159') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 103ms
2017-11-28T11:06:20.104+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d18f7f6488f059de0d159') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 113ms
2017-11-28T11:06:20.104+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d18f7f6488f059de0d159') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 113ms
2017-11-28T11:07:08.330+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d1928f6488f059de0d15a') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 112ms
2017-11-28T11:07:08.330+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d1928f6488f059de0d15a') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 112ms
2017-11-28T11:07:08.698+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d1928f6488f059de0d15a') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 108ms
2017-11-28T11:07:08.698+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d1928f6488f059de0d15a') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 108ms
2017-11-28T11:07:09.682+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d1928f6488f059de0d15a') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 107ms
2017-11-28T11:07:09.682+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d1928f6488f059de0d15a') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 107ms
2017-11-28T11:08:27.528+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d1979f6488f059de0d15c') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 103ms
2017-11-28T11:08:27.528+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d1979f6488f059de0d15c') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 103ms
2017-11-28T11:09:33.050+0300 I WRITE    [conn1] update case.axio query: { _id: ObjectId('5a1d19b9f6488f059de0d15d') } update: { $push: { data.record: { $each: [] } } } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:0 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 197ms
2017-11-28T11:09:33.050+0300 I COMMAND  [conn1] command case.$cmd command: update { update: "axio", updates: [ { q: { _id: ObjectId('5a1d19b9f6488f059de0d15d') }, u: { $push: { data.record: { $each: [] } } }, multi: false, upsert: false } ], ordered: true, writeConcern: {} } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 198ms
2017-11-28T11:16:23.332+0300 I COMMAND  [conn1] terminating, shutdown command received
2017-11-28T11:16:23.332+0300 I FTDC     [conn1] Shutting down full-time diagnostic data capture
2017-11-28T11:16:23.334+0300 I CONTROL  [conn1] now exiting
2017-11-28T11:16:23.334+0300 I NETWORK  [conn1] shutdown: going to close listening sockets...
2017-11-28T11:16:23.334+0300 I NETWORK  [conn1] closing listening socket: 560
2017-11-28T11:16:23.334+0300 I NETWORK  [conn1] shutdown: going to flush diaglog...
2017-11-28T11:16:23.334+0300 I NETWORK  [conn1] shutdown: going to close sockets...
2017-11-28T11:16:23.334+0300 I STORAGE  [conn1] WiredTigerKVEngine shutting down

Running mongod with the --repair flag did not succeed:

C:\REPOS\LIBRARIES\mongo-cxx-1.1.2\Win64\exec>mongod.exe --dbpath "C:\Users\Zoloto\AppData\Roaming\P-Art\Database" --port 14077 --storageEngine wiredTiger --repair
2017-12-01T14:00:16.286+0300 I CONTROL  [initandlisten] MongoDB starting : pid=15172 port=14077 dbpath=C:\Users\Zoloto\AppData\Roaming\P-Art\Database 64-bit host=ZOLOTOV_PC
2017-12-01T14:00:16.287+0300 I CONTROL  [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2017-12-01T14:00:16.291+0300 I CONTROL  [initandlisten] db version v3.2.6
2017-12-01T14:00:16.294+0300 I CONTROL  [initandlisten] git version: 05552b562c7a0b3143a729aaa0838e558dc49b25
2017-12-01T14:00:16.294+0300 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015
2017-12-01T14:00:16.295+0300 I CONTROL  [initandlisten] allocator: tcmalloc
2017-12-01T14:00:16.295+0300 I CONTROL  [initandlisten] modules: none
2017-12-01T14:00:16.296+0300 I CONTROL  [initandlisten] build environment:
2017-12-01T14:00:16.296+0300 I CONTROL  [initandlisten]     distmod: 2008plus-ssl
2017-12-01T14:00:16.296+0300 I CONTROL  [initandlisten]     distarch: x86_64
2017-12-01T14:00:16.296+0300 I CONTROL  [initandlisten]     target_arch: x86_64
2017-12-01T14:00:16.297+0300 I CONTROL  [initandlisten] options: { net: { port: 14077 }, repair: true, storage: { dbPath: "C:\Users\Zoloto\AppData\Roaming\P-Art\Database", engine: "wiredTiger" } }
2017-12-01T14:00:16.299+0300 I STORAGE  [initandlisten] Detected WT journal files.  Running recovery from last checkpoint.
2017-12-01T14:00:16.304+0300 I STORAGE  [initandlisten] journal to nojournal transition config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-12-01T14:00:16.354+0300 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),
2017-12-01T14:00:16.385+0300 I STORAGE  [initandlisten] Repairing size cache
2017-12-01T14:00:16.387+0300 I STORAGE  [initandlisten] Verify succeeded on uri table:sizeStorer. Not salvaging.
2017-12-01T14:00:16.390+0300 I STORAGE  [initandlisten] Repairing catalog metadata
2017-12-01T14:00:16.394+0300 I STORAGE  [initandlisten] Verify succeeded on uri table:_mdb_catalog. Not salvaging.
2017-12-01T14:00:16.409+0300 I STORAGE  [initandlisten] repairDatabase case
2017-12-01T14:00:16.409+0300 I STORAGE  [initandlisten] Repairing collection case.settings
2017-12-01T14:00:16.414+0300 I STORAGE  [initandlisten] Verify succeeded on uri table:collection-2--1109448600568661957. Not salvaging.
2017-12-01T14:00:16.430+0300 I INDEX    [initandlisten] build index on: case.settings properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "case.settings" }
2017-12-01T14:00:16.430+0300 I INDEX    [initandlisten]          building index using bulk method
2017-12-01T14:00:16.446+0300 I STORAGE  [initandlisten] repairDatabase local
2017-12-01T14:00:16.446+0300 I STORAGE  [initandlisten] Repairing collection local.startup_log
2017-12-01T14:00:16.451+0300 I STORAGE  [initandlisten] Verify succeeded on uri table:collection-0--1109448600568661957. Not salvaging.
2017-12-01T14:00:16.472+0300 I INDEX    [initandlisten] build index on: local.startup_log properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }
2017-12-01T14:00:16.472+0300 I INDEX    [initandlisten]          building index using bulk method
2017-12-01T14:00:16.491+0300 I STORAGE  [initandlisten] finished checking dbs
2017-12-01T14:00:16.491+0300 I CONTROL  [initandlisten] now exiting
2017-12-01T14:00:16.492+0300 I NETWORK  [initandlisten] shutdown: going to close listening sockets...
2017-12-01T14:00:16.495+0300 I NETWORK  [initandlisten] shutdown: going to flush diaglog...
2017-12-01T14:00:16.495+0300 I NETWORK  [initandlisten] shutdown: going to close sockets...
2017-12-01T14:00:16.504+0300 I STORAGE  [initandlisten] WiredTigerKVEngine shutting down
2017-12-01T14:00:16.530+0300 I STORAGE  [initandlisten] shutdown: removing fs lock...
2017-12-01T14:00:16.531+0300 I CONTROL  [initandlisten] dbexit:  rc: 0

To restore the data, we built WiredTiger 2.9.3 (with Snappy-1.1.1.8).
Following advice, we tried to run the wt utility's salvage command on each *.wt file.
This also did not succeed:
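For reference, the per-file loop looked roughly like the following batch fragment (a sketch, assuming wt.exe is on PATH and is run from the dbpath directory; the actual invocation may have differed):

rem For each WiredTiger data file, print a header and attempt a salvage.
for %%f in (*.wt) do (
    echo ***** %%f *****
    wt.exe -h . salvage "file:%%f"
)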

 
***** collection-0--1109448600568661957.wt *****
 
 
 
***** collection-0--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-0--4563383296172894413.wt: No such file or directory
 
 
***** collection-0--7728738330246642581.wt *****
wt.exe: session.salvage: file:collection-0--7728738330246642581.wt: No such file or directory
 
 
***** collection-0-1204684021656353242.wt *****
wt.exe: session.salvage: file:collection-0-1204684021656353242.wt: No such file or directory
 
 
***** collection-0-160758265150124333.wt *****
wt.exe: session.salvage: file:collection-0-160758265150124333.wt: No such file or directory
 
 
***** collection-0-2369900529422110509.wt *****
wt.exe: session.salvage: file:collection-0-2369900529422110509.wt: No such file or directory
 
 
***** collection-0-8225139244052626311.wt *****
wt.exe: session.salvage: file:collection-0-8225139244052626311.wt: No such file or directory
 
 
***** collection-11--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-11--4563383296172894413.wt: No such file or directory
 
 
***** collection-14--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-14--4563383296172894413.wt: No such file or directory
 
 
***** collection-16--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-16--4563383296172894413.wt: No such file or directory
 
 
***** collection-18--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-18--4563383296172894413.wt: No such file or directory
 
 
***** collection-2--1109448600568661957.wt *****
 
 
 
***** collection-2--4563383296172894413.wt *****
 
 
 
***** collection-20--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-20--4563383296172894413.wt: No such file or directory
 
 
***** collection-22--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-22--4563383296172894413.wt: No such file or directory
 
 
***** collection-4--4563383296172894413.wt *****
        WT_SESSION.salvage 1900
 
 
***** collection-6--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-6--4563383296172894413.wt: No such file or directory
 
 
***** collection-8--4563383296172894413.wt *****
wt.exe: session.salvage: file:collection-8--4563383296172894413.wt: No such file or directory
 
 
***** index-0-3763645043796422516.wt *****
 
 
 
***** index-1--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-1--4563383296172894413.wt: No such file or directory
 
 
***** index-1--7728738330246642581.wt *****
wt.exe: session.salvage: file:index-1--7728738330246642581.wt: No such file or directory
 
 
***** index-1-1204684021656353242.wt *****
wt.exe: session.salvage: file:index-1-1204684021656353242.wt: No such file or directory
 
 
***** index-1-160758265150124333.wt *****
wt.exe: session.salvage: file:index-1-160758265150124333.wt: No such file or directory
 
 
***** index-1-2369900529422110509.wt *****
wt.exe: session.salvage: file:index-1-2369900529422110509.wt: No such file or directory
 
 
***** index-1-3763645043796422516.wt *****
 
 
 
***** index-1-8225139244052626311.wt *****
wt.exe: session.salvage: file:index-1-8225139244052626311.wt: No such file or directory
 
 
***** index-10--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-10--4563383296172894413.wt: No such file or directory
 
 
***** index-12--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-12--4563383296172894413.wt: No such file or directory
 
 
***** index-13--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-13--4563383296172894413.wt: No such file or directory
 
 
***** index-15--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-15--4563383296172894413.wt: No such file or directory
 
 
***** index-17--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-17--4563383296172894413.wt: No such file or directory
 
 
***** index-19--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-19--4563383296172894413.wt: No such file or directory
 
 
***** index-21--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-21--4563383296172894413.wt: No such file or directory
 
 
***** index-23--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-23--4563383296172894413.wt: No such file or directory
 
 
***** index-3--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-3--4563383296172894413.wt: No such file or directory
 
 
***** index-5--4563383296172894413.wt *****
 
 
 
***** index-7--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-7--4563383296172894413.wt: No such file or directory
 
 
***** index-9--4563383296172894413.wt *****
wt.exe: session.salvage: file:index-9--4563383296172894413.wt: No such file or directory
 
 
***** sizeStorer.wt *****
 
 
 
***** WiredTiger.wt *****
wt.exe: session.salvage: file:WiredTiger.wt: Resource device
 
 
***** WiredTigerLAS.wt *****
wt.exe: session.salvage: file:WiredTigerLAS.wt: Resource device
 
 
***** _mdb_catalog.wt *****

Is there any way to restore the data?
It seems that only some journal was corrupted, and the data itself remained untouched.
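For the one file where salvage did complete (collection-4--4563383296172894413.wt), a possible next step would be to dump its contents with the wt utility (a sketch; since mongod keeps the WiredTiger log under journal/ with snappy compression, the -C connection config and the -R recovery flag below are our assumptions):

wt.exe -h . -C "log=(enabled,path=journal,compressor=snappy)" -R dump -x table:collection-4--4563383296172894413 > collection-4.dump

The dump writes the keys and values in hex; the values are the BSON documents, which would still have to be decoded and reinserted into a fresh database.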

We can send a copy of the data, but we would prefer not to publish it publicly.



 Comments   
Comment by Mark Agarunov [ 05/Dec/17 ]

Hello Azriel,

Thank you for the additional information. Looking over the logs and your responses, unfortunately I believe the most likely cause for this is underlying corruption on this disk which cannot be recovered from. In this situation, my best recommendation would be to resync the affected node or restore from a backup if possible.

Thanks,
Mark

Comment by Zolotov Pavel [ 05/Dec/17 ]

Hello, Mark Agarunov!

Thanks for the work you've done.

Unfortunately, replacing the files did not restore even one collection.
We tried both version 3.2.6 and version 3.4.10.

We use mongod as the local database for a desktop application, which starts mongod at startup and shuts it down on exit.
In this case, it was a regular PC running Windows 8.1, with a local HDD and no RAID.
We will definitely check the disk. We have not done this before.

This database has been running on this version of MongoDB since that version was released.
Earlier versions of MongoDB were used before that.
Some of those earlier versions used the MMAPv1 storage engine.
Then there was a transition to WiredTiger (just over a year ago).
For the migration, the desktop application's own tools were used, so from mongod's point of view it was like creating a new database.

The underlying database files were not copied or moved.
After detecting the database failure, we made a copy of the dbpath directory.
By the time the copy was created, mongod had already been run several times (via the desktop application) against the failed (empty) version of the data.
Unfortunately, we do not have an earlier post-failure copy of the data.

Unfortunately, a mechanism for creating backup copies has not yet been implemented.
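Until it is, a simple interim option would be a periodic dump taken while the application (and therefore mongod) is running (a sketch; the port is from our setup, the output path is hypothetical):

mongodump.exe --host localhost --port 14077 --out "C:\Backups\mongo-2017-12-01"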

Just in case, here is the sequence of actions that led to the error, once more:
1. Our desktop application worked with the database successfully (starting and stopping mongod).
2. At some point the application was closed.
3. The next time the application was launched, there was no data.
The absence of data was verified with various MongoDB clients, for example mongo.exe and Robomongo (see the check after this list).
Between steps 2 and 3, nothing happened to the computer or the dbpath directory.
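A minimal check of this kind can be done from the command line with the shell's --eval (the database and collection names are taken from the logs above):

mongo.exe --port 14077 --eval "printjson(db.getSiblingDB('case').getCollectionNames())"
mongo.exe --port 14077 --eval "print(db.getSiblingDB('case').axio.count())"

In our case the clients showed no user data.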

We attach several logs from around the time of the crash: logs.rar
The log starting at '2017-11-28T11:41:38.604+0300' contains an insert into the 'settings' collection.
Given how our desktop application operates, we think this means the data was already empty at that point.
Therefore, we think the session in which the error occurred is recorded in the previous log, which is 'logs.2017-11-28T10-47-42'.

Thanks, Zolotov Pavel.

Comment by Mark Agarunov [ 04/Dec/17 ]

Hello Azriel,

Thank you for the report. I've attached a repair attempt of the files you've provided. Would you please extract these files and replace them in your $dbpath and let us know if it resolves the issue? If you are still seeing errors after replacing these files, please provide the complete logs from mongod so that we can further investigate. Additionally, if this issue persists, please provide the following information:

  1. What kind of underlying storage mechanism are you using? Are the storage devices attached locally or over the network? Are the disks SSDs or HDDs? What kind of RAID and/or volume management system are you using?
  2. Would you please check the integrity of your disks?
  3. Has the database always been running this version of MongoDB? If not please describe the upgrade/downgrade cycles the database has been through.
  4. Have you manipulated (copied or moved) the underlying database files? If so, was mongod running?
  5. Have you ever restored this instance from backups?
  6. What method do you use to create backups?
  7. When was the underlying filesystem last checked and is it currently marked clean?

Thanks,
Mark

Comment by Zolotov Pavel [ 04/Dec/17 ]

Hello, Mark Agarunov!

Thank you for the answer.
I am posting the files you asked for.
WiredTiger.turtle WiredTiger.wt

In addition, I sent a copy of the data to your email.

Thanks, Zolotov Pavel.

Comment by Mark Agarunov [ 01/Dec/17 ]

Hello Azriel,

Thank you for the report. Please provide the WiredTiger.wt and WiredTiger.turtle files so that we can attempt a repair of the database. Please use the earliest version of the dataset you have, before any repair/restoration attempts. However, please keep in mind that this is not a guaranteed fix.

In addition, please provide the logs and output from trying to start mongod.

Thanks,
Mark

Comment by Zolotov Pavel [ 01/Dec/17 ]

Sorry for my formatting.
I have duplicated the log and traces in the attachments.
I haven't found a way to edit the issue description.
