|
From the backup methods documentation (emphasis mine):
If your storage system does not support snapshots, you can copy the files directly using cp, rsync, or a similar tool. Since copying multiple files is not an atomic operation, you must stop all writes to the mongod before copying the files. Otherwise, you will copy the files in an invalid state.
The best way to stop all writes is to stop mongod. If that's not an option, then one needs to use db.fsyncLock().
Cheers,
Ramón.
|
|
Hello Ramón,
Many thanks for the reply and your thorough information. You are right: the original cause can no longer be reproduced, because the state of my system has changed. I have one question about backups with a replica set and WiredTiger:
I have followed every step of the production notes and adopted every change they've made over the recent years. My question is about backing up with cp/rsync in replica sets. Even the MongoDB page mentions this is possible in replica sets, especially if the mongod is not changing the data and there are no new writes, for more consistency. I even got the feeling that, since it has no performance impact, this could be an encouraged way of backing up, after live snapshots. I didn't see any warning on the official MongoDB page saying "one can't just rsync files from one node to the other with WiredTiger". I just wanted to be sure that this is not an option with WiredTiger in replica set environments.
I will follow your steps by moving the data to more stable storage, which has more room and a more reliable RAID backend, and recover the data with --repair.
Many thanks to all for your time, support and informative suggestions.
Best,
Maziyar
|
|
maziyar, let me see if I can summarize this ticket:
- The initial error you encountered looked suspicious:
2016-03-27T23:25:21.525+0200 I - [rsBackgroundSync] Fatal assertion 18750 UnrecoverableRollbackError: need to rollback, but in inconsistent state. minvalid: (term: 23, timestamp: Mar 25 05:29:07:22) > our last optime: (term: 22, timestamp: Mar 25 05:28:38:29)
|
but since then you've made many changes to your system and this error no longer appears, so we can no longer investigate it.
- You've tried to resync the affected secondary, which triggered the following error on the primary:
2016-03-28T14:31:33.477+0200 I COMMAND [conn222] getmore test.AUTweets_2014 cursorid:41661380455 ntoreturn:0 exhaust:1 keyUpdates:0 writeConflicts:0 numYields:37 nreturned:4779 reslen:4195383 locks:{ Global: { acquireCount: { r: 76 } }, Database: { acquireCount: { r: 38 } }, Collection: { acquireCount: { r: 38 } } } 278ms
|
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (0) [1459168300:246594][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: snappy error: snappy_decompress: SNAPPY_INVALID_INPUT: 1
|
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (-31802) [1459168300:246811][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: block decryption failed: WT_ERROR: non-specific WiredTiger error
|
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (0) [1459168300:246880][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: file:collection-11--5374230615308943049.wt: encountered an illegal file format or internal value
|
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (-31804) [1459168300:246911][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: the process must exit and restart: WT_PANIC: WiredTiger library panic
|
2016-03-28T14:31:40.246+0200 I - [conn222] Fatal Assertion 28558
|
This error indicates data corruption on your primary node in the test.AUTweets_2014 collection, stored in file collection-11--5374230615308943049.wt, and is most often caused by a problematic storage layer.
- You've also seen similar corruption for other collections:
2016-03-29T18:24:08.115+0200 E STORAGE [conn12237] WiredTiger (0) [1459268648:112762][29031:0x7f3b6e5fc700], file:collection-50--5899310941851042561.wt, WT_CURSOR.next: read checksum error for 16384B block at offset 1544142848: calculated block checksum of 2131774677 doesn't match expected checksum of 1261529629
|
2016-03-29T18:24:08.116+0200 E STORAGE [conn12237] WiredTiger (0) [1459268648:116058][29031:0x7f3b6e5fc700], file:collection-50--5899310941851042561.wt, WT_CURSOR.next: collection-50--5899310941851042561.wt: encountered an illegal file format or internal value
|
2016-03-29T18:24:08.116+0200 E STORAGE [conn12237] WiredTiger (-31804) [1459268648:116232][29031:0x7f3b6e5fc700], file:collection-50--5899310941851042561.wt, WT_CURSOR.next: the process must exit and restart: WT_PANIC: WiredTiger library panic
|
2016-03-29T18:24:08.116+0200 I - [conn12237] Fatal Assertion 28558
|
- You've tried using rsync to copy files from the primary to the secondary, and seen errors in the WiredTiger metadata files:
2016-03-30T15:55:50.197+0200 E STORAGE [initandlisten] WiredTiger (0) [1459346150:197300][9768:0x7f46e0016d00], file:WiredTiger.wt, connection: read checksum error for 4096B block at offset 69632: block header checksum of 1717533029 doesn't match expected checksum of 2444587262
|
2016-03-30T15:55:50.197+0200 E STORAGE [initandlisten] WiredTiger (0) [1459346150:197443][9768:0x7f46e0016d00], file:WiredTiger.wt, connection: WiredTiger.wt: encountered an illegal file format or internal value
|
2016-03-30T15:55:50.197+0200 E STORAGE [initandlisten] WiredTiger (-31804) [1459346150:197462][9768:0x7f46e0016d00], file:WiredTiger.wt, connection: the process must exit and restart: WT_PANIC: WiredTiger library panic
|
2016-03-30T15:55:50.197+0200 I - [initandlisten] Fatal Assertion 28558
|
Here's what I can conclude after examining all the data above:
- The fact that you're seeing corruption in many different collections is a strong indicator that there's a problem with your storage layer. Any further recovery steps you take will be pointless if the storage layer can't make guarantees about data integrity.
- I'd recommend you revisit our Production Notes, especially the part about filesystems, and adjust your system accordingly.
- One can't just rsync files from one node to the other with WiredTiger; you need to use initial sync to resync a member of a replica set.
- I'd also recommend you look at the documentation for backup methods. If all your data is stored on the same system (which seems to be the case here) and that system fails or is not reliable enough you may lose data.
I think you may want to consider downtime to recover as much of your system as possible; after shutting down your mongod, you can copy your files to a different system that doesn't use this storage layer. Once the files have been copied to this new system, you can use mongod --repair to recover as much data as possible. After that you may be able to start a new replica set with the recovered data. All of this needs to happen on reliable storage.
Finally, since none of the data above points to a bug in the server, and the SERVER project is for reporting bugs or feature suggestions for the MongoDB server, I'm going to close this ticket. For MongoDB-related support discussion please post on the mongodb-user group or Stack Overflow with the mongodb tag, where your question will reach a larger audience; questions like yours, which involve extended discussion, are best suited to the mongodb-user group. See also our Technical Support page for additional support resources.
Regards,
Ramón.
|
|
Update:
I finished the rsync from the primary to the secondary, and I get this error on the secondary when I try to start mongod. Is there a way to start this machine, so that I can take one node offline and run the repair while the other keeps working, and then sync it with fresh data so I can take the other one down? Does this method work? Because last time I repaired this secondary and rejoined it, I got the rsBackgroundSync failure. Maybe run the repair in the shell and then sync it (I'm afraid of it crashing and not being able to fully repair).
mongod --storageEngine wiredTiger --dbpath /data/ --replSet rs0 --fork --logpath /home/maziyar/fork.log
|
2016-03-30T15:55:50.161+0200 I CONTROL [initandlisten] MongoDB starting : pid=9768 port=27017 dbpath=/data/ 64-bit host=mongodb-replica1
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] db version v3.2.4
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] git version: e2ee9ffcf9f5a94fad76802e28cc978718bb7a30
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] allocator: tcmalloc
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] modules: none
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] build environment:
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] distmod: ubuntu1404
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] distarch: x86_64
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] target_arch: x86_64
|
2016-03-30T15:55:50.162+0200 I CONTROL [initandlisten] options: { processManagement: { fork: true }, replication: { replSet: "rs0" }, storage: { dbPath: "/data/", engine: "wiredTiger" }, systemLog: { destination: "file", path: "/home/maziyar/fork.log" } }
|
2016-03-30T15:55:50.173+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=46G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
|
2016-03-30T15:55:50.197+0200 E STORAGE [initandlisten] WiredTiger (0) [1459346150:197300][9768:0x7f46e0016d00], file:WiredTiger.wt, connection: read checksum error for 4096B block at offset 69632: block header checksum of 1717533029 doesn't match expected checksum of 2444587262
|
2016-03-30T15:55:50.197+0200 E STORAGE [initandlisten] WiredTiger (0) [1459346150:197443][9768:0x7f46e0016d00], file:WiredTiger.wt, connection: WiredTiger.wt: encountered an illegal file format or internal value
|
2016-03-30T15:55:50.197+0200 E STORAGE [initandlisten] WiredTiger (-31804) [1459346150:197462][9768:0x7f46e0016d00], file:WiredTiger.wt, connection: the process must exit and restart: WT_PANIC: WiredTiger library panic
|
2016-03-30T15:55:50.197+0200 I - [initandlisten] Fatal Assertion 28558
|
2016-03-30T15:55:50.197+0200 I - [initandlisten]
|
|
***aborting after fassert() failure
|
|
2016-03-30T15:55:50.220+0200 F - [initandlisten] Got signal: 6 (Aborted).
|
----- BEGIN BACKTRACE -----
|
{"backtrace":[{"b":"400000","o":"EF3502","s":"_ZN5mongo15printStackTraceERSo"},{"b":"400000","o":"EF2659"},{"b":"400000","o":"EF2E62"},{"b":"7F46DE97E000","o":"10340"},{"b":"7F46DE5B9000","o":"36CC9","s":"gsignal"},{"b":"7F46DE5B9000","o":"3A0D8","s":"abort"},{"b":"400000","o":"E7D9D2","s":"_ZN5mongo13fassertFailedEi"},{"b":"400000","o":"C78EF3"},{"b":"400000","
|
o":"16378EC","s":"__wt_eventv"},{"b":"400000","o":"1637A8D","s":"__wt_err"},{"b":"400000","o":"1637E74","s":"__wt_panic"},{"b":"400000","o":"156F1AC","s":"__wt_block_extlist_read"},{"b":"400000","o":"156F723","s":"__wt_block_extlist_read_avail"},{"b":"400000","o":"156C707","s":"__wt_block_checkpoint_load"},{"b":"400000","o":"1570519"},{"b":"400000","o":"158CCD8"
|
,"s":"__wt_btree_open"},{"b":"400000","o":"15C1F50","s":"__wt_conn_btree_open"},{"b":"400000","o":"16366BB","s":"__wt_session_get_btree"},{"b":"400000","o":"1636BEE","s":"__wt_session_get_btree"},{"b":"400000","o":"1636D1B","s":"__wt_session_get_btree_ckpt"},{"b":"400000","o":"15CFF68","s":"__wt_curfile_open"},{"b":"400000","o":"1633E35"},{"b":"400000","o":"1600
|
3FF","s":"__wt_metadata_cursor_open"},{"b":"400000","o":"16004DE","s":"__wt_metadata_cursor"},{"b":"400000","o":"15BEE80","s":"wiredtiger_open"},{"b":"400000","o":"C61682","s":"_ZN5mongo18WiredTigerKVEngineC2ERKSsS2_S2_mbbb"},{"b":"400000","o":"C5DB73"},{"b":"400000","o":"B8A458","s":"_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv"},{"b":"40000
|
0","o":"59503B","s":"_ZN5mongo13initAndListenEi"},{"b":"400000","o":"54FE7D","s":"main"},{"b":"7F46DE5B9000","o":"21EC5","s":"__libc_start_main"},{"b":"400000","o":"59299C"}],"processInfo":{ "mongodbVersion" : "3.2.4", "gitVersion" : "e2ee9ffcf9f5a94fad76802e28cc978718bb7a30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.13.0-83-generi
|
c", "version" : "#127-Ubuntu SMP Fri Mar 11 00:25:37 UTC 2016", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "EF46210F8976780D45B811C3540FECB9E734EABE" }, { "b" : "7FFDD3C45000", "elfType" : 3, "buildId" : "1D19170C08321625CE9BFC6C1CE5497942874E90" }, { "b" : "7F46DFBA4000", "path" : "/lib/x86_64-linux-gnu/libssl.so.1.0.0", "e
|
lfType" : 3, "buildId" : "E21720F2804EF30440F2B39CD409252C26F58F73" }, { "b" : "7F46DF7C8000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "9BC22F9457E3D7E9CF8DDC135C0DAC8F7742135D" }, { "b" : "7F46DF5C0000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "B376100CAB1EAC4E5DE066EACFC282BF7C0B54F3" }, {
|
"b" : "7F46DF3BC000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "67699FFDA9FD2A552032E0652A242E82D65AA10D" }, { "b" : "7F46DF0B8000", "path" : "/usr/lib/x86_64-linux-gnu/libstdc++.so.6", "elfType" : 3, "buildId" : "D0E735DBECD63462DA114BD3F76E6EC7BB1FACCC" }, { "b" : "7F46DEDB2000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfT
|
ype" : 3, "buildId" : "EF3F6DFFA1FBE48436EC6F45CD3AABA157064BB4" }, { "b" : "7F46DEB9C000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "36311B4457710AE5578C4BF00791DED7359DBB92" }, { "b" : "7F46DE97E000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "AF06068681750736E0524DF17D5A86CB2C3F765C" }, { "b
|
" : "7F46DE5B9000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "5382058B69031CAA9B9996C11061CD164C9398FF" }, { "b" : "7F46DFE03000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "2A816C3EBBA4E12813FBD34B06FBD25BC892A67F" } ] }}
|
mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x12f3502]
|
mongod(+0xEF2659) [0x12f2659]
|
mongod(+0xEF2E62) [0x12f2e62]
|
libpthread.so.0(+0x10340) [0x7f46de98e340]
|
libc.so.6(gsignal+0x39) [0x7f46de5efcc9]
|
libc.so.6(abort+0x148) [0x7f46de5f30d8]
|
mongod(_ZN5mongo13fassertFailedEi+0x82) [0x127d9d2]
|
mongod(+0xC78EF3) [0x1078ef3]
|
mongod(__wt_eventv+0x40C) [0x1a378ec]
|
mongod(__wt_err+0x8D) [0x1a37a8d]
|
mongod(__wt_panic+0x24) [0x1a37e74]
|
mongod(__wt_block_extlist_read+0x6C) [0x196f1ac]
|
mongod(__wt_block_extlist_read_avail+0x33) [0x196f723]
|
mongod(__wt_block_checkpoint_load+0x3B7) [0x196c707]
|
mongod(+0x1570519) [0x1970519]
|
mongod(__wt_btree_open+0xC68) [0x198ccd8]
|
mongod(__wt_conn_btree_open+0x140) [0x19c1f50]
|
mongod(__wt_session_get_btree+0xEB) [0x1a366bb]
|
mongod(__wt_session_get_btree+0x61E) [0x1a36bee]
|
mongod(__wt_session_get_btree_ckpt+0xAB) [0x1a36d1b]
|
mongod(__wt_curfile_open+0x218) [0x19cff68]
|
mongod(+0x1633E35) [0x1a33e35]
|
mongod(__wt_metadata_cursor_open+0x5F) [0x1a003ff]
|
mongod(__wt_metadata_cursor+0x7E) [0x1a004de]
|
mongod(wiredtiger_open+0x1600) [0x19bee80]
|
mongod(_ZN5mongo18WiredTigerKVEngineC2ERKSsS2_S2_mbbb+0x562) [0x1061682]
|
mongod(+0xC5DB73) [0x105db73]
|
mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x598) [0xf8a458]
|
mongod(_ZN5mongo13initAndListenEi+0x37B) [0x99503b]
|
mongod(main+0x15D) [0x94fe7d]
|
libc.so.6(__libc_start_main+0xF5) [0x7f46de5daec5]
|
mongod(+0x59299C) [0x99299c]
|
----- END BACKTRACE -----
|
I uploaded WiredTiger.turtle and WiredTiger.wt just in case it helps.
Many thanks,
Best,
Maziyar
|
|
Hi again,
I just ran validate() on some small collections (5-20 million docs). It's not a full validate, but it still reports these errors, enough to know they are corrupted. Now, given that these collections are corrupted, how can I recover and fix them? By repairing the whole database, or is there a way to do it for specific collections? Also, is it possible to run the repair via db.runCommand inside this running mongod, since it is the only instance and there is no way to join another member? I did rsync the files to another instance. Some corrupted collections are no longer needed; will dropping them help speed up the recovery?
My disk situation (considering repair):
Size Used Avail Use% Mounted on
|
3.0T 1.6T 1.3T 55% /data
|
|
rs0:PRIMARY> db.cop21_2015.validate()
|
{
|
"ns" : "test.cop21_2015",
|
"nIndexes" : 1,
|
"keysPerIndex" : {
|
"test.cop21_2015.$_id_" : 6883456
|
},
|
"valid" : false,
|
"errors" : [
|
"[1459330562:308911][12706:0x7f26e7328700], file:collection-93--5374230615308943049.wt, WT_SESSION.verify: snappy error: snappy_decompress: SNAPPY_INVALID_INPUT: 1",
|
"[1459330562:739749][12706:0x7f26e7328700], file:collection-93--5374230615308943049.wt, WT_SESSION.verify: checkpoint ranges never verified: 578",
|
"[1459330562:761693][12706:0x7f26e7328700], file:collection-93--5374230615308943049.wt, WT_SESSION.verify: file ranges never verified: 577",
|
"verify() returned WT_ERROR: non-specific WiredTiger error. This indicates structural damage. Not examining individual documents.",
|
"number of _id index entries (6883456) does not match the number of documents (6882406)"
|
],
|
"warning" : "Some checks omitted for speed. use {full:true} option to do more thorough scan.",
|
"advice" : "ns corrupt. See http://dochub.mongodb.org/core/data-recovery",
|
"ok" : 1
|
}
|
Many thanks again,
Best,
Maziyar
|
|
Hi Dan and Ramon,
The dmesg output for the two MongoDB instances (primary and secondary) didn't indicate anything storage-related, unfortunately.
1. These machines are KVM VMs with mounted raw images, each located on RAID 5 over SSDs (6x Samsung 1TB, EVO edition, sadly). The RAID is mounted as ext4, and inside the KVM machines the filesystem for the attached images is ext4 as well.
2. Not at all; this MongoDB workload is almost entirely insert-intensive rather than update/modify. Some collections are constantly receiving new inserts, but with TTL indexes and through the latest native Node.js MongoDB driver.
3. and 4. I am ashamed to say that no, I don't have a backup strategy, due to storage limitations. I just learned the hard way that replication is not a backup! Do you suggest a simple cp/rsync of the files, or mongodump?
I am going to run validate() one by one over my collections. Do I also need to run fsck.ext4, or is this at the MongoDB storage level?
Many thanks for your help.
Best,
maziyar
|
|
maziyar, the errors preceding the fatal assertion indicate that your data files are corrupted. The most common cause of this issue is flaky storage, but it can also happen if the data files are modified by programs other than mongod, especially while mongod is running. I have assembled a number of questions to get a better idea of your system configuration and data storage:
- What kind of underlying storage mechanism are you using? Are the storage devices attached locally or over the network? Are the disks SSDs or HDDs? What kind of RAID and/or volume management system are you using?
- Have you manipulated (copied or moved) the underlying database files?
- What method do you use to create backups?
- Have you ever restored this instance from backups?
At this stage I'd recommend you make a backup of all your data files and use validate() to find out where the corruption is. Please also examine your system logs for errors coming from the storage layer.
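A sketch of running validate() over every collection from the mongo shell. This is illustrative only: it assumes a mongod listening on localhost with the data in the test database, and the guard keeps the snippet harmless on machines where the mongo shell isn't installed:

```shell
# Illustrative only -- needs a running mongod; the guard skips the call
# when the mongo shell binary is not present.
if command -v mongo >/dev/null 2>&1; then
  mongo test --quiet --eval '
    db.getCollectionNames().forEach(function (name) {
      var res = db.getCollection(name).validate();
      print(name + ": " + (res.valid ? "ok" : "CORRUPT"));
    });
  ' || echo "could not connect to a mongod on localhost:27017"
else
  echo "mongo shell not installed; snippet shown for illustration"
fi
```

Collections whose result has `valid: false` are the ones to triage first.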
|
|
This indicates that there is an error in your underlying storage. Have you checked dmesg for any storage related error messages?
|
|
Hi again,
I ran the same steps (removed everything from the data path, even created a new disk, ran mkfs, etc., and added it to the replica set). After a few minutes it happened again, and there was nothing in dmesg on either machine.
On the primary that crashed:
2016-03-29T18:24:08.115+0200 E STORAGE [conn12237] WiredTiger (0) [1459268648:112762][29031:0x7f3b6e5fc700], file:collection-50--5899310941851042561.wt, WT_CURSOR.next: read checksum error for 16384B block at offset 1544142848: calculated block checksum of 2131774677 doesn't match expected checksum of 1261529629
|
2016-03-29T18:24:08.116+0200 E STORAGE [conn12237] WiredTiger (0) [1459268648:116058][29031:0x7f3b6e5fc700], file:collection-50--5899310941851042561.wt, WT_CURSOR.next: collection-50--5899310941851042561.wt: encountered an illegal file format or internal value
|
2016-03-29T18:24:08.116+0200 E STORAGE [conn12237] WiredTiger (-31804) [1459268648:116232][29031:0x7f3b6e5fc700], file:collection-50--5899310941851042561.wt, WT_CURSOR.next: the process must exit and restart: WT_PANIC: WiredTiger library panic
|
2016-03-29T18:24:08.116+0200 I - [conn12237] Fatal Assertion 28558
|
2016-03-29T18:24:08.116+0200 I - [conn12237]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.121+0200 I - [conn12178] Fatal Assertion 28559
|
2016-03-29T18:24:08.121+0200 I - [conn12178]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.131+0200 I - [conn12142] Fatal Assertion 28559
|
2016-03-29T18:24:08.131+0200 I - [conn12142]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.136+0200 I - [conn12157] Fatal Assertion 28559
|
2016-03-29T18:24:08.136+0200 I - [conn12157]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.139+0200 I - [conn12179] Fatal Assertion 28559
|
2016-03-29T18:24:08.139+0200 I - [conn12179]
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.155+0200 I - [WTJournalFlusher] Fatal Assertion 28559
|
2016-03-29T18:24:08.155+0200 I - [conn12180] Fatal Assertion 28559
|
2016-03-29T18:24:08.156+0200 I - [WTJournalFlusher]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.156+0200 I - [conn12180]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.156+0200 I - [conn12143] Fatal Assertion 28559
|
2016-03-29T18:24:08.156+0200 I - [conn12143]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.157+0200 I - [conn12158] Fatal Assertion 28559
|
2016-03-29T18:24:08.157+0200 I - [conn12158]
|
|
***aborting after fassert() failure
|
2016-03-29T18:24:08.163+0200 I - [conn12176] Fatal Assertion 28559
|
2016-03-29T18:24:08.163+0200 I - [conn12176]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.168+0200 I - [conn12177] Fatal Assertion 28559
|
2016-03-29T18:24:08.168+0200 I - [conn12177]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.179+0200 I - [conn12172] Fatal Assertion 28559
|
2016-03-29T18:24:08.179+0200 I - [conn12172]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.195+0200 I - [conn12148] Fatal Assertion 28559
|
2016-03-29T18:24:08.195+0200 I - [conn12148]
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.208+0200 I - [conn12173] Fatal Assertion 28559
|
2016-03-29T18:24:08.209+0200 I - [conn12150] Fatal Assertion 28559
|
2016-03-29T18:24:08.209+0200 I - [conn12173]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.209+0200 I - [conn12150]
|
|
***aborting after fassert() failure
|
|
|
2016-03-29T18:24:08.214+0200 F - [conn12237] Got signal: 6 (Aborted).
|
|
0x12f3502 0x12f2659 0x12f2e62 0x7f3b95e4f340 0x7f3b95ab0cc9 0x7f3b95ab40d8 0x127d9d2 0x1078ef3 0x1a378ec 0x1a37a8d 0x1a37e74 0x1971b16 0x198e695 0x1993d90 0x19a9e7f 0x19ac731 0x1976fd8 0x19cd906 0x106e86c 0xbc8f58 0xe0ac85 0xe0b349 0xdc8e22 0xdc9521 0xcadd09 0xcb3fa5 0x99974c 0x12a0ebd 0x7f3b95e47182 0x7f3b95b7447d
|
----- BEGIN BACKTRACE -----
|
{"backtrace":[{"b":"400000","o":"EF3502","s":"_ZN5mongo15printStackTraceERSo"},{"b":"400000","o":"EF2659"},{"b":"400000","o":"EF2E62"},{"b":"7F3B95E3F000","o":"10340"},{"b":"7F3B95A7A000","o":"36CC9","s":"gsignal"},{"b":"7F3B95A7A000","o":"3A0D8","s":"abort"},{"b":"400000","o":"E7D9D2","s":"_ZN5mongo13fassertFailedEi"},{"b":"400000","o":"C78EF3"},{"b":"400000","
|
o":"16378EC","s":"__wt_eventv"},{"b":"400000","o":"1637A8D","s":"__wt_err"},{"b":"400000","o":"1637E74","s":"__wt_panic"},{"b":"400000","o":"1571B16","s":"__wt_bm_read"},{"b":"400000","o":"158E695","s":"__wt_bt_read"},{"b":"400000","o":"1593D90","s":"__wt_page_in_func"},{"b":"400000","o":"15A9E7F"},{"b":"400000","o":"15AC731","s":"__wt_tree_walk"},{"b":"400000",
|
"o":"1576FD8","s":"__wt_btcur_next"},{"b":"400000","o":"15CD906"},{"b":"400000","o":"C6E86C","s":"_ZN5mongo21WiredTigerRecordStore6Cursor4nextEv"},{"b":"400000","o":"7C8F58","s":"_ZN5mongo14CollectionScan4workEPm"},{"b":"400000","o":"A0AC85","s":"_ZN5mongo12PlanExecutor11getNextImplEPNS_11SnapshottedINS_7BSONObjEEEPNS_8RecordIdE"},{"b":"400000","o":"A0B349","s":
|
"_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_8RecordIdE"},{"b":"400000","o":"9C8E22"},{"b":"400000","o":"9C9521","s":"_ZN5mongo7getMoreEPNS_16OperationContextEPKcixPbS4_"},{"b":"400000","o":"8ADD09","s":"_ZN5mongo15receivedGetMoreEPNS_16OperationContextERNS_10DbResponseERNS_7MessageERNS_5CurOpE"},{"b":"400000","o":"8B3FA5","s":"_ZN5mongo16assembleResponseEP
|
NS_16OperationContextERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE"},{"b":"400000","o":"59974C","s":"_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE"},{"b":"400000","o":"EA0EBD","s":"_ZN5mongo17PortMessageServer17handleIncomingMsgEPv"},{"b":"7F3B95E3F000","o":"8182"},{"b":"7F3B95A7A000","o":"FA47D","s":"clone"}],"processInfo":{
|
"mongodbVersion" : "3.2.4", "gitVersion" : "e2ee9ffcf9f5a94fad76802e28cc978718bb7a30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.13.0-32-generic", "version" : "#57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "EF46210F8976780D45B811C3540FECB9E734EABE" },
|
{ "b" : "7FFFC04FE000", "elfType" : 3, "buildId" : "E464DBB7341B7B9E7874DC0619C5F429416E6AC6" }, { "b" : "7F3B97065000", "path" : "/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "E21720F2804EF30440F2B39CD409252C26F58F73" }, { "b" : "7F3B96C89000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "9BC22F9457E3D7E
|
9CF8DDC135C0DAC8F7742135D" }, { "b" : "7F3B96A81000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "B376100CAB1EAC4E5DE066EACFC282BF7C0B54F3" }, { "b" : "7F3B9687D000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "67699FFDA9FD2A552032E0652A242E82D65AA10D" }, { "b" : "7F3B96579000", "path" : "/usr/lib/x86_64-
|
linux-gnu/libstdc++.so.6", "elfType" : 3, "buildId" : "D0E735DBECD63462DA114BD3F76E6EC7BB1FACCC" }, { "b" : "7F3B96273000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "EF3F6DFFA1FBE48436EC6F45CD3AABA157064BB4" }, { "b" : "7F3B9605D000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "36311B4457710AE5578C4BF
|
00791DED7359DBB92" }, { "b" : "7F3B95E3F000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "AF06068681750736E0524DF17D5A86CB2C3F765C" }, { "b" : "7F3B95A7A000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "5382058B69031CAA9B9996C11061CD164C9398FF" }, { "b" : "7F3B972C4000", "path" : "/lib64/ld-linux-x86-
|
64.so.2", "elfType" : 3, "buildId" : "2A816C3EBBA4E12813FBD34B06FBD25BC892A67F" } ] }}
|
mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x12f3502]
|
mongod(+0xEF2659) [0x12f2659]
|
mongod(+0xEF2E62) [0x12f2e62]
|
libpthread.so.0(+0x10340) [0x7f3b95e4f340]
|
libc.so.6(gsignal+0x39) [0x7f3b95ab0cc9]
|
libc.so.6(abort+0x148) [0x7f3b95ab40d8]
|
mongod(_ZN5mongo13fassertFailedEi+0x82) [0x127d9d2]
|
mongod(+0xC78EF3) [0x1078ef3]
|
mongod(__wt_eventv+0x40C) [0x1a378ec]
|
mongod(__wt_err+0x8D) [0x1a37a8d]
|
mongod(__wt_panic+0x24) [0x1a37e74]
|
mongod(__wt_bm_read+0x76) [0x1971b16]
|
mongod(__wt_bt_read+0x85) [0x198e695]
|
mongod(__wt_page_in_func+0x180) [0x1993d90]
|
mongod(+0x15A9E7F) [0x19a9e7f]
|
mongod(__wt_tree_walk+0xCA1) [0x19ac731]
|
mongod(__wt_btcur_next+0x338) [0x1976fd8]
|
mongod(+0x15CD906) [0x19cd906]
|
mongod(_ZN5mongo21WiredTigerRecordStore6Cursor4nextEv+0x2AC) [0x106e86c]
|
mongod(_ZN5mongo14CollectionScan4workEPm+0x968) [0xbc8f58]
|
mongod(_ZN5mongo12PlanExecutor11getNextImplEPNS_11SnapshottedINS_7BSONObjEEEPNS_8RecordIdE+0x275) [0xe0ac85]
|
mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_8RecordIdE+0x39) [0xe0b349]
|
mongod(+0x9C8E22) [0xdc8e22]
|
mongod(_ZN5mongo7getMoreEPNS_16OperationContextEPKcixPbS4_+0x531) [0xdc9521]
|
mongod(_ZN5mongo15receivedGetMoreEPNS_16OperationContextERNS_10DbResponseERNS_7MessageERNS_5CurOpE+0x1A9) [0xcadd09]
|
mongod(_ZN5mongo16assembleResponseEPNS_16OperationContextERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xE35) [0xcb3fa5]
|
mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE+0xEC) [0x99974c]
|
mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x26D) [0x12a0ebd]
|
libpthread.so.0(+0x8182) [0x7f3b95e47182]
|
libc.so.6(clone+0x6D) [0x7f3b95b7447d]
|
----- END BACKTRACE -----
|
Many thanks,
Best,
Maziyar
|
|
Hi Dan,
Thanks for the reply. I am rsyncing the data from the primary to the secondary, since it wouldn't let the secondary join and do a full sync. This is how I start the members of my replica set:
mongod --storageEngine wiredTiger --dbpath /data/ --replSet rs0 --fork --logpath /home/maziyar/fork.log
|
rs0:PRIMARY> db.stats()
|
{
|
"db" : "test",
|
"collections" : 77,
|
"objects" : 3988402205,
|
"avgObjSize" : 787.1947313117083,
|
"dataSize" : 3139649202128,
|
"storageSize" : 1603551313920,
|
"numExtents" : 0,
|
"indexes" : 205,
|
"indexSize" : 50976239616,
|
"ok" : 1
|
}
|
I removed the secondary (the former primary) since it wasn't working.
rs0:PRIMARY> rs.conf()
|
{
|
"_id" : "rs0",
|
"version" : 468956,
|
"protocolVersion" : NumberLong(1),
|
"members" : [
|
{
|
"_id" : 2,
|
"host" : "mongodb-replica2:27017",
|
"arbiterOnly" : false,
|
"buildIndexes" : true,
|
"hidden" : false,
|
"priority" : 3,
|
"tags" : {
|
|
},
|
"slaveDelay" : NumberLong(0),
|
"votes" : 1
|
},
|
{
|
"_id" : 3,
|
"host" : "mongodb-arbiter:30000",
|
"arbiterOnly" : true,
|
"buildIndexes" : true,
|
"hidden" : false,
|
"priority" : 1,
|
"tags" : {
|
|
},
|
"slaveDelay" : NumberLong(0),
|
"votes" : 1
|
}
|
],
|
"settings" : {
|
"chainingAllowed" : true,
|
"heartbeatIntervalMillis" : 2000,
|
"heartbeatTimeoutSecs" : 10,
|
"electionTimeoutMillis" : 10000,
|
"getLastErrorModes" : {
|
|
},
|
"getLastErrorDefaults" : {
|
"w" : 1,
|
"wtimeout" : 0
|
}
|
}
|
}
|
As for encryption, I don't use any software or hardware solution for it. I even remember that home/root is not encrypted.
Many thanks Dan.
|
|
The error you're seeing on the primary indicates that you are running with the encrypted storage engine enabled and that there was an error decompressing the data read from disk.
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (0) [1459168300:246594][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: snappy error: snappy_decompress: SNAPPY_INVALID_INPUT: 1
|
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (-31802) [1459168300:246811][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: block decryption failed: WT_ERROR: non-specific WiredTiger error
|
Have you checked dmesg for any storage related errors?
Also, can you share your mongod config file or startup parameters?
|
|
Hi again,
Some details: I couldn't wait, so I wiped the data directory (the one that I repaired and couldn't rejoin). Now I try to do a full resync, and every time I join this member to the replica set and rs.add() it on the primary, it crashes the primary with this error after a few seconds. I thought that, since it's the same machine, this might help in understanding the cause of the rsBackgroundSync failure as well.
Now it seems there is something wrong with the primary that doesn't allow a freshly wiped member to join the replica set and start the sync.
2016-03-28T14:31:31.848+0200 I COMMAND [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after connections: 0, after extra_info: 2750, after globalLock: 2750, after locks: 2750, after network: 2750, after opcounters: 2750, after opcountersRepl: 2750, after repl: 2750, after storageEngine: 2750, after tcmalloc: 2750, after wiredTiger: 2750, at end: 2760 }
2016-03-28T14:31:31.848+0200 I COMMAND [conn213] serverStatus was very slow: { after basic: 0, after asserts: 0, after connections: 0, after extra_info: 3420, after globalLock: 3420, after locks: 3420, after network: 3420, after opcounters: 3420, after opcountersRepl: 3420, after repl: 3430, after storageEngine: 3430, after tcmalloc: 3430, after wiredTiger: 3430, at end: 3440 }
2016-03-28T14:31:31.849+0200 I COMMAND [conn213] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:18558 locks:{} protocol:op_query 3538ms
2016-03-28T14:31:31.882+0200 I COMMAND [conn222] getmore test.AUTweets_2014 cursorid:41661380455 ntoreturn:0 exhaust:1 keyUpdates:0 writeConflicts:0 numYields:37 nreturned:4755 reslen:4195021 locks:{ Global: { acquireCount: { r: 76 } }, Database: { acquireCount: { r: 38 } }, Collection: { acquireCount: { r: 38 } } } 3499ms
2016-03-28T14:31:33.477+0200 I COMMAND [conn222] getmore test.AUTweets_2014 cursorid:41661380455 ntoreturn:0 exhaust:1 keyUpdates:0 writeConflicts:0 numYields:37 nreturned:4779 reslen:4195383 locks:{ Global: { acquireCount: { r: 76 } }, Database: { acquireCount: { r: 38 } }, Collection: { acquireCount: { r: 38 } } } 278ms
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (0) [1459168300:246594][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: snappy error: snappy_decompress: SNAPPY_INVALID_INPUT: 1
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (-31802) [1459168300:246811][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: block decryption failed: WT_ERROR: non-specific WiredTiger error
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (0) [1459168300:246880][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: file:collection-11--5374230615308943049.wt: encountered an illegal file format or internal value
2016-03-28T14:31:40.246+0200 E STORAGE [conn222] WiredTiger (-31804) [1459168300:246911][28631:0x7fd381fd1700], file:collection-11--5374230615308943049.wt, WT_CURSOR.next: the process must exit and restart: WT_PANIC: WiredTiger library panic
2016-03-28T14:31:40.246+0200 I - [conn222] Fatal Assertion 28558
2016-03-28T14:31:40.247+0200 I - [conn222]
***aborting after fassert() failure
2016-03-28T14:31:40.266+0200 I - [conn159] Fatal Assertion 28559
2016-03-28T14:31:40.266+0200 I - [conn159]
***aborting after fassert() failure
2016-03-28T14:31:40.269+0200 F - [conn222] Got signal: 6 (Aborted).
0x12f3502 0x12f2659 0x12f2e62 0x7fd3a9824340 0x7fd3a9485cc9 0x7fd3a94890d8 0x127d9d2 0x1078ef3 0x1a378ec 0x1a37a8d 0x1a37e74 0x198e89c 0x1993d90 0x19a9e7f 0x19ac731 0x1976fd8 0x19cd906 0x106e86c 0xbc8f58 0xe0ac85 0xe0b349 0xdc8e22 0xdc9521 0xcadd09 0xcb3fa5 0x99974c 0x12a0ebd 0x7fd3a981c182 0x7fd3a954947d

----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"EF3502","s":"_ZN5mongo15printStackTraceERSo"},{"b":"400000","o":"EF2659"},{"b":"400000","o":"EF2E62"},{"b":"7F4C83CDA000","o":"10340"},{"b":"7F4C83915000","o":"36CC9","s":"gsignal"},{"b":"7F4C83915000","o":"3A0D8","s":"abort"},{"b":"400000","o":"E7D9D2","s":"_ZN5mongo13fassertFailedEi"},{"b":"400000","o":"C78EF3"},{"b":"400000","o":"16378EC","s":"__wt_eventv"},{"b":"400000","o":"1637A8D","s":"__wt_err"},{"b":"400000","o":"1637E74","s":"__wt_panic"},{"b":"400000","o":"158E89C","s":"__wt_bt_read"},{"b":"400000","o":"1593D90","s":"__wt_page_in_func"},{"b":"400000","o":"15A9E7F"},{"b":"400000","o":"15AC731","s":"__wt_tree_walk"},{"b":"400000","o":"1576FD8","s":"__wt_btcur_next"},{"b":"400000","o":"15CD906"},{"b":"400000","o":"C6E86C","s":"_ZN5mongo21WiredTigerRecordStore6Cursor4nextEv"},{"b":"400000","o":"7C8F58","s":"_ZN5mongo14CollectionScan4workEPm"},{"b":"400000","o":"A0AC85","s":"_ZN5mongo12PlanExecutor11getNextImplEPNS_11SnapshottedINS_7BSONObjEEEPNS_8RecordIdE"},{"b":"400000","o":"A0B349","s":"_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_8RecordIdE"},{"b":"400000","o":"9C8E22"},{"b":"400000","o":"9C9521","s":"_ZN5mongo7getMoreEPNS_16OperationContextEPKcixPbS4_"},{"b":"400000","o":"8ADD09","s":"_ZN5mongo15receivedGetMoreEPNS_16OperationContextERNS_10DbResponseERNS_7MessageERNS_5CurOpE"},{"b":"400000","o":"8B3FA5","s":"_ZN5mongo16assembleResponseEPNS_16OperationContextERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE"},{"b":"400000","o":"59974C","s":"_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE"},{"b":"400000","o":"EA0EBD","s":"_ZN5mongo17PortMessageServer17handleIncomingMsgEPv"},{"b":"7F4C83CDA000","o":"8182"},{"b":"7F4C83915000","o":"FA47D","s":"clone"}],"processInfo":{ "mongodbVersion" : "3.2.4", "gitVersion" : "e2ee9ffcf9f5a94fad76802e28cc978718bb7a30", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.13.0-32-generic", "version" : "#57-Ubuntu SMP Tue Jul 15 03:51:08 
UTC 2014", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "EF46210F8976780D45B811C3540FECB9E734EABE" }, { "b" : "7FFFCD6FE000", "elfType" : 3, "buildId" : "E464DBB7341B7B9E7874DC0619C5F429416E6AC6" }, { "b" : "7F4C84F00000", "path" : "/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "E21720F2804EF30440F2B39CD409252C26F58F73" }, { "b" : "7F4C84B24000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "9BC22F9457E3D7E9CF8DDC135C0DAC8F7742135D" }, { "b" : "7F4C8491C000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "B376100CAB1EAC4E5DE066EACFC282BF7C0B54F3" }, { "b" : "7F4C84718000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "67699FFDA9FD2A552032E0652A242E82D65AA10D" }, { "b" : "7F4C84414000", "path" : "/usr/lib/x86_64-linux-gnu/libstdc++.so.6", "elfType" : 3, "buildId" : "D0E735DBECD63462DA114BD3F76E6EC7BB1FACCC" }, { "b" : "7F4C8410E000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "EF3F6DFFA1FBE48436EC6F45CD3AABA157064BB4" }, { "b" : "7F4C83EF8000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "36311B4457710AE5578C4BF00791DED7359DBB92" }, { "b" : "7F4C83CDA000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "AF06068681750736E0524DF17D5A86CB2C3F765C" }, { "b" : "7F4C83915000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "5382058B69031CAA9B9996C11061CD164C9398FF" }, { "b" : "7F4C8515F000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "2A816C3EBBA4E12813FBD34B06FBD25BC892A67F" } ] }}
|
mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x12f3502]
mongod(+0xEF2659) [0x12f2659]
mongod(+0xEF2E62) [0x12f2e62]
libpthread.so.0(+0x10340) [0x7f4c83cea340]
libc.so.6(gsignal+0x39) [0x7f4c8394bcc9]
libc.so.6(abort+0x148) [0x7f4c8394f0d8]
mongod(_ZN5mongo13fassertFailedEi+0x82) [0x127d9d2]
mongod(+0xC78EF3) [0x1078ef3]
mongod(__wt_eventv+0x40C) [0x1a378ec]
mongod(__wt_err+0x8D) [0x1a37a8d]
mongod(__wt_panic+0x24) [0x1a37e74]
mongod(__wt_bt_read+0x28C) [0x198e89c]
mongod(__wt_page_in_func+0x180) [0x1993d90]
mongod(+0x15A9E7F) [0x19a9e7f]
mongod(__wt_tree_walk+0xCA1) [0x19ac731]
mongod(__wt_btcur_next+0x338) [0x1976fd8]
mongod(+0x15CD906) [0x19cd906]
mongod(_ZN5mongo21WiredTigerRecordStore6Cursor4nextEv+0x2AC) [0x106e86c]
mongod(_ZN5mongo14CollectionScan4workEPm+0x968) [0xbc8f58]
mongod(_ZN5mongo12PlanExecutor11getNextImplEPNS_11SnapshottedINS_7BSONObjEEEPNS_8RecordIdE+0x275) [0xe0ac85]
mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_8RecordIdE+0x39) [0xe0b349]
mongod(+0x9C8E22) [0xdc8e22]
mongod(_ZN5mongo7getMoreEPNS_16OperationContextEPKcixPbS4_+0x531) [0xdc9521]
mongod(_ZN5mongo15receivedGetMoreEPNS_16OperationContextERNS_10DbResponseERNS_7MessageERNS_5CurOpE+0x1A9) [0xcadd09]
mongod(_ZN5mongo16assembleResponseEPNS_16OperationContextERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xE35) [0xcb3fa5]
mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE+0xEC) [0x99974c]
mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x26D) [0x12a0ebd]
libpthread.so.0(+0x8182) [0x7f4c83ce2182]
libc.so.6(clone+0x6D) [0x7f4c83a0f47d]
----- END BACKTRACE -----
Generated at Thu Feb 08 04:03:10 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.