[SERVER-31121] WT_CURSOR.insert: encountered an illegal file format or internal value Created: 18/Sep/17  Updated: 06/Dec/22  Resolved: 25/Feb/18

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: 3.4.6
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: hancang2000 Assignee: Backlog - Storage Execution Team
Resolution: Cannot Reproduce Votes: 2
Labels: envc, lxc, wtc
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to SERVER-30916 MONGODB3.4VERSION will creash ,only 1... Closed
Assigned Teams:
Storage Execution
Operating System: Linux
Sprint: Storage 2017-11-13, Storage 2017-12-04, Storage 2017-12-18, Storage 2018-01-01

 Description   

2017-09-15T16:02:29.655819+08:00 [conn328] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-09-15T16:02:29.655836+08:00 [conn328] #012#012***aborting after fassert() failure#012#012
2017-09-15T16:02:29.655860+08:00 [conn290] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-09-15T16:02:29.655877+08:00 [conn290] #012#012***aborting after fassert() failure#012#012
2017-09-15T16:02:29.658546+08:00 [conn314] Got signal: 6 (Aborted).#012#012 0x7fe5bf5d8a01 0x7fe5bf5d7c19 0x7fe5bf5d80fd 0x7fe5bd3548d0 0x7fe5bcfd1107 0x7fe5bcfd24e8 0x7fe5be8af673 0x7fe5bf3120c6 0x7fe5bf30be90 0x7fe5bf308a11 0x7fe5bed72443 0x7fe5bed724f1 0x7fe5be91a201 0x7fe5be91ab1d 0x7fe5bf558ac1 0x7fe5bd34d0a4 0x7fe5bd08204d#012----- BEGIN BACKTRACE -----#012{"backtrace":[{"b":"7FE5BE0A8000","o":"1530A01","s":"_ZN5mongo15printStackTraceERSo"},{"b":"7FE5BE0A8000","o":"152FC19"},{"b":"7FE5BE0A8000","o":"15300FD"},{"b":"7FE5BD345000","o":"F8D0"},{"b":"7FE5BCF9C000","o":"35107","s":"gsignal"},{"b":"7FE5BCF9C000","o":"364E8","s":"abort"},{"b":"7FE5BE0A8000","o":"807673","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"7FE5BE0A8000","o":"126A0C6","s":"_ZN5mongo17wtRCToStatus_slowEiPKc"},{"b":"7FE5BE0A8000","o":"1263E90","s":"_ZN5mongo22WiredTigerSessionCache14releaseSessionEPNS_17WiredTigerSessionE"},{"b":"7FE5BE0A8000","o":"1260A11","s":"_ZN5mongo22WiredTigerRecoveryUnitD0Ev"},{"b":"7FE5BE0A8000","o":"CCA443","s":"_ZN5mongo20OperationContextImplD1Ev"},{"b":"7FE5BE0A8000","o":"CCA4F1","s":"_ZN5mongo20OperationContextImplD0Ev"},{"b":"7FE5BE0A8000","o":"872201","s":"_ZN5mongo23ServiceEntryPointMongod12_sessionLoopERKSt10shared_ptrINS_9transport7SessionEE"},{"b":"7FE5BE0A8000","o":"872B1D"},{"b":"7FE5BE0A8000","o":"14B0AC1"},{"b":"7FE5BD345000","o":"80A4"},{"b":"7FE5BCF9C000","o":"E604D","s":"clone"}],"processInfo":{ "mongodbVersion" : "3.4.9", "gitVersion" : "876ebee8c7dd0e2d992f36a848ff4dc50ee6603e", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.16.0-4-amd64", "version" : "#1 SMP Debian 3.16.39-1 (2016-12-30)", "machine" : "x86_64" }, "somap" : [ { "b" : "7FE5BE0A8000", "elfType" : 3, "buildId" : "3A944BD801720379F84804C8CAD856004FA153E2" }, { "b" : "7FFF1F3AF000", "path" : "linux-vdso.so.1", "elfType" : 3, "buildId" : "856A20F5A861E2D4F656A3D865B4D6158E2D607F" }, { "b" : "7FE5BDC7D000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "2A229233022D1462F5144690C12B0FD92CBD5E9C" }, { "b" : "7FE5BDA79000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "2391B5C1F072DFEB7D8C0BB62B419D4D0BEADC9D" }, { "b" : "7FE5BD778000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "B337AE10C269FFBB088B791A9035B0BF021D3B93" }, { "b" : "7FE5BD562000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "D5FB04F64B3DAEA6D6B68B5E8B9D4D2BC1A6E1FC" }, { "b" : "7FE5BD345000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "79D807D4FC33BA35F385CCE1A8EFEBFA4AB0FECE" }, { "b" : "7FE5BCF9C000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "F93F79E3E47CDA4B7C433D02D321CE401C4A8501" }, { "b" : "7FE5BDE85000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "FAA8E708C782B16D41A876382A9547138790A275" } ] }}#012 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x7fe5bf5d8a01]#012 mongod(+0x152FC19) [0x7fe5bf5d7c19]#012 mongod(+0x15300FD) [0x7fe5bf5d80fd]#012 libpthread.so.0(+0xF8D0) [0x7fe5bd3548d0]#012 libc.so.6(gsignal+0x37) [0x7fe5bcfd1107]#012 libc.so.6(abort+0x148) [0x7fe5bcfd24e8]#012 mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x7fe5be8af673]#012 mongod(_ZN5mongo17wtRCToStatus_slowEiPKc+0x396) [0x7fe5bf3120c6]#012 mongod(_ZN5mongo22WiredTigerSessionCache14releaseSessionEPNS_17WiredTigerSessionE+0x280) [0x7fe5bf30be90]#012 mongod(_ZN5mongo22WiredTigerRecoveryUnitD0Ev+0x11) [0x7fe5bf308a11]#012 
mongod(_ZN5mongo20OperationContextImplD1Ev+0xB3) [0x7fe5bed72443]#012 mongod(_ZN5mongo20OperationContextImplD0Ev+0x11) [0x7fe5bed724f1]#012 mongod(_ZN5mongo23ServiceEntryPointMongod12_sessionLoopERKSt10shared_ptrINS_9transport7SessionEE+0x211) [0x7fe5be91a201]#012 mongod(+0x872B1D) [0x7fe5be91ab1d]#012 mongod(+0x14B0AC1) [0x7fe5bf558ac1]#012 libpthread.so.0(+0x80A4) [0x7fe5bd34d0a4]#012 libc.so.6(clone+0x6D) [0x7fe5bd08204d]#012-----  END BACKTRACE  -----
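The backtrace frames above are module-relative offsets (the "o" fields in the backtrace JSON). A minimal sketch for symbolizing one frame, assuming a local copy of the same mongod binary (buildId 3A944BD8...) and binutils installed:

# Resolve a module-relative offset from the backtrace JSON to a symbol.
# The offset 0x126A0C6 below is taken from the backtrace above.
addr2line -e ./mongod -f -C 0x126A0C6
# expected output (per the backtrace): mongo::wtRCToStatus_slow(int, char const*)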



 Comments   
Comment by Mark Agarunov [ 08/Jan/18 ]

I reproduced this on my workstation directly with a 3-member replset on an ext4 filesystem. After launching the replset I ran ycsb, which caused a crash on some runs and completed on others. I'll start running it with 3.6.1 to see if it reproduces, and will update this ticket when it finishes or crashes.
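For reference, a minimal sketch of launching such a replica set for reproduction; the mlaunch invocation is an assumption (mtools' mlaunch is mentioned below by Vamsi), not the exact command used:

# Launch a 3-member replica set on the local workstation.
mlaunch init --replicaset --nodes 3 --name rsname --dir ./data
# Then run the YCSB workload from this ticket against it.
./bin/ycsb run mongodb -s -P workloads/mongodbtest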

Comment by Vamsi Boyapati [ 07/Jan/18 ]

mark.agarunov

I have tried to reproduce this issue several times, but have not been successful yet.
I tried to form the cluster both with mlaunch and manually.
Could you share more details about the environment in which you were able to reproduce it?
Could you also check whether this issue can be reproduced on one or more recent releases?

Comment by Mark Agarunov [ 17/Oct/17 ]

Hello zhouhancang,

Thank you for providing this information. Using your detailed reproduction steps and ycsb config, I've managed to reproduce this behavior on 3.4.6; interestingly, it occurs only when replication is enabled. I've set the fixVersion on this ticket to 'Needs Triage' to mark it for further investigation. Updates will be posted on this ticket as they become available.

Thanks,
Mark

Comment by hancang2000 [ 13/Oct/17 ]

Hi Mark Agarunov,

dev8-144 is the physical disk /dev/sdj.
dev254-0 is the LVM device.

Our disk monitoring data:
DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
dev8-144 964.85 105.57 177035.73 183.59 280.42 289.73 1.04 99.98
dev254-0 964.17 105.70 177158.98 183.85 283.68 293.29 1.04 99.98
dev8-144 759.34 93.75 147857.85 194.84 184.63 244.79 1.32 100.02
dev254-0 748.77 93.62 146707.71 196.06 186.58 250.90 1.34 100.01
dev8-144 917.84 142.21 143172.15 156.14 149.72 162.94 1.09 99.99
dev254-0 907.79 142.21 144015.01 158.80 150.86 165.93 1.10 100.03
dev8-144 613.34 335.66 169972.16 277.67 228.68 373.56 1.63 100.01
dev254-0 611.29 337.53 169157.51 277.27 231.02 378.60 1.64 100.03
dev8-144 1528.50 188.93 163912.12 107.36 189.53 124.12 0.65 99.99
dev254-0 1512.00 182.82 163938.44 108.55 191.65 126.87 0.66 100.01
dev8-144 891.36 112.60 166976.40 187.45 171.05 190.12 1.12 100.00
dev254-0 882.19 112.99 167736.39 190.27 172.44 193.59 1.13 100.02
dev8-144 522.78 207.38 156039.22 298.88 281.00 538.79 1.91 99.99
dev254-0 523.10 206.98 156655.37 299.87 285.57 547.23 1.91 99.98
dev8-144 594.75 196.98 155861.90 262.39 241.52 407.50 1.68 100.02
dev254-0 585.93 197.11 155331.45 265.44 245.96 421.24 1.71 100.02
dev8-144 1290.85 221.31 175092.65 135.81 152.52 117.44 0.77 100.00
dev254-0 1274.12 217.80 175247.61 137.71 153.88 120.02 0.78 100.01
dev8-144 485.39 193.25 163666.17 337.59 246.86 504.96 2.06 100.01
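For context, these columns match sysstat's sar block-device report; a sketch of how comparable data can be collected (interval and sample count are arbitrary):

# Per-device I/O statistics every second, 10 samples; -p prints device names.
# Columns: tps, rd_sec/s, wr_sec/s, avgrq-sz, avgqu-sz, await, svctm, %util.
sar -d -p 1 10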

Comment by hancang2000 [ 13/Oct/17 ]

Hi Mark Agarunov,

For the lxc containers, how is the storage mapped to the container?
Each lxc container independently uses its own single SSD disk.

Are you using storage pools for lxc?
No.

If so, what kind of storage pools are being used? (lvm, zfs, btrfs, etc)
LVM is used for creating backups, not for storage pools.

Is the drive being passed through directly to the lxc container?
Yes.

Which filesystem is used on this node?
XFS.

Comment by Mark Agarunov [ 12/Oct/17 ]

Hello zhouhancang,

Thank you for providing this information. We are still investigating but have not yet been able to reproduce this behavior. To better diagnose this, I'd like to clarify what the storage layer for these nodes looks like (a sketch of commands that could collect this information follows the list):

  • For the lxc containers, how is the storage mapped to the container?
  • Are you using storage pools for lxc?
  • If so, what kind of storage pools are being used? (lvm, zfs, btrfs, etc)
  • Is the drive being passed through directly to the lxc container?
  • Which filesystem is used on this node?
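A hedged sketch of commands that could gather this information (lxc CLI syntax varies between classic lxc and LXD; <container> is a placeholder):

# Block devices and their filesystems on the host.
lsblk -f
# Filesystem type backing the dbpath.
df -T /home/data/mongodb/db
# Storage/device mapping for the container (LXD syntax; classic lxc keeps
# this in the container's config file instead).
lxc config show <container>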

Thanks,
Mark

Comment by hancang2000 [ 12/Oct/17 ]

Hi Kelsey T Schubert,

I used three lxc containers to create the replica set. Each container is configured with 25 GB of memory, its own SSD disk, and 3 CPU cores. The replica set is new, with no data in it; the dbpath is empty.

The mongodb config is as follows:

# mongodb.conf

# Where to store the data.
# Note: if you run mongodb as a non-root user (recommended) you may
# need to create and set permissions for this directory manually,
# e.g., if the parent directory isn't mutable by the mongodb user.
dbpath=/home/data/mongodb/db

#where to log
#logpath=/home/log/mongodb/mongodb.log
syslog=true
logappend=true

port = 27017

# Enables periodic logging of CPU utilization and I/O wait
cpu = true

# Turn on/off security. Off is currently the default
#noauth = true
auth = true

keyFile = /home/mongodb/keyfile

# Disable the HTTP interface (Defaults to localhost:27018).
nohttpinterface = true

# in replica set configuration, specify the name of the replica set
replSet = rsname

# max connections
maxConns = 65535

#profile=1
#slowms=1000
smallfiles = true
noprealloc = true
#noMoveParanoia = true
#syncdelay=600
oplogSize=20000
wiredTigerDirectoryForIndexes = true
directoryperdb = true
wiredTigerCacheSizeGB = 16
shardsvr = true
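For context, a minimal sketch of starting one member with this config and initiating the replica set; host names and the config path are assumptions (the options above use the legacy INI format accepted by 3.4):

# Start one replica-set member with the config above.
mongod --config /etc/mongodb.conf
# From a mongo shell, initiate the set once all three members are up
# (host names are hypothetical).
mongo --eval 'rs.initiate({_id: "rsname", members: [
  {_id: 0, host: "node1:27017"},
  {_id: 1, host: "node2:27017"},
  {_id: 2, host: "node3:27017"}]})'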

Comment by hancang2000 [ 11/Oct/17 ]

Hi Kelsey T Schubert,

I used ycsb to run an insert test against the MongoDB replica set, which uncovered this problem.

The ycsb config is as follows:

# Yahoo! Cloud System Benchmark
# Workload A: Update heavy workload
#   Application example: Session store recording recent actions
#                        
#   Read/update ratio: 50/50
#   Default data size: 1 KB records (10 fields, 100 bytes each, plus key)
#   Request distribution: zipfian
 
#recordcount=100000000
recordcount=100000000
#operationcount=100000000
operationcount=100000000
workload=com.yahoo.ycsb.workloads.CoreWorkload
threadcount = 50
 
readallfields=true
 
readproportion=0
updateproportion=0
scanproportion=0
insertproportion=1
readmodifywriteproportion=0
 
requestdistribution=zipfian
insertorder = hashed
readallfields=true
fieldlength = 200
fieldcount = 10
mongodb.writeConcern = normal
mongodb.database = neteasedb
mongodb.url = mongodb://user:password@ip:27017/admin?replicaSet=rsname

On two machines, I started the following process:
/home/data/YCSB-master# ./bin/ycsb run mongodb -s -P workloads/mongodbtest > inserttest34.log 2>&1 &
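Note that a YCSB run phase is normally preceded by a load phase; a sketch of the full sequence under that assumption, with the same paths as above:

# Load the initial records, then run the insert workload in the background.
./bin/ycsb load mongodb -s -P workloads/mongodbtest > loadtest34.log 2>&1
./bin/ycsb run mongodb -s -P workloads/mongodbtest > inserttest34.log 2>&1 &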

The complete logs are as follows:

2017-10-10T16:28:48.742539+08:00 [conn31] command ycsb.testcoll1 command: insert { insert: "testcoll1", ordered: true, documents: [ { _id: "user5166734213797165635
", field5: BinData(0, 293D2C305F213C527F2E56612C536F2F2270284E77224B7D37323628416F35213E312B7439453B25282026273C355C39385F7F322B2C263C6426337830382E353D6630402B38.
..), field4: BinData(0, 395A633A51293F387226246E3C2A383441212625362434222F427B22467B25263A21352A2C253E25212A324D73372B6C29253E3C4661304B3F254A213053712B23222B446F2
B...), field3: BinData(0, 2E542D31272227552F38306C20586F373A68205671263B2C2F3A663647353B44612B582D284C6D3E527F2123363D306E3D5C7D3B49352E56633E42332530723E326A2F396
636...), field2: BinData(0, 352E7A3B2E62205A772A4773282132244969285779332B2232596F2E4A752B5E3B355F65305229353222333F7A375E37243538255A2B38586F3B3F74395F6B2F2422375
D6B3A...), field9: BinData(0, 3F3B282D3566235A652E583B3259313C376A3734223C3A34333A3C25293422293E3122303122603F332227293E2456213242792F562F37323C255267323426303A7C2
838623F...), field8: BinData(0, 2E3878362F342C3D202922382D3B202C2D3E235C23293F2833286236217424367A2E4825244177365D2B364B633242613A49273C58693B3726202C26303C6433346
A315D2B29...), field7: BinData(0, 265B3F39383A3E412F3128703257613E2968382A623F3B3E3645212055353B252E38366A295A273D21703A5773372D3C38406336527D283724302A323138362D2
A3425292431...), field6: BinData(0, 375F2935222E2A5227393460305A23382920305E272C2C30275A292142632C57273D36703948713B483B2C507B364F2529306C2A21783F4D3338467B383B202
45B693657633A...), field1: BinData(0, 3A297C3F322A3B32782E2D323B5D6732243C3955252D39263B22703E5C7B223A2C3158313022703B517539347427376A21427D263E642A4D7B36542B2C4F3
F202E2E24356222...), field0: BinData(0, 2D5935393F2A2A52753B4221263D603345313525222C20642F472D2D57273B503524542F3434322822242634343138643B363429512F39496F274337293
B762A5729372D3C25...) } ] } ninserted:1 keysInserted:1 numYields:0 reslen:184 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } 
}, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_query 379ms
2017-10-10T16:28:48.748570+08:00 [conn40] WiredTiger error (0) [1507624128:748539][523:0x7efd1dca0700], file:local/collection/4--2637432162362236231.wt, WT_CURSOR.insert: encountered an illegal file format or internal value
2017-10-10T16:28:48.748596+08:00 [conn40] WiredTiger error (-31804) [1507624128:748584][523:0x7efd1dca0700], file:local/collection/4--2637432162362236231.wt, WT_CURSOR.insert: the process must exit and restart: WT_PANIC: WiredTiger library panic
2017-10-10T16:28:48.748630+08:00 [conn40] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361
2017-10-10T16:28:48.748713+08:00 [conn68] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.748718+08:00 [conn40] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.748728+08:00 [conn68] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.748787+08:00 [conn105] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.748796+08:00 [conn105] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.748998+08:00 [thread3] WiredTiger error (-31804) [1507624128:748967][523:0x7efd3cded700], eviction-server: cache eviction thread error: WT_PANIC: WiredTiger library panic
2017-10-10T16:28:48.749048+08:00 [thread3] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361
2017-10-10T16:28:48.749075+08:00 [thread3] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.749154+08:00 [conn60] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.749188+08:00 [conn60] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.749262+08:00 [conn103] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.749296+08:00 [conn103] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.749380+08:00 [conn92] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.749414+08:00 [conn92] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.749483+08:00 [conn31] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.749517+08:00 [conn31] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.749556+08:00 [conn112] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.749586+08:00 [conn112] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.751381+08:00 [thread4] WiredTiger error (-31804) [1507624128:751358][523:0x7efd3ddef700], eviction-server: cache eviction thread error: WT_PANIC: WiredTiger library panic
2017-10-10T16:28:48.751435+08:00 [thread4] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361
2017-10-10T16:28:48.751465+08:00 [thread4] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.751663+08:00 [thread5] WiredTiger error (-31804) [1507624128:751645][523:0x7efd3d5ee700], eviction-server: cache eviction thread error: WT_PANIC: WiredTiger library panic
2017-10-10T16:28:48.751697+08:00 [thread5] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361
2017-10-10T16:28:48.751723+08:00 [thread5] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.753405+08:00 [thread6] WiredTiger error (-31804) [1507624128:753371][523:0x7efd3edf1700], log-server: log server error: WT_PANIC: WiredTiger library panic
2017-10-10T16:28:48.753421+08:00 [thread6] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 361
2017-10-10T16:28:48.753430+08:00 [thread6] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.753559+08:00 [conn48] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.753766+08:00 [conn48] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.753847+08:00 [conn119] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.753890+08:00 [conn119] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.753918+08:00 [conn36] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.753944+08:00 [conn36] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.753970+08:00 [conn63] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.753994+08:00 [conn63] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754019+08:00 [conn115] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754044+08:00 [conn115] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754125+08:00 [conn33] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754164+08:00 [conn69] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754420+08:00 [conn42] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754469+08:00 [conn137] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754489+08:00 [conn35] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754507+08:00 [conn80] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754523+08:00 [conn114] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754537+08:00 [conn102] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754551+08:00 [conn55] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754566+08:00 [conn53] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754573+08:00 [conn135] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754627+08:00 [conn77] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754647+08:00 [conn67] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754657+08:00 [conn67] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754664+08:00 [conn100] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754673+08:00 [conn100] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754694+08:00 [conn61] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754719+08:00 [conn61] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754745+08:00 [conn99] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754754+08:00 [conn99] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754822+08:00 [conn122] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754870+08:00 [conn121] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754881+08:00 [conn121] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754896+08:00 [conn137] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754907+08:00 [conn117] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754916+08:00 [conn117] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754928+08:00 [conn35] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.754944+08:00 [conn74] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64
2017-10-10T16:28:48.754972+08:00 [conn80] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.755032+08:00 [conn55] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.755069+08:00 [conn114] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.755094+08:00 [conn102] #012#012***aborting after fassert() failure#012#012
2017-10-10T16:28:48.755101+08:00 [conn52] Fatal Assertion 28559 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 64

Comment by Kelsey Schubert [ 29/Sep/17 ]

Hi zhouhancang,

We still need additional information to diagnose the problem. If this is still an issue for you, would you please provide the information Mark requested?

Additionally, would you please clarify how you are reproducing this issue? Are you running these tests on a clean mongod (e.g. empty $dbpath)?

Thank you,
Kelsey

Comment by Mark Agarunov [ 18/Sep/17 ]

Hello zhouhancang,

Thank you for the report. To get a better idea of what may be causing this behavior, could you please provide the complete logs from the affected mongod node?

Thanks,
Mark
