[SERVER-16348] Assertion failure n >= 0 && n < static_cast<int>(_files.size()) src/mongo/db/storage/extent_manager.cpp 109 Created: 28/Nov/14  Updated: 07/Apr/23  Resolved: 18/May/15

Status: Closed
Project: Core Server
Component/s: Index Maintenance
Affects Version/s: 2.6.9
Fix Version/s: 2.6.11

Type: Bug Priority: Critical - P2
Reporter: Glen Miner Assignee: Eric Milkie
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
is duplicated by SERVER-17927 Dropping collection during active bac... Closed
Related
Backwards Compatibility: Minor Change
Steps To Reproduce:

If I had to guess, this might be two different clients doing a background ensureIndex on the same collection.

We updated to 2.6.6 from 2.4 a few weeks ago – the code in question has been running faithfully for about a year. We tend to do this a lot (create transient collections and delete them weekly with lazy ensureIndex on demand) so I'm terrified about this causing stability problems.
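For reference, a minimal mongo shell sketch of the suspected sequence; the collection names here are placeholders rather than the production names, and this is an illustration of the pattern described above, not a confirmed reproduction:

    // Hypothetical sketch of the suspected sequence (names are placeholders).
    // The race is between a background index build, replicated to and still
    // running on a secondary, and the subsequent renameCollection.
    var d = db.getSiblingDB("lotus_stats");
    d.getCollection("temp.Scores").insert({ s: NumberLong(1), n: "x", r: 1 });
    d.getCollection("temp.Scores").ensureIndex(
        { r: 1 },
        { unique: true, sparse: false, background: true });
    // While a secondary is still applying the background build from the oplog,
    // this rename can hit the in-progress build:
    d.adminCommand({
        renameCollection: "lotus_stats.temp.Scores",
        to: "lotus_stats.archived.Scores",
        dropTarget: true
    });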

Sprint: RPL 4 06/05/15
Participants:

Description

We just had 2/3 of the servers in our cluster crash, and then continue to crash when restarted. The log says:

2014-11-28T15:30:01.479-0500 [repl writer worker 6]      added index to empty collection
2014-11-28T15:30:01.486-0500 [repl writer worker 6] warning: newExtent 1427 scanned
2014-11-28T15:30:01.503-0500 [repl index builder 933] build index on: lotus_stats.temp.InfestedEventScoreT1 properties: { v: 1, unique: true, key: { r: 1.0 }, name: "r_1", ns: "lotus_stats.temp.InfestedEventScoreT1", sparse: false, background: true }
2014-11-28T15:30:01.503-0500 [repl index builder 933]    building index in background
2014-11-28T15:30:01.504-0500 [repl index builder 933] build index done.  scanned 297 total records. 0.001 secs
2014-11-28T15:30:01.533-0500 [repl writer worker 7] build index on: lotus_stats.temp.InfestedEventScoreT2 properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "lotus_stats.temp.InfestedEventScoreT2" }
2014-11-28T15:30:01.534-0500 [repl writer worker 7]      added index to empty collection
2014-11-28T15:30:01.535-0500 [repl writer worker 7] warning: newExtent 1427 scanned
2014-11-28T15:30:01.552-0500 [repl index builder 934] build index on: lotus_stats.temp.InfestedEventScoreT2 properties: { v: 1, unique: true, key: { r: 1.0 }, name: "r_1", ns: "lotus_stats.temp.InfestedEventScoreT2", sparse: false, background: true }
2014-11-28T15:30:01.552-0500 [repl index builder 934]    building index in background
2014-11-28T15:30:01.552-0500 [repl writer worker 7] halting index build: { r: 1.0 }
2014-11-28T15:30:01.566-0500 [repl writer worker 7] halted 1 index build(s)
2014-11-28T15:30:01.576-0500 [repl writer worker 7] uh oh: 36864
2014-11-28T15:30:01.590-0500 [repl writer worker 7] lotus_stats.archived.InfestedEventScoreT2 Assertion failure n >= 0 && n < static_cast<int>(_files.size()) src/mongo/db/storage/extent_manager.cpp 109
2014-11-28T15:30:01.634-0500 [repl writer worker 7] lotus_stats.archived.InfestedEventScoreT2 0x11e9b11 0x118b849 0x116fb5e 0xefb319 0xefb7dd 0xf00739 0x8cf83d 0x9bdcff 0xa2939a 0xa2b151 0xa2c9a6 0xe547d3 0xeb8d9e 0xeb96b0 0x117f0ee 0x122e4a9 0x7fdd0b04fe9a 0x7fdd0a36331d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11e9b11]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x118b849]
 /usr/bin/mongod(_ZN5mongo12verifyFailedEPKcS1_j+0x17e) [0x116fb5e]
 /usr/bin/mongod(_ZNK5mongo13ExtentManager12_getOpenFileEi+0xc9) [0xefb319]
 /usr/bin/mongod(_ZNK5mongo13ExtentManager9recordForERKNS_7DiskLocE+0x1d) [0xefb7dd]
 /usr/bin/mongod(_ZNK5mongo7DiskLoc3objEv+0x19) [0xf00739]
 /usr/bin/mongod(_ZN5mongo8Database16renameCollectionERKNS_10StringDataES3_b+0x9cd) [0x8cf83d]
 /usr/bin/mongod(_ZN5mongo19CmdRenameCollection3runERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x38cf) [0x9bdcff]
 /usr/bin/mongod(_ZN5mongo12_execCommandEPNS_7CommandERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x3a) [0xa2939a]
 /usr/bin/mongod(_ZN5mongo7Command11execCommandEPS0_RNS_6ClientEiPKcRNS_7BSONObjERNS_14BSONObjBuilderEb+0x19b1) [0xa2b151]
 /usr/bin/mongod(_ZN5mongo12_runCommandsEPKcRNS_7BSONObjERNS_11_BufBuilderINS_16TrivialAllocatorEEERNS_14BSONObjBuilderEbi+0x6c6) [0xa2c9a6]
 /usr/bin/mongod(_ZN5mongo21applyOperation_inlockERKNS_7BSONObjEbb+0x973) [0xe547d3]
 /usr/bin/mongod(_ZN5mongo7replset8SyncTail9syncApplyERKNS_7BSONObjEb+0x4fe) [0xeb8d9e]
 /usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x50) [0xeb96b0]
 /usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x19e) [0x117f0ee]
 /usr/bin/mongod() [0x122e4a9]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7fdd0b04fe9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fdd0a36331d]
2014-11-28T15:30:01.665-0500 [repl writer worker 7] warning: repl Failed command { renameCollection: "lotus_stats.temp.InfestedEventScoreT2", to: "lotus_stats.archived.InfestedEventScoreT2", dropTarget: true } on admin with status UnknownError exception: assertion src/mongo/db/storage/extent_manager.cpp:109 during oplog application
2014-11-28T15:30:01.666-0500 [repl index builder 934] index build failed. spec: { v: 1, unique: true, key: { r: 1.0 }, name: "r_1", ns: "lotus_stats.temp.InfestedEventScoreT2", sparse: false, background: true } error: 11601 operation was interrupted
2014-11-28T15:30:01.666-0500 [repl index builder 934] lotus_stats.temp.InfestedEventScoreT2 Fatal Assertion 17204
2014-11-28T15:30:01.670-0500 [repl index builder 934] lotus_stats.temp.InfestedEventScoreT2 0x11e9b11 0x118b849 0x116e37d 0x8e387f 0x8e56ee 0xb8b93f 0xb8c498 0x1171532 0x122e4a9 0x7fdd0b04fe9a 0x7fdd0a36331d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11e9b11]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x118b849]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xcd) [0x116e37d]
 /usr/bin/mongod(_ZN5mongo12IndexCatalog15IndexBuildBlock4failEv+0x14f) [0x8e387f]
 /usr/bin/mongod(_ZN5mongo12IndexCatalog11createIndexENS_7BSONObjEbNS0_16ShutdownBehaviorE+0xa5e) [0x8e56ee]
 /usr/bin/mongod(_ZNK5mongo12IndexBuilder5buildERNS_6Client7ContextE+0x54f) [0xb8b93f]
 /usr/bin/mongod(_ZN5mongo12IndexBuilder3runEv+0x728) [0xb8c498]
 /usr/bin/mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0xd2) [0x1171532]
 /usr/bin/mongod() [0x122e4a9]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7fdd0b04fe9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fdd0a36331d]
2014-11-28T15:30:01.670-0500 [repl index builder 934]
 
***aborting after fassert() failure

uname -a
Linux war-stats1 3.13.0-39-generic #66~precise1-Ubuntu SMP Wed Oct 29 09:56:49 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux



Comments
Comment by Eric Milkie [ 18/May/15 ]

With this fix, renaming a collection with concurrent background indexes is no longer possible (it will return an error).
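For deployments still running builds without the fix, one possible interim mitigation (a sketch only, not part of the fix) is to poll currentOp for an in-progress build on the source namespace before issuing the rename. The helper name below is invented, the msg-string match is an assumption about how 2.6 surfaces background index builds in currentOp, and the check is inherently racy; the actual fix simply makes the rename return an error instead.

    // Sketch only: poll for an active index build on the source namespace
    // before renaming. Assumes (untested here) that an in-progress build is
    // visible in db.currentOp() with a msg containing "Index Build".
    function indexBuildInProgress(ns) {
        return db.currentOp().inprog.some(function (op) {
            return op.ns === ns && op.msg && op.msg.indexOf("Index Build") >= 0;
        });
    }
    while (indexBuildInProgress("lotus_stats.temp.InfestedEventScoreT1")) {
        sleep(1000); // mongo shell helper; wait one second between checks
    }
    // ...then issue the renameCollection as before.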

Comment by Githook User [ 18/May/15 ]

Author:

{u'username': u'milkie', u'name': u'Eric Milkie', u'email': u'milkie@10gen.com'}

Message: SERVER-16348 prohibit renaming a collection with bg indexes in progress
Branch: v2.6
https://github.com/mongodb/mongo/commit/a029932768cdc12dd86a0afee4a7411065230c5a

Comment by Githook User [ 02/Mar/15 ]

Author:

{u'username': u'kaloianm', u'name': u'Kaloian Manassiev', u'email': u'kaloian.manassiev@mongodb.com'}

Message: Revert "SERVER-16348 renameCollection should skip in-progress index builds"

This reverts commit 221e9a82b87e4f3297b4b057820c90820bf0d009.
Branch: v2.6
https://github.com/mongodb/mongo/commit/66032f9ec00a0f16cbb2ad0565548c6b1a564099

Comment by Githook User [ 02/Mar/15 ]

Author:

{u'username': u'kaloianm', u'name': u'Kaloian Manassiev', u'email': u'kaloian.manassiev@mongodb.com'}

Message: SERVER-16348 renameCollection should skip in-progress index builds
Branch: v2.6
https://github.com/mongodb/mongo/commit/221e9a82b87e4f3297b4b057820c90820bf0d009

Comment by Eric Milkie [ 11/Feb/15 ]

The crash is different on 3.1.0-pre:

2015-02-11T08:08:58.430-0500 I INDEX    [repl index builder 1] build index on: test.c properties: { v: 1, key: { b: 1.0, i: 1.0 }, name: "b_1_i_1", ns: "test.c", background: true }
2015-02-11T08:08:58.707-0500 I INDEX    [repl index builder 2] build index on: test.c properties: { v: 1, key: { i: 1.0 }, name: "i_1", ns: "test.c", background: true }
2015-02-11T08:09:00.574-0500 I INDEX    [repl writer worker 15] allocating new ns file /media/DATA2/data/m/ent/mongo/data/replset/rs2/db/admin.ns, filling with zeroes...
2015-02-11T08:09:00.654-0500 I STORAGE  [FileAllocator] allocating new datafile /media/DATA2/data/m/ent/mongo/data/replset/rs2/db/admin.0, filling with zeroes...
2015-02-11T08:09:00.655-0500 I STORAGE  [FileAllocator] done allocating datafile /media/DATA2/data/m/ent/mongo/data/replset/rs2/db/admin.0, size: 64MB,  took 0 secs
2015-02-11T08:09:00.658-0500 I INDEX    [repl writer worker 15] halting index build: { i: 1.0 }
2015-02-11T08:09:00.658-0500 I INDEX    [repl writer worker 15] halting index build: { b: 1.0, i: 1.0 }
2015-02-11T08:09:00.658-0500 I INDEX    [repl writer worker 15] halted 2 index build(s)
2015-02-11T08:09:00.658-0500 I INDEX    [repl writer worker 15] found 2 index(es) that wasn't finished before shutdown
2015-02-11T08:09:00.659-0500 I -        [repl index builder 1] Fatal Assertion 28554
2015-02-11T08:09:00.662-0500 I CONTROL  [repl index builder 1] 
 mongod(_ZN5mongo15printStackTraceERSo+0x29) [0x107a2a9]
 mongod(_ZN5mongo10logContextEPKc+0x105) [0x1019b95]
 mongod(_ZN5mongo13fassertFailedEi+0xDA) [0x10063da]
 mongod(_ZNK5mongo12IndexBuilder6_buildEPNS_16OperationContextEPNS_8DatabaseEbPNS_4Lock6DBLockE+0x65F) [0xaf1a1f]
 mongod(_ZN5mongo12IndexBuilder3runEv+0x2EC) [0xaf0b5c]
 mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0x142) [0x1007f32]
 mongod(+0xCB8BFC) [0x10b8bfc]
 libpthread.so.0(+0x7EE5) [0x7f23d3c22ee5]
 libc.so.6(clone+0x6D) [0x7f23d322bb8d]
-----  END BACKTRACE  -----

Comment by Glen Miner [ 02/Dec/14 ]

I can confirm we are doing renames and background indexing, and queries would have been hitting the rename target the whole while. It seems likely that this is the same problem.

Comment by Glen Miner [ 02/Dec/14 ]

The full database is about 13GB – I may have problems with permission, though, since some of the data may be sensitive.

Comment by Asya Kamsky [ 01/Dec/14 ]

I realized this might be a variant of this bug https://jira.mongodb.org/browse/SERVER-11716

"smashed over the live collection"

You are using renameCollection; when the collection has background indexes in progress, the rename is not atomic the way it would be if the indexes were built in the foreground, so I can see how, if you had other processes trying to do something else to this collection at the same time, it could cause a bad interaction.

I'd like to confirm that this is the case though before marking this as duplicate of that bug (and adding the details over there).

Comment by Asya Kamsky [ 01/Dec/14 ]

We would definitely need the actual malfunctioning DB files, i.e. what's in the dbpath directory.

How large are they?

Comment by Glen Miner [ 01/Dec/14 ]

I've since resurrected the cluster and rewritten the script so that we don't use temp – the indexes are the same, though: (we rename db.temp.* to db.archived.* when finished).

The data is extremely tame so I don't see why I can't send it to you – I just mongoexported the two collections involved and it's just over 10MB after gz. I can SFTP or attach here by private comment – whatever is easiest for you. I'm not sure if there's some nuance lost in export so maybe dump or full snapshot is required – either way I can try to help set you up with a repro.

rsWarStats-test:PRIMARY> db.archived.InfestedEventScoreT1.getIndexes()
[
        {
                "v" : 1,
                "key" : {
                        "_id" : 1
                },
                "name" : "_id_",
                "ns" : "lotus_stats.archived.InfestedEventScoreT1"
        },
        {
                "v" : 1,
                "key" : {
                        "r" : 1
                },
                "name" : "r_1",
                "ns" : "lotus_stats.archived.InfestedEventScoreT1"
        }
]

All documents in these collections are uniform and look like this

rsWarStats-test:PRIMARY> db.archived.InfestedEventScoreT1.findOne()
{
        "_id" : ObjectId("5091638516827f1e0f000000"),
        "s" : NumberLong(209983),
        "n" : "Uniframe",
        "r" : 1
}

(it's clan leaderboard data for a game – score, name and rank).

Comment by Asya Kamsky [ 30/Nov/14 ]

Would it be possible to provide the output of db.xxx.getIndexes() from the affected collection?

One thing we could do: if you can share the data files from one of the nodes that is crashing/cannot restart (maybe from the test cluster?), we can provide you a secure site to scp the data to. It might significantly speed up triaging the problem.

Comment by Glen Miner [ 29/Nov/14 ]

I copied some of the data involved to a test cluster and crashed it too. I have rewritten things to avoid using a temp collection – it isn't as atomic sadly but it's better than crashing!

Comment by Glen Miner [ 29/Nov/14 ]

New theory: a cron job was smashing the collection in question throughout the long replication. We periodically generated a replacement collection (lotus_stats.temp.InfestedEventScoreT1), indexed it, and then smashed it over the live collection:

    var indexOptions = { unique: true, sparse: false, background: true };
 
    tempCollection.ensureIndex({ r : 1 }, indexOptions);
    e = db.getLastError(); if(e) { print(e); quit(1); }
 
    if(!__quiet)
    {
        print("Moving " + tempCollectionName + " => " + outputCollectionName);
    }
 
    tempCollection.renameCollection(outputCollectionName, true);
    e = db.getLastError(); if(e) { print(e); quit(1); }

I'm guessing this probably happened at least a dozen times during the replication.

I've disabled this cron and am mongodump'ing / remove()ing a bunch of data I can do without to speed up replication so hopefully the next test cycle is only a few hrs.
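As a side note, given the observation above that the rename is only non-atomic when the indexes are built in the background, one hedged variant of the original script would be to build the unique index in the foreground on the still-private temp collection, so no background build can overlap the rename. A sketch using the same variable names as the script above; this is a possible workaround, not a tested fix:

    // Sketch of a possible workaround: a foreground build on the small,
    // not-yet-live temp collection completes (and replicates as a foreground
    // build) before the rename, so the rename cannot race with it.
    var indexOptions = { unique: true, sparse: false, background: false };
 
    tempCollection.ensureIndex({ r : 1 }, indexOptions);
    e = db.getLastError(); if(e) { print(e); quit(1); }
 
    tempCollection.renameCollection(outputCollectionName, true);
    e = db.getLastError(); if(e) { print(e); quit(1); }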

Comment by Glen Miner [ 28/Nov/14 ]

We just resync'd one of the replicas from scratch for 4 hours and it died again.

2014-11-28T18:30:03.321-0500 [repl writer worker 11] build index on: lotus_stats.temp.InfestedEventScoreT1 properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "lotus_stats.temp.InfestedEventScoreT1" }
2014-11-28T18:30:03.321-0500 [repl writer worker 11]     added index to empty collection
2014-11-28T18:30:03.422-0500 [repl index builder 0] build index on: lotus_stats.temp.InfestedEventScoreT1 properties: { v: 1, unique: true, key: { r: 1.0 }, name: "r_1", ns: "lotus_stats.temp.InfestedEventScoreT1", sparse: false, background: true }
2014-11-28T18:30:03.422-0500 [repl index builder 0]      building index in background
2014-11-28T18:30:03.440-0500 [repl writer worker 2] halting index build: { r: 1.0 }
2014-11-28T18:30:03.440-0500 [repl writer worker 2] halted 1 index build(s)
2014-11-28T18:30:03.453-0500 [repl writer worker 2] uh oh: 524288
2014-11-28T18:30:03.453-0500 [repl writer worker 2] lotus_stats.archived.InfestedEventScoreT1 Assertion failure n >= 0 && n < static_cast<int>(_files.size()) src/mongo/db/storage/extent_manager.cpp 109
2014-11-28T18:30:03.493-0500 [repl writer worker 2] lotus_stats.archived.InfestedEventScoreT1 0x11e9b11 0x118b849 0x116fb5e 0xefb319 0xefb7dd 0xf00739 0x8cf83d 0x9bdcff 0xa2939a 0xa2b151 0xa2c9a6 0xe547d3 0xeb8d9e 0xeb96b0 0x117f0ee 0x122e4a9 0x7f316c0b5e9a 0x7f316b3c931d 
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11e9b11]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x118b849]
 /usr/bin/mongod(_ZN5mongo12verifyFailedEPKcS1_j+0x17e) [0x116fb5e]
 /usr/bin/mongod(_ZNK5mongo13ExtentManager12_getOpenFileEi+0xc9) [0xefb319]
 /usr/bin/mongod(_ZNK5mongo13ExtentManager9recordForERKNS_7DiskLocE+0x1d) [0xefb7dd]
 /usr/bin/mongod(_ZNK5mongo7DiskLoc3objEv+0x19) [0xf00739]
 /usr/bin/mongod(_ZN5mongo8Database16renameCollectionERKNS_10StringDataES3_b+0x9cd) [0x8cf83d]
 /usr/bin/mongod(_ZN5mongo19CmdRenameCollection3runERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x38cf) [0x9bdcff]
 /usr/bin/mongod(_ZN5mongo12_execCommandEPNS_7CommandERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x3a) [0xa2939a]
 /usr/bin/mongod(_ZN5mongo7Command11execCommandEPS0_RNS_6ClientEiPKcRNS_7BSONObjERNS_14BSONObjBuilderEb+0x19b1) [0xa2b151]
 /usr/bin/mongod(_ZN5mongo12_runCommandsEPKcRNS_7BSONObjERNS_11_BufBuilderINS_16TrivialAllocatorEEERNS_14BSONObjBuilderEbi+0x6c6) [0xa2c9a6]
 /usr/bin/mongod(_ZN5mongo21applyOperation_inlockERKNS_7BSONObjEbb+0x973) [0xe547d3]
 /usr/bin/mongod(_ZN5mongo7replset8SyncTail9syncApplyERKNS_7BSONObjEb+0x4fe) [0xeb8d9e]
 /usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x50) [0xeb96b0]
 /usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x19e) [0x117f0ee]
 /usr/bin/mongod() [0x122e4a9]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f316c0b5e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f316b3c931d]
2014-11-28T18:30:03.511-0500 [repl writer worker 2] warning: repl Failed command { renameCollection: "lotus_stats.temp.InfestedEventScoreT1", to: "lotus_stats.archived.InfestedEventScoreT1", dropTarget: true } on admin with status UnknownError exception: assertion src/mongo/db/storage/extent_manager.cpp:109 during oplog application
2014-11-28T18:30:03.532-0500 [repl index builder 0] index build failed. spec: { v: 1, unique: true, key: { r: 1.0 }, name: "r_1", ns: "lotus_stats.temp.InfestedEventScoreT1", sparse: false, background: true } error: 11601 operation was interrupted
2014-11-28T18:30:03.532-0500 [repl index builder 0] lotus_stats.temp.InfestedEventScoreT1 Fatal Assertion 17204
2014-11-28T18:30:03.537-0500 [repl index builder 0] lotus_stats.temp.InfestedEventScoreT1 0x11e9b11 0x118b849 0x116e37d 0x8e387f 0x8e56ee 0xb8b93f 0xb8c498 0x1171532 0x122e4a9 0x7f316c0b5e9a 0x7f316b3c931d 
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11e9b11]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x118b849]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xcd) [0x116e37d]
 /usr/bin/mongod(_ZN5mongo12IndexCatalog15IndexBuildBlock4failEv+0x14f) [0x8e387f]
 /usr/bin/mongod(_ZN5mongo12IndexCatalog11createIndexENS_7BSONObjEbNS0_16ShutdownBehaviorE+0xa5e) [0x8e56ee]
 /usr/bin/mongod(_ZNK5mongo12IndexBuilder5buildERNS_6Client7ContextE+0x54f) [0xb8b93f]
 /usr/bin/mongod(_ZN5mongo12IndexBuilder3runEv+0x728) [0xb8c498]
 /usr/bin/mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0xd2) [0x1171532]
 /usr/bin/mongod() [0x122e4a9]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f316c0b5e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f316b3c931d]
2014-11-28T18:30:03.537-0500 [repl index builder 0] 
 
***aborting after fassert() failure
 
 
2014-11-28T18:30:03.542-0500 [repl index builder 0] SEVERE: Got signal: 6 (Aborted).
Backtrace:0x11e9b11 0x11e8eee 0x7f316b30b150 0x7f316b30b0d5 0x7f316b30e83b 0x116e3ea 0x8e387f 0x8e56ee 0xb8b93f 0xb8c498 0x1171532 0x122e4a9 0x7f316c0b5e9a 0x7f316b3c931d 
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11e9b11]
 /usr/bin/mongod() [0x11e8eee]
 /lib/x86_64-linux-gnu/libc.so.6(+0x36150) [0x7f316b30b150]
 /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x7f316b30b0d5]
 /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b) [0x7f316b30e83b]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0x13a) [0x116e3ea]
 /usr/bin/mongod(_ZN5mongo12IndexCatalog15IndexBuildBlock4failEv+0x14f) [0x8e387f]
 /usr/bin/mongod(_ZN5mongo12IndexCatalog11createIndexENS_7BSONObjEbNS0_16ShutdownBehaviorE+0xa5e) [0x8e56ee]
 /usr/bin/mongod(_ZNK5mongo12IndexBuilder5buildERNS_6Client7ContextE+0x54f) [0xb8b93f]
 /usr/bin/mongod(_ZN5mongo12IndexBuilder3runEv+0x728) [0xb8c498]
 /usr/bin/mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0xd2) [0x1171532]
 /usr/bin/mongod() [0x122e4a9]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f316c0b5e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f316b3c931d]
