[DOCS-12641] Docs for SERVER-40321: Rolling back a prepared transaction on a capped collection leads to an invariant failure Created: 18/Apr/19  Updated: 13/Nov/23  Resolved: 16/Jul/19

Status: Closed
Project: Documentation
Component/s: manual
Affects Version/s: None
Fix Version/s: 4.1.11, Server_Docs_20231030, Server_Docs_20231106, Server_Docs_20231105, Server_Docs_20231113

Type: Task
Priority: Major - P3
Reporter: Robert Justice (Inactive)
Assignee: Kay Kim (Inactive)
Resolution: Duplicate
Votes: 0
Labels: prepare_durability, rbfz, txn_storage
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Documented
documents SERVER-40321 Rolling back a prepared transaction o... Closed
Duplicate
duplicates DOCS-12701 Docs for SERVER-40684: Ban transactio... Closed
Related
is related to DOCS-12919 Investigate changes in SERVER-42372: ... Closed
Epic Link: DOCS: 4.2 Server/Tools
Story Points: 0.25

 Description

Documentation Request Summary:

Transactions on shard servers cannot operate on capped collections. However, transactions on replica sets that are not part of a sharded cluster can still operate on capped collections.
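
For illustration only, a minimal sketch of the behavior described above, written against the mongocxx driver (the connection string, namespace names, and the create_collection options form are assumptions for this sketch, not details taken from this ticket):

#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/builder/basic/kvp.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

#include <iostream>

int main() {
    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;

    mongocxx::instance inst{};  // one instance per process
    // Assumed connection string; point this at a replica set member, not mongos.
    mongocxx::client client{mongocxx::uri{"mongodb://localhost:27017/?replicaSet=rs0"}};
    auto db = client["test"];

    // Create a capped collection (throws if it already exists).
    db.create_collection("cappedColl",
                         make_document(kvp("capped", true), kvp("size", 4096)));

    auto session = client.start_session();
    session.start_transaction();
    try {
        // At the time of SERVER-40321 this insert was allowed on a plain
        // replica set but rejected through mongos; SERVER-40684 later banned
        // capped-collection writes in all transactions.
        db["cappedColl"].insert_one(session, make_document(kvp("msg", "hello")));
        session.commit_transaction();
    } catch (const std::exception& e) {
        session.abort_transaction();
        std::cerr << "transaction failed: " << e.what() << "\n";
    }
}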

Engineering Ticket Description:

Inserting a document into a capped collection acquires an exclusive (MODE_X) RESOURCE_METADATA lock that is held until the end of the write unit of work (WUOW):

if (_needCappedLock) {
    // X-lock the metadata resource for this capped collection until the end of the WUOW. This
    // prevents the primary from executing with more concurrency than secondaries.
    // See SERVER-21646.
    Lock::ResourceLock heldUntilEndOfWUOW{
        opCtx->lockState(), ResourceId(RESOURCE_METADATA, _ns.ns()), MODE_X};
}
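
As a rough analogy (this is a toy, not MongoDB's lock manager), the effect of that X-lock is a per-collection mutex held for the entire write unit of work, so capped inserts on the primary are serialized the way oplog application serializes them on secondaries:

#include <mutex>
#include <string>
#include <unordered_map>

// Toy per-namespace "metadata lock" registry.
class ToyMetadataLocks {
public:
    std::mutex& forCollection(const std::string& ns) {
        std::lock_guard<std::mutex> lk(_mapMutex);
        return _locks[ns];  // default-constructs the mutex on first use
    }

private:
    std::mutex _mapMutex;
    std::unordered_map<std::string, std::mutex> _locks;
};

void cappedInsert(ToyMetadataLocks& locks, const std::string& ns) {
    // Analogous to the X-lock above: exclusive for the whole unit of work,
    // so two concurrent capped inserts into the same collection cannot
    // interleave their document allocations and deletions.
    std::lock_guard<std::mutex> wuow(locks.forCollection(ns));
    // ... perform the insert, delete overflow documents, commit ...
}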

However, the invariant in LockerImpl::saveLockStateAndUnlock(), which runs when a prepared transaction's locks are yielded, asserts that metadata locks never need to be saved.

// We should never have to save and restore metadata locks.
invariant(RESOURCE_DATABASE == resId.getType() || RESOURCE_COLLECTION == resId.getType() ||
          (RESOURCE_GLOBAL == resId.getType() && isSharedLockMode(it->mode)) ||
          (resourceIdReplicationStateTransitionLock == resId && it->mode == MODE_IX));
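
A toy reproduction of the conflict (simplified types, not MongoDB code): the capped insert leaves a metadata lock in the locker's table, and yielding the prepared transaction's locks asserts that no such lock is held.

#include <cassert>
#include <vector>

enum class ResourceType { Global, Database, Collection, Metadata };

struct HeldLock {
    ResourceType type;
};

// Mirrors the shape of LockerImpl::saveLockStateAndUnlock(): yielding
// assumes only database/collection (and certain global) locks are held,
// so a metadata lock acquired by a capped insert trips the assertion.
void saveLockStateAndUnlock(std::vector<HeldLock>& held) {
    for (const auto& lock : held) {
        assert(lock.type == ResourceType::Database ||
               lock.type == ResourceType::Collection ||
               lock.type == ResourceType::Global);  // Metadata fails here
    }
    held.clear();  // "yield" everything
}

int main() {
    std::vector<HeldLock> held{{ResourceType::Collection},
                               {ResourceType::Metadata}};  // from a capped insert
    saveLockStateAndUnlock(held);  // aborts, like the invariant failure below
}

Stepping down a primary that holds such a prepared transaction follows exactly this path (stepDown -> yieldLocksForPreparedTransactions -> saveLockStateAndUnlock), producing the trace below: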


Thread 34 "conn2" received signal SIGTRAP, Trace/breakpoint trap.
[Switching to Thread 0x7f7917065700 (LWP 32518)]
0x00007f792b992727 in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
(gdb) bt
#0  0x00007f792b992727 in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
#1  0x0000556d7486172b in mongo::breakpoint () at src/mongo/util/debugger.cpp:75
#2  0x0000556d72b0b177 in mongo::invariantFailed (expr=expr@entry=0x556d749e8a38 "RESOURCE_DATABASE == resId.getType() || RESOURCE_COLLECTION == resId.getType() || (RESOURCE_GLOBAL == resId.getType() && isSharedLockMode(it->mode)) || (resourceIdReplicationStateTransitionLock == res"..., file=file@entry=0x556d749e8670 "src/mongo/db/concurrency/lock_state.cpp", line=line@entry=716) at src/mongo/util/assert_util.cpp:102
#3  0x0000556d72adf972 in mongo::invariantWithLocation<bool> (testOK=<optimized out>, line=716, file=0x556d749e8670 "src/mongo/db/concurrency/lock_state.cpp", expr=0x556d749e8a38 "RESOURCE_DATABASE == resId.getType() || RESOURCE_COLLECTION == resId.getType() || (RESOURCE_GLOBAL == resId.getType() && isSharedLockMode(it->mode)) || (resourceIdReplicationStateTransitionLock == res"...) at src/mongo/util/invariant.h:64
#4  mongo::LockerImpl::saveLockStateAndUnlock (this=0x556d7c293b00, stateOut=0x556d7c28ea20) at src/mongo/db/concurrency/lock_state.cpp:714
#5  0x0000556d73c73695 in mongo::TransactionParticipant::TxnResources::TxnResources (this=0x7f7917062500, wl=..., opCtx=0x556d7cedf180, stashStyle=<optimized out>) at /opt/mongodbtoolchain/stow/gcc-v3.zr9/include/c++/8.2.0/bits/unique_ptr.h:342
#6  0x0000556d73c74d11 in mongo::TransactionParticipant::Participant::refreshLocksForPreparedTransaction (this=this@entry=0x7f79170625c8, opCtx=opCtx@entry=0x556d7cedf180, yieldLocks=yieldLocks@entry=true) at src/mongo/util/concurrency/with_lock.h:72
#7  0x0000556d7333dedf in mongo::<lambda(mongo::OperationContext*, const SessionToKill&)>::operator() (__closure=<optimized out>, session=..., killerOpCtx=0x556d7cedf180) at src/mongo/db/kill_sessions_local.cpp:195
#8  std::_Function_handler<void(mongo::OperationContext*, const mongo::SessionCatalog::SessionToKill&), mongo::yieldLocksForPreparedTransactions(mongo::OperationContext*)::<lambda(mongo::OperationContext*, const SessionToKill&)> >::_M_invoke(const std::_Any_data &, mongo::OperationContext *&&, const mongo::SessionCatalog::SessionToKill &) (__functor=..., __args#0=<optimized out>, __args#1=...) at /opt/mongodbtoolchain/stow/gcc-v3.zr9/include/c++/8.2.0/bits/std_function.h:297
#9  0x0000556d7333e3b1 in std::function<void (mongo::OperationContext*, mongo::SessionCatalog::SessionToKill const&)>::operator()(mongo::OperationContext*, mongo::SessionCatalog::SessionToKill const&) const (__args#1=..., __args#0=<optimized out>, this=0x7f7917062a00) at /opt/mongodbtoolchain/stow/gcc-v3.zr9/include/c++/8.2.0/bits/std_function.h:682
#10 mongo::(anonymous namespace)::killSessionsAction(mongo::OperationContext *, const mongo::SessionKiller::Matcher &, const std::function<bool(const mongo::ObservableSession&)> &, const std::function<void(mongo::OperationContext*, const mongo::SessionCatalog::SessionToKill&)> &, mongo::ErrorCodes::Error) (opCtx=0x556d7cedf180, matcher=..., filterFn=..., killSessionFn=..., reason=<optimized out>) at src/mongo/db/kill_sessions_local.cpp:80
#11 0x0000556d7333f20c in mongo::yieldLocksForPreparedTransactions (opCtx=<optimized out>) at /opt/mongodbtoolchain/stow/gcc-v3.zr9/include/c++/8.2.0/bits/unique_ptr.h:342
#12 0x0000556d72ecebb0 in mongo::repl::ReplicationCoordinatorImpl::stepDown (this=0x556d786ee680, opCtx=<optimized out>, force=<optimized out>, waitTime=..., stepdownTime=...) at src/mongo/db/repl/replication_coordinator_impl.cpp:2032
#13 0x0000556d72e89e6e in mongo::repl::CmdReplSetStepDown::run (this=<optimized out>, opCtx=0x556d7cf46180, cmdObj=..., result=...) at src/mongo/util/duration.h:227
#14 0x0000556d740c9294 in mongo::BasicCommand::Invocation::run (this=0x556d78600840, opCtx=0x556d7cf46180, result=<optimized out>) at src/mongo/db/commands.cpp:592
#15 0x0000556d72ff7032 in mongo::(anonymous namespace)::runCommandImpl (sessionOptions=..., extraFieldsBuilder=0x7f79170632c0, behaviors=..., startOperationTime=..., replyBuilder=0x556d7cf105d0, request=..., invocation=<optimized out>, opCtx=<optimized out>) at src/mongo/db/service_entry_point_common.cpp:479
#16 mongo::(anonymous namespace)::execCommandDatabase (opCtx=<optimized out>, command=0x556d75822840 <mongo::repl::cmdReplSetStepDown>, request=..., replyBuilder=<optimized out>, behaviors=...) at src/mongo/db/service_entry_point_common.cpp:818
#17 0x0000556d72ff7ebe in mongo::(anonymous namespace)::<lambda()>::operator()(void) const (__closure=0x7f7917063ce0) at /opt/mongodbtoolchain/stow/gcc-v3.zr9/include/c++/8.2.0/bits/unique_ptr.h:342
#18 0x0000556d72ff8790 in mongo::(anonymous namespace)::receivedCommands (behaviors=..., message=..., opCtx=<optimized out>) at src/mongo/db/service_entry_point_common.cpp:905
#19 mongo::ServiceEntryPointCommon::handleRequest (opCtx=0x556d7cf46180, m=..., behaviors=...) at src/mongo/db/service_entry_point_common.cpp:1249
#20 0x0000556d72fe755c in mongo::ServiceEntryPointMongod::handleRequest (this=<optimized out>, opCtx=<optimized out>, m=...) at src/mongo/db/service_entry_point_common.h:59
...




 Comments   
Comment by Kay Kim (Inactive) [ 16/Jul/19 ]

Work was done as part of DOCS-12701

Comment by Dianna Hohensee (Inactive) [ 16/Jul/19 ]

I think the story goes that I did SERVER-40321 to ban capped collections from sharded transactions, while at the time still allowing capped collections in replica set transactions. Then I spun off a ticket, SERVER-40684, to test capped collections in replica set transactions, which instead ended up banning capped collections in replica set transactions (all MongoDB transactions, technically). It looks like DOCS-12701, associated with SERVER-40684, is already complete. So I agree, this DOCS ticket is now a no-op.
