[SERVER-17265] thread convoys from WiredTigerRecoveryUnit::registerChange Created: 12/Feb/15  Updated: 26/Sep/15  Resolved: 26/Sep/15

Status: Closed
Project: Core Server
Component/s: Storage, WiredTiger
Affects Version/s: 3.0.0-rc8
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Mark Callaghan Assignee: Unassigned
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
duplicates SERVER-15192 Make all logOp listeners rollback-safe Closed
Related
related to SERVER-17250 logOp rollback when legacy insert cre... Closed
Operating System: ALL
Steps To Reproduce:

See the files I uploaded for https://jira.mongodb.org/browse/SERVER-17141

Sprint: Quint 9 09/18/15
Participants:

 Description   

This occurs with the WiredTiger btree with 10 user threads. I see frequent cases where most threads are stalled in pthread_mutex_timedlock, which appears to be in g++/STL code. AFAIK, the STL has a facility that caches allocations, which the environment variable GLIBCXX_FORCE_NEW=1 might disable.

Can WiredTiger do something to reduce pressure on the STL allocator from the use of WiredTigerRecoveryUnit::_changes, which is:
typedef OwnedPointerVector<Change> Changes;
Changes _changes;
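
A minimal sketch of the pattern (stand-in names, not the actual server code, and the mitigation comment is speculative): each registered change is a separate heap object and the vector regrows as it fills, so every insert goes through the global allocator at least once.

#include <vector>

struct Change {
    virtual ~Change() {}
    virtual void commit() = 0;
    virtual void rollback() = 0;
};

struct SizeAdjust : Change {  // stand-in for the change registered by _increaseDataSize
    void commit() {}
    void rollback() {}
};

class RecoveryUnitSketch {  // stand-in for WiredTigerRecoveryUnit
public:
    // Called on every insert via _increaseDataSize in the stack below.
    void registerChange(Change* change) {
        _changes.push_back(change);  // may reallocate the backing array
    }
    ~RecoveryUnitSketch() {
        for (Change* c : _changes) delete c;  // OwnedPointerVector owns its elements
    }
private:
    std::vector<Change*> _changes;  // stand-in for OwnedPointerVector<Change>
};

int main() {
    RecoveryUnitSketch ru;
    for (int i = 0; i < 1000; ++i)
        ru.registerChange(new SizeAdjust);  // one heap allocation per call, plus vector regrowth
    // Speculative mitigation: reserve() a small capacity up front, or keep a few
    // changes in an inline buffer, so common transactions avoid the regrowth.
    return 0;
}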

I have yet to notice this problem in RocksDB, but maybe pressure on the allocator from other code triggers the problems for WiredTiger.

The common thread stack with 3.0 rc8 is:

__lll_timedlock_wait,pthread_mutex_timedlock,deallocate,deallocate,_M_deallocate,_M_emplace_back_aux<mongo::RecoveryUnit::Change*,push_back,push_back,mongo::WiredTigerRecoveryUnit::registerChange,mongo::WiredTigerRecordStore::_increaseDataSize,mongo::WiredTigerRecordStore::insertRecord,mongo::WiredTigerRecordStore::insertRecord,mongo::Collection::insertDocument,mongo::repl::(anonymous,mongo::repl::logOp,singleInsert,insertOne,mongo::WriteBatchExecutor::execOneInsert,mongo::WriteBatchExecutor::execInserts,mongo::WriteBatchExecutor::bulkExecute,mongo::WriteBatchExecutor::executeBatch,mongo::WriteCmd::run,mongo::_execCommand,mongo::Command::execCommand,mongo::_runCommands,runCommands,mongo::runQuery,receivedQuery,mongo::assembleResponse,mongo::MyMessageHandler::process,mongo::PortMessageServer::handleIncomingMsg,start_thread,clone

#0  0x00007f752d406d8c in __lll_timedlock_wait () from /foo/gcc-4.9-glibc-2.20/lib/libpthread.so.0
#1  0x00007f752d401abc in pthread_mutex_timedlock () from /foo/gcc-4.9-glibc-2.20/lib/libpthread.so.0
#2  0x0000000000f7265e in deallocate (this=0x7f752c585eee <__PRETTY_FUNCTION__.6239+5>, __p=<optimized out>) at /bar/third-party2/gcc/x/4.9.x/centos6-native/1317bc4/include/c++/4.9.x-google/ext/new_allocator.h:116
#3  deallocate (__a=..., __n=<optimized out>, __p=<optimized out>) at /bar/third-party2/gcc/x/4.9.x/centos6-native/1317bc4/include/c++/4.9.x-google/bits/alloc_traits.h:383
#4  _M_deallocate (this=0x7f752c585eee <__PRETTY_FUNCTION__.6239+5>, __n=<optimized out>, __p=<optimized out>) at /bar/third-party2/gcc/x/4.9.x/centos6-native/1317bc4/include/c++/4.9.x-google/bits/stl_vector.h:197
#5  _M_emplace_back_aux<mongo::RecoveryUnit::Change* const&> (this=0x7f752c585eee <__PRETTY_FUNCTION__.6239+5>) at /bar/third-party2/gcc/x/4.9.x/centos6-native/1317bc4/include/c++/4.9.x-google/bits/vector.tcc:475
#6  push_back (__x=<synthetic pointer>, this=0x7f752c585eee <__PRETTY_FUNCTION__.6239+5>) at /bar/third-party2/gcc/x/4.9.x/centos6-native/1317bc4/include/c++/4.9.x-google/bits/stl_vector.h:1049
#7  push_back (ptr=<optimized out>, this=0x7f752c585eee <__PRETTY_FUNCTION__.6239+5>) at src/mongo/base/owned_pointer_vector.h:85
#8  mongo::WiredTigerRecoveryUnit::registerChange (this=0x7f752c585e9e, change=<optimized out>) at src/mongo/db/storage/wiredtiger/wiredtiger_recovery_unit.cpp:176
#9  0x0000000000f6a1f2 in mongo::WiredTigerRecordStore::_increaseDataSize (this=0x7f752bbb5650, txn=<optimized out>, amount=2181) at src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp:987
#10 0x0000000000f6ff9e in mongo::WiredTigerRecordStore::insertRecord (this=0x7f752c3ec3b0, txn=0x7f73eba2b600, data=<optimized out>, len=0, enforceQuota=<optimized out>) at src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp:587
#11 0x0000000000f694be in mongo::WiredTigerRecordStore::insertRecord (this=0x7f752bbb5600, txn=txn@entry=0x7f752c3ef6c0, doc=doc@entry=0x7f752c3ec4d0, enforceQuota=<optimized out>) at src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp:617
#12 0x0000000000955c1e in mongo::Collection::insertDocument (this=0x7f752b82ea00, txn=txn@entry=0x7f752c3ef6c0, doc=doc@entry=0x7f752c3ec4d0, enforceQuota=enforceQuota@entry=false) at src/mongo/db/catalog/collection.cpp:180
#13 0x0000000000dbb9a5 in mongo::repl::(anonymous namespace)::_logOpOld (txn=0x7f752c3ef6c0, opstr=<optimized out>, ns=<optimized out>, logNS=<optimized out>, obj=..., o2=<optimized out>, bb=0x0, fromMigrate=false) at src/mongo/db/repl/oplog.cpp:336
#14 0x0000000000db8dc9 in mongo::repl::logOp (txn=0x7f752c3ef6c0, opstr=0x17965e9 "i", ns=0x7f751747e568 "iibench.purchases_index", obj=..., patt=0x0, b=0xc, b@entry=0x0, fromMigrate=false) at src/mongo/db/repl/oplog.cpp:380
#15 0x0000000000a6c7ce in singleInsert (result=0x7f752c3ec8c0, collection=<optimized out>, docToInsert=..., txn=0x7f752c3ef6c0) at src/mongo/db/commands/write_commands/batch_executor.cpp:1143
#16 insertOne (result=0x7f752c3ec8c0, state=0x7f752c3edea0) at src/mongo/db/commands/write_commands/batch_executor.cpp:1068
#17 mongo::WriteBatchExecutor::execOneInsert (this=this@entry=0x7f752c3ee360, state=state@entry=0x7f752c3edea0, error=error@entry=0x7f752c3ede80) at src/mongo/db/commands/write_commands/batch_executor.cpp:1109
#18 0x0000000000a6d682 in mongo::WriteBatchExecutor::execInserts (this=this@entry=0x7f752c3ee360, request=..., errors=errors@entry=0x7f752c3ee150) at src/mongo/db/commands/write_commands/batch_executor.cpp:882
#19 0x0000000000a6d784 in mongo::WriteBatchExecutor::bulkExecute (this=this@entry=0x7f752c3ee360, request=..., upsertedIds=upsertedIds@entry=0x7f752c3ee170, errors=errors@entry=0x7f752c3ee150) at src/mongo/db/commands/write_commands/batch_executor.cpp:764
#20 0x0000000000a6df75 in mongo::WriteBatchExecutor::executeBatch (this=this@entry=0x7f752c3ee360, request=..., response=response@entry=0x7f752c3ee3a0) at src/mongo/db/commands/write_commands/batch_executor.cpp:272
#21 0x0000000000a71b57 in mongo::WriteCmd::run (this=<optimized out>, txn=0x7f752c3ef6c0, dbName=..., cmdObj=..., options=<optimized out>, errMsg=..., result=..., fromRepl=false) at src/mongo/db/commands/write_commands/write_commands.cpp:147
#22 0x0000000000a956cc in mongo::_execCommand (txn=txn@entry=0x7f752c3ef6c0, c=c@entry=0x7f752bba0510, dbname=..., cmdObj=..., queryOptions=queryOptions@entry=0, errmsg=..., result=..., fromRepl=false) at src/mongo/db/dbcommands.cpp:1290
#23 0x0000000000a968ea in mongo::Command::execCommand (txn=txn@entry=0x7f752c3ef6c0, c=c@entry=0x7f752bba0510, queryOptions=queryOptions@entry=0, cmdns=cmdns@entry=0x7f7517426814 "iibench.$cmd", cmdObj=..., result=..., fromRepl=false) at src/mongo/db/dbcommands.cpp:1506
#24 0x0000000000a9787d in mongo::_runCommands (txn=0x7f752c3ef6c0, txn@entry=0x7c9e2b <free+379>, ns=0x7f7517426814 "iibench.$cmd", _cmdobj=..., b=..., anObjBuilder=..., fromRepl=fromRepl@entry=false, queryOptions=0) at src/mongo/db/dbcommands.cpp:1578
#25 0x0000000000cedb0a in runCommands (fromRepl=false, queryOptions=<optimized out>, anObjBuilder=..., b=..., curop=..., jsobj=..., ns=<optimized out>, txn=0x7c9e2b <free+379>) at src/mongo/db/query/find.cpp:137
#26 mongo::runQuery (txn=0x7c9e2b <free+379>, txn@entry=0x7f752c3ef6c0, m=..., q=..., nss=..., curop=..., result=..., fromDBDirectClient=false) at src/mongo/db/query/find.cpp:606
#27 0x0000000000bbd408 in receivedQuery (fromDBDirectClient=false, m=..., dbresponse=..., c=..., txn=0x7f752c3ef6c0) at src/mongo/db/instance.cpp:220
#28 mongo::assembleResponse (txn=txn@entry=0x7f752c3ef6c0, m=..., dbresponse=..., remote=..., fromDBDirectClient=fromDBDirectClient@entry=false) at src/mongo/db/instance.cpp:403
#29 0x000000000081add3 in mongo::MyMessageHandler::process (this=<optimized out>, m=..., port=0x7f7521570880, le=0x7f7517422060) at src/mongo/db/db.cpp:206
#30 0x000000000116e85e in mongo::PortMessageServer::handleIncomingMsg (arg=0x7f7521570880) at src/mongo/util/net/message_server_port.cpp:229
#31 0x00007f752d3fe7c9 in start_thread () from /foo/gcc-4.9-glibc-2.20/lib/libpthread.so.0
#32 0x00007f752c4fd8ad in clone () from /foo/gcc-4.9-glibc-2.20/lib/libc.so.6



 Comments   
Comment by Ramon Fernandez Marina [ 26/Sep/15 ]

Hi mdcallag, we believe SERVER-15192 should have addressed this issue, so I'm going to close this ticket. If this behavior appears in your testing again please let us know and we'll reopen the ticket (or feel free to open a new one).

Thanks,
Ramón.

Comment by Ramon Fernandez Marina [ 15/May/15 ]

Hi mdcallag, have you had a chance to re-test with a version that includes SERVER-15192 like 3.1.0 or newer?

Thanks,
Ramón.

Comment by Andy Schwerin [ 22/Mar/15 ]

Backporting the logop rollback safety changes from SERVER-15192 would probably eliminate this problem.

Comment by Mark Callaghan [ 22/Mar/15 ]

Running the test for 3.0.1. The problem is still there for the zlib and snappy block compressors. I am using the gcc 4.9.x compiler. PMP shows the problem call stack as:

__lll_timedlock_wait,pthread_mutex_timedlock,deallocate,deallocate,_M_deallocate,_M_emplace_back_aux<mongo::RecoveryUnit::Change*,push_back,push_back,mongo::WiredTigerRecoveryUnit::registerChange,mongo::WiredTigerRecordStore::_increaseDataSize,mongo::WiredTigerRecordStore::insertRecord,mongo::WiredTigerRecordStore::insertRecord,mongo::Collection::insertDocument,mongo::repl::(anonymous,mongo::repl::logOp,singleInsert,insertOne,mongo::WriteBatchExecutor::execOneInsert,mongo::WriteBatchExecutor::execInserts,mongo::WriteBatchExecutor::bulkExecute,mongo::WriteBatchExecutor::executeBatch,mongo::WriteCmd::run,mongo::_execCommand,mongo::Command::execCommand,mongo::_runCommands,runCommands,mongo::runQuery,receivedQuery,mongo::assembleResponse,mongo::MyMessageHandler::process,mongo::PortMessageServer::handleIncomingMsg,start_thread,clone

Comment by Mark Callaghan [ 17/Feb/15 ]

I get these thread stacks after changing the build from -O3 to -O2 or -O1, and it repeats for tcmalloc and glibc malloc. It doesn't reproduce when -fno-omit-frame-pointer is used with -O2; I get different stacks with that, still thread convoys, just elsewhere.

Comment by Mark Callaghan [ 17/Feb/15 ]

Given this comment, I think this task should be closed. I am sure there is a lot of mutex contention that could be made better, but I don't have anything concrete in what I provided here. I will repeat the 4.9.x tests with a lower level of optimization to see if I can get a better stack trace.

I repeated the tests with 4.8.1 and:
1) 4.9.x is about 5% faster, but our 4.9.x is 4.9 with patches, not something you can try.
2) The common stall with the 4.8.1 toolchain is different; see below. Both are stalls from mutex contention.

__lll_timedlock_wait,_L_timedlock_69,pthread_mutex_timedlock,timed_lock,boost::timed_mutex::timed_lock<boost::date_time::subsecond_duration<boost::posix_time::time_duration,,timed_lock<boost::date_time::subsecond_duration<boost::posix_time::time_duration,,mongo::WiredTigerRecordStore::cappedDeleteAsNeeded,mongo::WiredTigerRecordStore::insertRecord,mongo::WiredTigerRecordStore::insertRecord,mongo::Collection::insertDocument,mongo::repl::(anonymous,mongo::repl::logOp,singleInsert,insertOne,mongo::WriteBatchExecutor::execOneInsert,mongo::WriteBatchExecutor::execInserts,mongo::WriteBatchExecutor::bulkExecute,mongo::WriteBatchExecutor::executeBatch,mongo::WriteCmd::run,mongo::_execCommand,mongo::Command::execCommand,mongo::_runCommands,runCommands,mongo::runQuery,receivedQuery,mongo::assembleResponse,mongo::MyMessageHandler::process,mongo::PortMessageServer::handleIncomingMsg,start_thread,clone

Comment by Mark Callaghan [ 12/Feb/15 ]

Reproduces with tcmalloc. Next I will try tcmalloc and a gcc 4.8.1 toolchain. I have been using 4.9.x.

Comment by Mark Callaghan [ 12/Feb/15 ]

I will repeat the tests using a binary linked with tcmalloc. Thanks for your feedback.

Comment by Andy Schwerin [ 12/Feb/15 ]

The deallocate method of the allocator in new_allocator.h just delegates to operator delete, which in turn delegates to your malloc/free implementation. Do any paths through jemalloc's free() implementation ever take a mutex? The thread-per-connection architecture is pretty cruel to allocators with per-thread caches when the connection count gets even modestly high; we've had to do some tuning on tcmalloc to deal with similar issues.
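
A paraphrase of that delegation chain (a sketch, not the libstdc++ source): std::allocator is new_allocator on this toolchain, its deallocate forwards to operator delete, and operator delete forwards to free() from whatever malloc is linked in (jemalloc here), so any lock taken inside free() surfaces under the vector's _M_deallocate frame.

#include <cstddef>
#include <new>

template <typename T>
struct new_allocator_sketch {  // simplified stand-in for __gnu_cxx::new_allocator
    T* allocate(std::size_t n) {
        return static_cast<T*>(::operator new(n * sizeof(T)));  // ends up in malloc
    }
    void deallocate(T* p, std::size_t) {
        ::operator delete(p);  // ends up in free; contention here is contention in the malloc library
    }
};

int main() {
    new_allocator_sketch<int> a;
    int* p = a.allocate(4);
    a.deallocate(p, 4);
    return 0;
}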

Comment by Andy Schwerin [ 12/Feb/15 ]

Yeah, my comment about the relation to SERVER-17250 is wrong. While we'll be removing the registration of the "RollbackPreventer", that's not the stack seen in this ticket; my mistake.

Comment by Andrew Morrow (Inactive) [ 12/Feb/15 ]

My understanding (and a brief look through the GCC sources seems to confirm it) is that the GLIBCXX_FORCE_NEW facility only applies to the extension allocators defined in "ext/mt_allocator.h" and "ext/pool_allocator.h", which ship with libstdc++. The call stack that you have provided shows that you are calling into new_allocator.h, which does not reference GLIBCXX_FORCE_NEW.

Typically, the choice of default allocator is made at GCC configure time, via the --enable-libstdcxx-allocator[=KIND] flag, but it defaults to 'auto', which I believe is the same as 'new'.
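
For illustration (a hypothetical snippet, not from the server), the environment variable only matters for containers that explicitly opt in to one of those extension allocators; std::allocator ignores it:

#include <vector>
#include <ext/pool_allocator.h>  // __gnu_cxx::__pool_alloc, one of the allocators that honors GLIBCXX_FORCE_NEW

int main() {
    // std::allocator: plain operator new/delete; GLIBCXX_FORCE_NEW has no effect here.
    std::vector<int> plain(1024, 0);

    // Extension pool allocator: caches freed blocks, unless GLIBCXX_FORCE_NEW=1 is set
    // in the environment, in which case it falls through to new/delete every time.
    std::vector<int, __gnu_cxx::__pool_alloc<int> > pooled(1024, 0);

    return plain.size() == pooled.size() ? 0 : 1;
}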

Comment by Mark Callaghan [ 12/Feb/15 ]

FYI, I compiled with gcc 4.9, but don't know the minor version:
g++ --version --> g++ (GCC) 4.9.x-google 20140827 (prerelease)

This also uses glibc 2.20 and jemalloc. But the problem stacks don't show jemalloc, so I think this is g++/STL.

I am also far from an expert with modern C++ toolchains, but I am seeking advice from co-workers, including the jemalloc author.

Comment by Andy Schwerin [ 12/Feb/15 ]

The solution planned for SERVER-17250 will greatly mitigate this particular symptom, because only writes to the admin.system.roles collection and writes to in-flight migrating chunks will register change listeners. I doubt that will make it into 3.0.0, but it ought to be written and tested in time for one of the early dot releases.

Comment by Andy Schwerin [ 12/Feb/15 ]

acm, I'm surprised to hear that libstdc++ might still have its own caching allocator.

Mark, I think new vanilla gcc toolchains don't have a caching allocator inside the STL anymore. Can you find out if your toolchain is special in this regard? Also, are you using the bundled tcmalloc or another malloc implementation?

Comment by Mark Callaghan [ 12/Feb/15 ]

For the data above, I took 23 thread stacks over 15 minutes, and 5 of them had this stall.

Comment by Mark Callaghan [ 12/Feb/15 ]

The other thread stacks from that example don't show anything interesting. Perhaps the lock holder has moved on and the waiters have yet to wake and run.

      1 sigwait,mongo::(anonymous,boost::(anonymous,start_thread,clone
      1 select,mongo::Listener::initAndListen,_initAndListen,mongo::initAndListen,mongoDbMain,main
      1 recv,mongo::Socket::_recv,mongo::Socket::unsafe_recv,mongo::Socket::recv,mongo::MessagingPort::recv,mongo::PortMessageServer::handleIncomingMsg,start_thread,clone
      1 pthread_cond_wait@@GLIBC_2.3.2,wait<boost::unique_lock<boost::timed_mutex>,mongo::DeadlineMonitor<mongo::V8Scope>::deadlineMonitorThread,boost::(anonymous,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__sweep_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__log_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__log_close_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__evict_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__ckpt_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,timed_wait<boost::unique_lock<boost::timed_mutex>,timed_wait<boost::unique_lock<boost::timed_mutex>,,mongo::RangeDeleter::doWork,boost::(anonymous,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,boost::condition_variable_any::timed_wait<boost::unique_lock<boost::timed_mutex>,timed_wait<boost::unique_lock<boost::timed_mutex>,,timed_wait<boost::unique_lock<boost::timed_mutex>,,mongo::(anonymous,mongo::BackgroundJob::jobBody,boost::(anonymous,start_thread,clone
      1 nanosleep,mongo::sleepsecs,mongo::TTLMonitor::run,mongo::BackgroundJob::jobBody,boost::(anonymous,start_thread,clone
      1 nanosleep,mongo::sleepsecs,mongo::repl::replMasterThread,boost::(anonymous,start_thread,clone
      1 nanosleep,mongo::sleepsecs,mongo::ClientCursorMonitor::run,mongo::BackgroundJob::jobBody,boost::(anonymous,start_thread,clone
      1 __memset_sse2,__wt_realloc,__wt_buf_grow_worker,__wt_buf_grow,__wt_buf_initsize,__wt_bt_read,__wt_cache_read,__wt_page_in_func,__wt_page_swap_func,__wt_tree_walk,__wt_btcur_next,__curfile_next,mongo::WiredTigerRecordStore::cappedDeleteAsNeeded_inlock,_deleteExcessDocuments,mongo::(anonymous,mongo::BackgroundJob::jobBody,boost::(anonymous,start_thread,clone

And another example using PoorMansProfiler (PMP) style output, where the lock holder isn't obvious:

     10 __lll_timedlock_wait,pthread_mutex_timedlock,deallocate,deallocate,_M_deallocate,_M_emplace_back_aux<mongo::RecoveryUnit::Change*,push_back,push_back,mongo::WiredTigerRecoveryUnit::registerChange,mongo::WiredTigerRecordStore::_increaseDataSize,mongo::WiredTigerRecordStore::insertRecord,mongo::WiredTigerRecordStore::insertRecord,mongo::Collection::insertDocument,mongo::repl::(anonymous,mongo::repl::logOp,singleInsert,insertOne,mongo::WriteBatchExecutor::execOneInsert,mongo::WriteBatchExecutor::execInserts,mongo::WriteBatchExecutor::bulkExecute,mongo::WriteBatchExecutor::executeBatch,mongo::WriteCmd::run,mongo::_execCommand,mongo::Command::execCommand,mongo::_runCommands,runCommands,mongo::runQuery,receivedQuery,mongo::assembleResponse,mongo::MyMessageHandler::process,mongo::PortMessageServer::handleIncomingMsg,start_thread,clone
      1 __wt_page_refp,__wt_tree_walk,__wt_btcur_next,__curfile_next,mongo::WiredTigerRecordStore::cappedDeleteAsNeeded_inlock,_deleteExcessDocuments,mongo::(anonymous,mongo::BackgroundJob::jobBody,boost::(anonymous,start_thread,clone
      1 __wt_page_in_func,__wt_page_swap_func,__wt_tree_walk,__evict_walk_file,__evict_walk,__evict_lru_walk,__evict_pass,__evict_server,start_thread,clone
      1 sigwait,mongo::(anonymous,boost::(anonymous,start_thread,clone
      1 select,mongo::Listener::initAndListen,_initAndListen,mongo::initAndListen,mongoDbMain,main
      1 recv,mongo::Socket::_recv,mongo::Socket::unsafe_recv,mongo::Socket::recv,mongo::MessagingPort::recv,mongo::PortMessageServer::handleIncomingMsg,start_thread,clone
      1 pthread_cond_wait@@GLIBC_2.3.2,wait<boost::unique_lock<boost::timed_mutex>,mongo::DeadlineMonitor<mongo::V8Scope>::deadlineMonitorThread,boost::(anonymous,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__sweep_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__log_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__log_close_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,__wt_cond_wait,__ckpt_server,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,timed_wait<boost::unique_lock<boost::timed_mutex>,timed_wait<boost::unique_lock<boost::timed_mutex>,,mongo::RangeDeleter::doWork,boost::(anonymous,start_thread,clone
      1 pthread_cond_timedwait@@GLIBC_2.3.2,boost::condition_variable_any::timed_wait<boost::unique_lock<boost::timed_mutex>,timed_wait<boost::unique_lock<boost::timed_mutex>,,timed_wait<boost::unique_lock<boost::timed_mutex>,,mongo::(anonymous,mongo::BackgroundJob::jobBody,boost::(anonymous,start_thread,clone
      1 nanosleep,mongo::sleepsecs,mongo::TTLMonitor::run,mongo::BackgroundJob::jobBody,boost::(anonymous,start_thread,clone
      1 nanosleep,mongo::sleepsecs,mongo::repl::replMasterThread,boost::(anonymous,start_thread,clone
      1 nanosleep,mongo::sleepsecs,mongo::ClientCursorMonitor::run,mongo::BackgroundJob::jobBody,boost::(anonymous,start_thread,clone
