|
Author: David Storch <david.storch@10gen.com> (github: dstorch)
Message: SERVER-31139 Fix invariant failure caused by text index plan enumeration bug.
(cherry picked from commit 026b5f6f38c883c48d063ac8ee736f7330227281)
Branch: v3.4
https://github.com/mongodb/mongo/commit/bd162a54bae444d280b9624bed2e0dd067f4bc0e
|
|
Author: David Storch <david.storch@10gen.com> (github: dstorch)
Message: SERVER-31139 Fix invariant failure caused by text index plan enumeration bug.
Branch: master
https://github.com/mongodb/mongo/commit/026b5f6f38c883c48d063ac8ee736f7330227281
|
|
It looks like the problem is in the PlanEnumerator's logic for handling multikey indexes. Text indexes are considered multikey, since there is an index key for every token in the source text. The multikey analysis code is failing to assign one of the predicates over the leading fields of the index because, in general, two predicates which share a path prefix cannot both use a multikey index. Some of the details of this behavior are documented here:
https://docs.mongodb.com/manual/core/multikey-index-bounds/#compound-index-on-fields-from-an-array-of-embedded-documents
The prefix fields of a text index cannot contain arrays and thus cannot be multikey. Therefore, the fact that we are applying the multikey analysis logic for this query seems to be the root cause of the problem.
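To illustrate the prefix-sharing rule that the multikey analysis enforces, here is a minimal Python sketch. This is purely illustrative (the real logic lives in the PlanEnumerator's C++ code and is considerably more involved); the function names are made up for this example:

```python
def first_component(path):
    """Leading component of a dotted field path, e.g. 'a.b.c' -> 'a'."""
    return path.split(".", 1)[0]

def share_path_prefix(path_a, path_b):
    """True if two field paths begin with the same path component.

    For a multikey index, two predicates whose paths share a prefix may
    refer to different elements of the same array, so the planner cannot
    compound their bounds; it assigns only one of them to the index.
    """
    return first_component(path_a) == first_component(path_b)

# The two leading fields of the text index in this ticket share the
# prefix "senderOildexId", so when the index is (incorrectly) treated
# as multikey over those fields, one predicate goes unassigned:
print(share_path_prefix("senderOildexId.companyId",
                        "senderOildexId.businessUnit"))  # True
print(share_path_prefix("senderOildexId.companyId",
                        "receiverOildexId.companyId"))   # False
```

Since the non-text prefix fields of a text index can never actually be multikey, this restriction should not apply to them, which is the bug.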
|
|
Thanks for the update, Dave. I believe I made the same typo in the original index noted in the issue submission (the getIndexes() call shows the correct index), and I may have inadvertently passed that along to you. My apologies if that was the case.
Glad you were able to reproduce the error. We do have a workaround for now by indexing the entire nested document, but it will help to use the keys themselves in the index when it becomes available.
Mike
|
|
Hi mcoleman@oildex.com,
Thanks for the additional information, this was very helpful. There was a misspelling in my repro script which I tracked down, and I am now able to reproduce consistently. I'll post an update here when I have more information about the root cause, but I shouldn't need any more information from you.
This issue affects all stable releases in the 2.6.x, 3.0.x, 3.2.x, and 3.4.x release series. The code with the defect was new in 2.6.0, so that is likely the oldest affected version. Although text indexes existed in older versions, the $search query operator for text was new in 2.6, so this is a "beginning of time" bug as opposed to a regression.
Best,
Dave
|
|
Hi Dave, thanks for the update on this issue. Here are some answers to the above questions:
1. Does this issue reproduce consistently when you run this query, or did you see this crash once and are no longer able to reproduce?
We've hit this same error on two different mongod instances: once in our development environment and once in a sandbox instance. We first hit the error using mongo version 3.4.3 in both cases. I upgraded our sandbox instance to the most recent version, 3.4.9, and reproduced the error again. We have tried issuing different search values for that example query, each time with the same database crash.
2. If the query no longer causes the crash, would you be able to provide the output of running it with .explain("allPlansExecution")?
I am unable to execute the example query successfully (with and without the explain plan).
3. Could you provide the output of db.SenderProperties.getIndexes()?
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "mdb-cdex.SenderProperties"
    },
    {
        "v" : 2,
        "unique" : true,
        "key" : {
            "checksum" : 1
        },
        "name" : "checksum_1",
        "ns" : "mdb-cdex.SenderProperties"
    },
    {
        "v" : 2,
        "key" : {
            "senderOildexId.companyId" : 1,
            "senderOildexId.businessUnit" : 1,
            "receiverOildexId.companyId" : 1,
            "receiverOildexId.businessUnit" : 1,
            "_fts" : "text",
            "_ftsx" : 1
        },
        "name" : "sender_property_name_number_text",
        "ns" : "mdb-cdex.SenderProperties",
        "default_language" : "none",
        "weights" : {
            "propertyName" : 1,
            "propertyNumber" : 1
        },
        "language_override" : "language",
        "textIndexVersion" : 3
    }
]
|
4. Have you configured any command line parameters or setParameters that might be relevant here, especially any configuration having to do with the MongoDB query system?
No setParameters have been used in relation to this error.
5. Any other hints about how to reproduce this issue or relevant environment/configuration details?
We have only been able to produce the error through the standard mongo terminal connection, and through the 3TStudio query tool. I will attempt some additional tests today with various explain plans and let you know if I am able to get a successful return on the query.
|
|
Hi mcoleman@oildex.com,
The stack trace above indicates that an invariant() check in the planning code for $text queries was tripped. The error happened in the module that generates candidate query plans based on the query and the set of available indexes. Therefore, it seems like the kind of error that 1) does not depend on the contents of the collection, and 2) should reproduce easily given the same set of indexes and the same query. However, I could not reproduce it with the following script:
(function() {
    "use strict";

    db.SenderProperties.drop();

    assert.commandWorked(db.SenderProperties.createIndex({
        "senderOildexId.companyId": 1,
        "senderOildexId.businessUnit": 1,
        "receiverOildexId.companyId": 1,
        "receiverOildexId.businessUnit": 1,
        propertyNumber: "text",
        properyName: "text"
    }, {default_language: "none", name: "test"}));

    assert.writeOK(db.SenderProperties.insert({
        "_id": ObjectId("59bc4bdd77c83ac81ca5b784"),
        "senderOildexId": {"companyId": 22375, "businessUnit": 0},
        "receiverOildexId": {"companyId": 25, "businessUnit": 0},
        "propertyNumber": "123456789",
        "propertyName": "J L BOND 12345",
        "checksum": "74c65685e49ffd7ef479961b237f73e1"
    }));

    db.SenderProperties.find({
        "senderOildexId.companyId": 22375,
        "senderOildexId.businessUnit": 0,
        "receiverOildexId.companyId": 25,
        "receiverOildexId.businessUnit": 0,
        "$text": {"$search": "J L BOND"}
    }).itcount();
}());
|
In order to help me diagnose this issue, could you respond to the following:
- Does this issue reproduce consistently when you run this query, or did you see this crash once and are no longer able to reproduce?
- If the query no longer causes the crash, would you be able to provide the output of running it with .explain("allPlansExecution")? You can learn more about explain in this documentation page.
- Could you provide the output of db.SenderProperties.getIndexes()?
- Have you configured any command line parameters or setParameters that might be relevant here, especially any configuration having to do with the MongoDB query system?
- Any other hints about how to reproduce this issue or relevant environment/configuration details?
In the meantime, I will use the stack trace above to see if I can spot anything suspect in the code. Thanks in advance for your help on this!
Best,
Dave
|
|
The backtrace in the attached logs came from the 3.4.4 RHEL 6.2 build. I was able to parse the stack trace to get line numbers using these debug symbols: http://downloads.mongodb.org/linux/mongodb-linux-x86_64-rhel62-debugsymbols-3.4.4.tgz.
server31139 $ cat stacktrace.log | python /home/dstorch/mongo/buildscripts/mongosymb.py --symbolizer-path=/usr/bin/llvm-symbolizer-3.5 mongodb-linux-x86_64-rhel62-3.4.4/mongod.debug
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/util/stacktrace_posix.cpp:172:0: mongo::printStackTrace(std::ostream&)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/util/signal_handlers_synchronous.cpp:178:0: mongo::(anonymous namespace)::printSignalAndBacktrace(int)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/util/signal_handlers_synchronous.cpp:235:0: mongo::(anonymous namespace)::abruptQuit(int)
??:0:0: ??
??:0:0: ??
??:0:0: ??
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/util/assert_util.cpp:154:0: mongo::invariantFailed(char const*, char const*, unsigned int)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/planner_access.cpp:466:0: mongo::QueryPlannerAccess::finishTextNode(mongo::QuerySolutionNode*, mongo::IndexEntry const&)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/planner_access.cpp:566:0: mongo::QueryPlannerAccess::finishLeafNode(mongo::QuerySolutionNode*, mongo::IndexEntry const&)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/planner_access.cpp:530:0: mongo::QueryPlannerAccess::finishAndOutputLeaf(mongo::QueryPlannerAccess::ScanBuildingState*, std::vector<mongo::QuerySolutionNode*, std::allocator<mongo::QuerySolutionNode*> >*)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/planner_access.cpp:800:0: mongo::QueryPlannerAccess::processIndexScans(mongo::CanonicalQuery const&, mongo::MatchExpression*, bool, std::vector<mongo::IndexEntry, std::allocator<mongo::IndexEntry> > const&, mongo::QueryPlannerParams const&, std::vector<mongo::QuerySolutionNode*, std::allocator<mongo::QuerySolutionNode*> >*)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/planner_access.cpp:967:0: mongo::QueryPlannerAccess::buildIndexedAnd(mongo::CanonicalQuery const&, mongo::MatchExpression*, bool, std::vector<mongo::IndexEntry, std::allocator<mongo::IndexEntry> > const&, mongo::QueryPlannerParams const&)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/planner_access.cpp:1158:0: mongo::QueryPlannerAccess::buildIndexedDataAccess(mongo::CanonicalQuery const&, mongo::MatchExpression*, bool, std::vector<mongo::IndexEntry, std::allocator<mongo::IndexEntry> > const&, mongo::QueryPlannerParams const&)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/query_planner.cpp:802:0: mongo::QueryPlanner::plan(mongo::CanonicalQuery const&, mongo::QueryPlannerParams const&, std::vector<mongo::QuerySolution*, std::allocator<mongo::QuerySolution*> >*)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/get_executor.cpp:377:0: mongo::(anonymous namespace)::prepareExecution(mongo::OperationContext*, mongo::Collection*, mongo::WorkingSet*, std::unique_ptr<mongo::CanonicalQuery, std::default_delete<mongo::CanonicalQuery> >, unsigned long) [clone .constprop.502]
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/get_executor.cpp:467:0: mongo::getExecutor(mongo::OperationContext*, mongo::Collection*, std::unique_ptr<mongo::CanonicalQuery, std::default_delete<mongo::CanonicalQuery> >, mongo::PlanExecutor::YieldPolicy, unsigned long)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/query/get_executor.cpp:628:0: mongo::getExecutorFind(mongo::OperationContext*, mongo::Collection*, mongo::NamespaceString const&, std::unique_ptr<mongo::CanonicalQuery, std::default_delete<mongo::CanonicalQuery> >, mongo::PlanExecutor::YieldPolicy)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/commands/find_cmd.cpp:324:0: mongo::FindCmd::run(mongo::OperationContext*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, mongo::BSONObj&, int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, mongo::BSONObjBuilder&)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/commands/dbcommands.cpp:1524:0: mongo::Command::run(mongo::OperationContext*, mongo::rpc::RequestInterface const&, mongo::rpc::ReplyBuilderInterface*)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/commands/dbcommands.cpp:1438:0: mongo::Command::execCommand(mongo::OperationContext*, mongo::Command*, mongo::rpc::RequestInterface const&, mongo::rpc::ReplyBuilderInterface*)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/run_commands.cpp:73:0: mongo::runCommands(mongo::OperationContext*, mongo::rpc::RequestInterface const&, mongo::rpc::ReplyBuilderInterface*)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/instance.cpp:234:0: mongo::(anonymous namespace)::receivedCommand(mongo::OperationContext*, mongo::NamespaceString const&, mongo::Client&, mongo::DbResponse&, mongo::Message&) [clone .isra.110] [clone .constprop.124]
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/instance.cpp:613:0: mongo::assembleResponse(mongo::OperationContext*, mongo::Message&, mongo::DbResponse&, mongo::HostAndPort const&)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/service_entry_point_mongod.cpp:135:0: mongo::ServiceEntryPointMongod::_sessionLoop(std::shared_ptr<mongo::transport::Session> const&)
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/db/service_entry_point_mongod.cpp:103:0: operator()
/opt/mongodbtoolchain/v2/include/c++/5.4.0/functional:1871:0: std::_Function_handler<void (std::shared_ptr<mongo::transport::Session> const&), mongo::ServiceEntryPointMongod::startSession(std::shared_ptr<mongo::transport::Session>)::{lambda(std::shared_ptr<mongo::transport::Session> const&)#1}>::_M_invoke(std::_Any_data const&, std::shared_ptr<mongo::transport::Session> const&)
/opt/mongodbtoolchain/v2/include/c++/5.4.0/functional:2267:0: std::function<void (std::shared_ptr<mongo::transport::Session> const&)>::operator()(std::shared_ptr<mongo::transport::Session> const&) const
/data/mci/b9c1c92514a49d7446ed28ea59697136/src/src/mongo/transport/service_entry_point_utils.cpp:78:0: mongo::(anonymous namespace)::runFunc(void*)
??:0:0: ??
??:0:0: ??
|
|
|
Hi mcoleman@oildex.com,
Thank you for reporting this issue; we're investigating and will update this ticket when we know more.
Kind regards,
Kelsey
|
Generated at Thu Feb 08 04:26:07 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.