Core Server / SERVER-15012

Server crashes on indexed rooted $or queries using a 2d index


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Fixed
    • Affects Version/s: 2.6.4, 2.7.5
    • Fix Version/s: 2.6.5, 2.7.6
    • Component/s: Querying
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL

      Description

      Issue Status as of Sep 10, 2014

      ISSUE SUMMARY
      The query optimizer caches query plans for query shapes that have more than one viable plan.

      However, no plan cache data is generated for '2d' indices. When the winning solution for an indexed rooted $or query uses a '2d' index, MongoDB incorrectly assumes that cached data exists for it, and fails when that cached data is not found.

      This issue only affects indexed rooted $or queries which use a '2d' index. For example, a query like:

      db.foo.find({$or: [{a: {$geoWithin: {$box: [[0,0],[1,1]]}}, b: 1}, {a: {$geoWithin: {$box: [[0,0],[1,1]]}}, b: 1}]})

      fails when there's a '2d' index on a.

      USER IMPACT
      In the specific scenario described above, MongoDB aborts execution and must be restarted.

      WORKAROUNDS
      Rewrite the query so that it is not a rooted $or query. For example, the issue can be avoided by rewriting the query above as:

      db.foo.find({b: 1, $or: [{a: {$geoWithin: {$box: [[0,0],[1,1]]}}}, {a: {$geoWithin: {$box: [[0,0],[1,1]]}}}]})
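      The rewrite above can be performed mechanically: hoist any predicate that appears, with the same value, in every $or branch up to the top level, so the $or is no longer the root of the query. The helper below is a hypothetical illustration in Python (it is not part of MongoDB or its drivers); it hoists only plain equality predicates, leaving operator expressions such as $geoWithin inside the $or branches.

```python
def hoist_common_predicates(query):
    """Rewrite a rooted $or by hoisting predicates shared by all branches."""
    branches = query.get("$or")
    if not branches:
        return query

    # Candidate predicates: plain equality fields of the first branch.
    # Operator expressions like {"$geoWithin": ...} are left in place.
    def is_equality(value):
        return not (isinstance(value, dict)
                    and any(k.startswith("$") for k in value))

    common = {k: v for k, v in branches[0].items() if is_equality(v)}
    # Keep only predicates present with the same value in every other branch.
    for branch in branches[1:]:
        common = {k: v for k, v in common.items()
                  if k in branch and branch[k] == v}
    if not common:
        return query

    rewritten = dict(common)
    rewritten["$or"] = [{k: v for k, v in branch.items() if k not in common}
                        for branch in branches]
    return rewritten

geo = {"$geoWithin": {"$box": [[0, 0], [1, 1]]}}
query = {"$or": [{"a": geo, "b": 1}, {"a": geo, "b": 1}]}
# b: 1 is hoisted to the top level; the geo predicates stay inside $or,
# matching the workaround query shown above.
print(hoist_common_predicates(query))
```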

      AFFECTED VERSIONS
      MongoDB production releases up to 2.6.4 are affected by this issue.

      FIX VERSION
      The fix is included in the 2.6.5 production release.

      RESOLUTION DETAILS
      SubplanRunner must check for missing cache data and gracefully fall back on regular planning.
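      The pre-fix code path is visible in frame #5 of the backtrace below: pickBestPlan dereferences bestSoln->cacheData without checking for null. A minimal sketch of the fixed control flow, written in Python for illustration only (the actual fix is in the C++ subplanning code):

```python
class Solution:
    """Stand-in for a winning query solution; cache_data is None when the
    plan cache has no entry for it, as with '2d' index solutions."""
    def __init__(self, cache_data=None):
        self.cache_data = cache_data

def pick_best_plan(branch_solutions, plan_whole_query):
    """Pick a plan per $or branch, falling back when cache data is missing."""
    for soln in branch_solutions:
        if soln.cache_data is None:
            # Pre-fix code dereferenced cache_data here and aborted the
            # server; the fix falls back to regular whole-query planning.
            return plan_whole_query()
    # All branches have cache data: combine the subplans (details elided).
    return "combined-subplan"

# A branch won with a '2d' index solution, so it carries no cache data:
result = pick_best_plan([Solution(cache_data=None)],
                        plan_whole_query=lambda: "regular-plan")
print(result)  # regular-plan
```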

      Original description

      This bug was introduced in 2.7.1 and does not affect 2.4.11 or 2.6.4. The following debug info is from master (c40a73d76).

      Backtrace:

      mongod: src/third_party/boost-1.56.0/boost/smart_ptr/scoped_ptr.hpp:99: T* boost::scoped_ptr<T>::operator->() const [with T = mongo::SolutionCacheData]: Assertion `px != 0' failed.
       
      Program received signal SIGABRT, Aborted.
      [Switching to Thread 0x7ffff7fcd700 (LWP 11572)]
      0x00007ffff6c00f89 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
      56	../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
       
      (gdb) bt
      #0  0x00007ffff6c00f89 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
      #1  0x00007ffff6c04398 in __GI_abort () at abort.c:89
      #2  0x00007ffff6bf9e46 in __assert_fail_base (fmt=0x7ffff6d4b7b8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x1d9c584 "px != 0", 
          file=file@entry=0x1d9c548 "src/third_party/boost-1.56.0/boost/smart_ptr/scoped_ptr.hpp", line=line@entry=99, 
          function=function@entry=0x1d9c760 <boost::scoped_ptr<mongo::SolutionCacheData>::operator->() const::__PRETTY_FUNCTION__> "T* boost::scoped_ptr<T>::operator->() const [with T = mongo::SolutionCacheData]") at assert.c:92
      #3  0x00007ffff6bf9ef2 in __GI___assert_fail (assertion=0x1d9c584 "px != 0", file=0x1d9c548 "src/third_party/boost-1.56.0/boost/smart_ptr/scoped_ptr.hpp", line=99, 
          function=0x1d9c760 <boost::scoped_ptr<mongo::SolutionCacheData>::operator->() const::__PRETTY_FUNCTION__> "T* boost::scoped_ptr<T>::operator->() const [with T = mongo::SolutionCacheData]")
          at assert.c:101
      #4  0x00000000012c2499 in boost::scoped_ptr<mongo::SolutionCacheData>::operator-> (this=0x3670f30) at src/third_party/boost-1.56.0/boost/smart_ptr/scoped_ptr.hpp:99
      #5  0x00000000012c05d3 in mongo::SubplanStage::pickBestPlan (this=0x356c340) at src/mongo/db/exec/subplan.cpp:298
      #6  0x00000000012bf1ca in mongo::SubplanStage::make (txn=0x7ffff7fccb70, collection=0x2e77500, ws=0x2e60850, params=..., cq=0x36957a0, out=0x7ffff7fcbd60) at src/mongo/db/exec/subplan.cpp:94
      #7  0x0000000001454440 in mongo::(anonymous namespace)::prepareExecution (opCtx=0x7ffff7fccb70, collection=0x2e77500, ws=0x2e60850, canonicalQuery=0x36957a0, plannerOptions=0, rootOut=0x7ffff7fcbe50, 
          querySolutionOut=0x7ffff7fcbe58) at src/mongo/db/query/get_executor.cpp:302
      #8  0x0000000001455396 in mongo::getExecutor (txn=0x7ffff7fccb70, collection=0x2e77500, rawCanonicalQuery=0x36957a0, out=0x7ffff7fcc0e0, plannerOptions=0) at src/mongo/db/query/get_executor.cpp:406
      #9  0x000000000146b734 in mongo::newRunQuery (txn=0x7ffff7fccb70, m=..., q=..., curop=..., result=...) at src/mongo/db/query/new_find.cpp:598
      #10 0x000000000134ecfc in mongo::receivedQuery (txn=0x7ffff7fccb70, c=..., dbresponse=..., m=...) at src/mongo/db/instance.cpp:263
      #11 0x000000000134fdf9 in mongo::assembleResponse (txn=0x7ffff7fccb70, m=..., dbresponse=..., remote=...) at src/mongo/db/instance.cpp:437
      #12 0x000000000108a923 in mongo::MyMessageHandler::process (this=0x2e42190, m=..., port=0x2e5f180, le=0x2e5fb80) at src/mongo/db/db.cpp:198
      #13 0x00000000017c46e5 in mongo::PortMessageServer::handleIncomingMsg (arg=0x2e52540) at src/mongo/util/net/message_server_port.cpp:227
      #14 0x00007ffff7bc4182 in start_thread (arg=0x7ffff7fcd700) at pthread_create.c:312
      #15 0x00007ffff6cc538d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
       
       
      (gdb) f 5
      #5  0x00000000012c05d3 in mongo::SubplanStage::pickBestPlan (this=0x356c340) at src/mongo/db/exec/subplan.cpp:298
      298	                if (SolutionCacheData::USE_INDEX_TAGS_SOLN != bestSoln->cacheData->solnType) {
      (gdb) p bestSoln->cacheData
      $1 = {px = 0x0}
