Core Server / SERVER-12431

Positional projection on $or query causes server to segfault

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 2.5.5
    • Component/s: Querying
    • Labels: None
    • Operating System: ALL

      In master (git hash f564c31f4e00d53158e7dd26a7ccf013478761ea), the following query causes the server to segfault:

      db.foo.find({$or: [{"a": 1}]}, {"a.$": 1, "b": 1})
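
      For completeness, a self-contained shell session that should reproduce the crash. The report gives no collection setup, so the sample document below is an assumption; since the fault occurs during query canonicalization (ParsedProjection::make in the stack trace), the collection contents most likely do not matter.

      // Hypothetical setup: the report shows none. The crash should
      // reproduce even against an empty collection, because the fault
      // happens while parsing the projection, before any document is read.
      db.foo.drop()
      db.foo.insert({a: [1, 2], b: "x"})

      // The crashing query: positional projection "a.$" combined with $or.
      db.foo.find({$or: [{"a": 1}]}, {"a.$": 1, "b": 1})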
      

      Stack trace:

      Process 19098 launched: './mongod' (x86_64)
      ./mongod --help for help and startup options
      2014-01-22T08:45:51.654-0800 [initandlisten] MongoDB starting : pid=19098 port=27017 dbpath=/data/db 64-bit host=fifiteener.local
      2014-01-22T08:45:51.654-0800 [initandlisten] 
      2014-01-22T08:45:51.654-0800 [initandlisten] ** NOTE: This is a development version (2.5.5-pre-) of MongoDB.
      2014-01-22T08:45:51.654-0800 [initandlisten] **       Not recommended for production.
      2014-01-22T08:45:51.654-0800 [initandlisten] 
      2014-01-22T08:45:51.654-0800 [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
      2014-01-22T08:45:51.654-0800 [initandlisten] 
      2014-01-22T08:45:51.654-0800 [initandlisten] db version v2.5.5-pre-
      2014-01-22T08:45:51.654-0800 [initandlisten] git version: f564c31f4e00d53158e7dd26a7ccf013478761ea
      2014-01-22T08:45:51.654-0800 [initandlisten] build info: Darwin fifiteener.local 13.0.2 Darwin Kernel Version 13.0.2: Sun Sep 29 19:38:57 PDT 2013; root:xnu-2422.75.4~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
      2014-01-22T08:45:51.654-0800 [initandlisten] allocator: tcmalloc
      2014-01-22T08:45:51.654-0800 [initandlisten] options: {}
      2014-01-22T08:45:51.655-0800 [initandlisten] journal dir=/data/db/journal
      2014-01-22T08:45:51.655-0800 [initandlisten] recover begin
      2014-01-22T08:45:51.656-0800 [initandlisten] recover lsn: 0
      2014-01-22T08:45:51.656-0800 [initandlisten] recover /data/db/journal/j._0
      2014-01-22T08:45:51.658-0800 [initandlisten] recover cleaning up
      2014-01-22T08:45:51.658-0800 [initandlisten] removeJournalFiles
      2014-01-22T08:45:51.659-0800 [initandlisten] recover done
      2014-01-22T08:45:51.671-0800 [initandlisten] waiting for connections on port 27017
      2014-01-22T08:45:53.572-0800 [initandlisten] connection accepted from 127.0.0.1:53214 #1 (1 connection now open)
      Process 19098 stopped
      * thread #2: tid = 0x9ab8f, 0x000000010040e5d6 mongod`mongo::ParsedProjection::_hasPositionalOperatorMatch(query=0x000000010415d140, matchfield=0x000000010481aa88) + 38 at parsed_projection.cpp:282, stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
          frame #0: 0x000000010040e5d6 mongod`mongo::ParsedProjection::_hasPositionalOperatorMatch(query=0x000000010415d140, matchfield=0x000000010481aa88) + 38 at parsed_projection.cpp:282
         279 	    bool ParsedProjection::_hasPositionalOperatorMatch(const MatchExpression* const query,
         280 	                                                       const std::string& matchfield) {
         281 	        if (query->isLogical()) {
      -> 282 	            for (unsigned int i = 0; i < query->numChildren(); ++i) {
         283 	                if (_hasPositionalOperatorMatch(query->getChild(i), matchfield)) {
         284 	                    return true;
         285 	                }
      (lldb) bt
      * thread #2: tid = 0x9ab8f, 0x000000010040e5d6 mongod`mongo::ParsedProjection::_hasPositionalOperatorMatch(query=0x000000010415d140, matchfield=0x000000010481aa88) + 38 at parsed_projection.cpp:282, stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
          frame #0: 0x000000010040e5d6 mongod`mongo::ParsedProjection::_hasPositionalOperatorMatch(query=0x000000010415d140, matchfield=0x000000010481aa88) + 38 at parsed_projection.cpp:282
          frame #1: 0x000000010040c4f5 mongod`mongo::ParsedProjection::make(spec=0x0000000104048300, query=0x000000010415d140, out=0x000000010481adf0) + 1269 at parsed_projection.cpp:222
          frame #2: 0x00000001003ec15e mongod`mongo::CanonicalQuery::init(this=0x000000010415bb20, lpq=<unavailable>) + 286 at canonical_query.cpp:437
          frame #3: 0x00000001003ebec9 mongod`mongo::CanonicalQuery::canonicalize(qm=<unavailable>, out=0x000000010481b1f8) + 137 at canonical_query.cpp:193
          frame #4: 0x00000001004093ee mongod`mongo::newRunQuery(m=<unavailable>, q=<unavailable>, curop=0x0000000104062c00, result=0x000000010405c210) + 1358 at new_find.cpp:387
          frame #5: 0x00000001002be10b mongod`mongo::assembleResponse(mongo::Message&, mongo::DbResponse&, mongo::HostAndPort const&) [inlined] mongo::receivedQuery(this=0x0000000100778557, isArray=false, full=false) + 195 at instance.cpp:265
          frame #6: 0x00000001002be048 mongod`mongo::assembleResponse(m=<unavailable>, dbresponse=0x000000010481bb50, remote=0x000000010481bb00) + 1464 at instance.cpp:428
          frame #7: 0x000000010000efc7 mongod`mongo::MyMessageHandler::process(this=<unavailable>, m=0x000000010481bd28, port=0x000000010401acd0, le=0x000000010401bbd0) + 183 at db.cpp:201
          frame #8: 0x00000001006e98b1 mongod`mongo::PortMessageServer::handleIncomingMsg(arg=0x000000010416d860) + 913 at message_server_port.cpp:209
          frame #9: 0x00000001007680b1 mongod`thread_proxy(param=<unavailable>) + 177 at thread.cpp:121
          frame #10: 0x00007fff86eca899 libsystem_pthread.dylib`_pthread_body + 138
          frame #11: 0x00007fff86eca72a libsystem_pthread.dylib`_pthread_start + 137
          frame #12: 0x00007fff86ecefc9 libsystem_pthread.dylib`thread_start + 13
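
      The fault fires at parsed_projection.cpp:282, inside the loop that recurses over the children of the logical ($or) node while ParsedProjection::make checks whether the positional projection "a.$" corresponds to a positional match in the query. For contrast, the logically equivalent query without the $or wrapper is sketched below; per the title the crash is specific to the $or form, so this variant should bypass the crashing loop (untested assumption).

      // Same predicate and projection, minus the $or wrapper. A plain
      // equality match is not a logical node, so the child loop at
      // parsed_projection.cpp:282 should not be entered.
      db.foo.find({"a": 1}, {"a.$": 1, "b": 1})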
      

            Assignee: Benety Goh (benety.goh@mongodb.com)
            Reporter: Ben Becker (benjamin.becker)
            Votes: 0
            Watchers: 3
