Reproducible segfault with aggregation framework query

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 2.1.0
    • Component/s: Aggregation Framework
    • Environment:
      Ubuntu 12.04, Linux 3.2.0-25-generic #40-Ubuntu SMP Wed May 23 20:30:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    • Operating System: Linux

      I was playing around with the aggregation framework and was able to reproduce a segfault with the following aggregate() query:

      db.foo.drop();
      
      db.foo.insert({_id: 1, pages:[{widgets:[{id:"w1"},{id:"w2"}]},{widgets:[{id:"w3"},{id:"w4"}]}]});
      
      db.foo.find({},{"pages.widgets.id":1}).pretty();
      
      db.foo.aggregate([
          {$match: {_id: 1}},
          {$project: {_id: 0, pages: 1}},
          {$unwind: "$pages"},
          {$project: {pages: "$pages.widgets"}}
      ]);
      
      

      After this executes, the shell immediately drops the connection and I see the following in the mongod log:

      Tue Jun 26 17:18:50 [conn5] boost assertion failure px != 0 T* boost::intrusive_ptr<T>::operator->() const [with T = const mongo::Value] /opt/extra/include/boost/smart_ptr/intrusive_ptr.hpp 166
      Tue Jun 26 17:18:50 Invalid access at address: 0xc from thread: conn5
      
      Tue Jun 26 17:18:50 Got signal: 11 (Segmentation fault).
      
      Tue Jun 26 17:18:50 Backtrace:
      0x51f764 0x51fdc2 0x7f83e7be2cb0 0x83840f 0x5ff0f3 0x984063 0x83a22d 0x9453f6 0x946956 0x948427 0x85f8a5 0x861db4 0x606be0 0x60ddd8 0x540866 0x83b07c 0x7f83e7bdae9a 0x7f83e70f84bd
       /usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x3d4) [0x51f764]
       /usr/bin/mongod(_ZN5mongo24abruptQuitWithAddrSignalEiP7siginfoPv+0x262) [0x51fdc2]
       /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x7f83e7be2cb0]
       /usr/bin/mongod(_ZN5mongo8Document8addFieldERKSsRKN5boost13intrusive_ptrIKNS_5ValueEEE+0x3f) [0x83840f]
       /usr/bin/mongod(_ZN5mongo21DocumentSourceProject10getCurrentEv+0xb3) [0x5ff0f3]
       /usr/bin/mongod(_ZN5mongo8Pipeline3runERNS_14BSONObjBuilderERSsRKN5boost13intrusive_ptrINS_14DocumentSourceEEE+0x583) [0x984063]
       /usr/bin/mongod(_ZN5mongo15PipelineCommand3runERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x8d) [0x83a22d]
       /usr/bin/mongod(_ZN5mongo12_execCommandEPNS_7CommandERKSsRNS_7BSONObjEiRNS_14BSONObjBuilderEb+0x56) [0x9453f6]
       /usr/bin/mongod(_ZN5mongo11execCommandEPNS_7CommandERNS_6ClientEiPKcRNS_7BSONObjERNS_14BSONObjBuilderEb+0x806) [0x946956]
       /usr/bin/mongod(_ZN5mongo12_runCommandsEPKcRNS_7BSONObjERNS_11_BufBuilderINS_16TrivialAllocatorEEERNS_14BSONObjBuilderEbi+0x707) [0x948427]
       /usr/bin/mongod(_ZN5mongo11runCommandsEPKcRNS_7BSONObjERNS_5CurOpERNS_11_BufBuilderINS_16TrivialAllocatorEEERNS_14BSONObjBuilderEbi+0x35) [0x85f8a5]
       /usr/bin/mongod(_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x974) [0x861db4]
       /usr/bin/mongod() [0x606be0]
       /usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x308) [0x60ddd8]
       /usr/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x76) [0x540866]
       /usr/bin/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x27c) [0x83b07c]
       /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f83e7bdae9a]
       /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f83e70f84bd]
      
      Logstream::get called in uninitialized state
      Tue Jun 26 17:18:50 [conn5] ERROR: Client::~Client _context should be null but is not; client:conn
      Logstream::get called in uninitialized state
      Tue Jun 26 17:18:50 [conn5] ERROR: Client::shutdown not called: conn
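
      For reference, here is a sketch of the result this pipeline would be expected to return if it did not crash the server. This is based on the documented $unwind/$project semantics and is not output captured from 2.1.0; the final $project should simply rename each unwound element's widgets array to "pages":

      // Hypothetical expected (non-crashing) response of the aggregate() call above:
      // $match keeps the single document, $project drops _id, $unwind "$pages"
      // emits one document per pages element, and the final $project projects
      // each element's widgets array as "pages".
      {
          "result" : [
              { "pages" : [ { "id" : "w1" }, { "id" : "w2" } ] },
              { "pages" : [ { "id" : "w3" }, { "id" : "w4" } ] }
          ],
          "ok" : 1
      }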
      

            Assignee: Unassigned
            Reporter: Jeremy Mikola
            Votes: 0
            Watchers: 3

              Created:
              Updated:
              Resolved: