Core Server / SERVER-14123

some operations can create a BSON object larger than the 16MB limit


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.6.4, 2.7.4
    • Component/s: Querying
    • Labels:
      None
    • Operating System:
      ALL
    • Backport Completed:
    • Steps To Reproduce:

      Install 2.6.1 and run the following script:

      db.createCollection("orders");
       
      var ids = [];
      // On my system 215295 is the highest number that works
      for (var i = 0; i < 215500; ++i) {
        ids.push(new ObjectId());
      }
      db.orders.find({_id: {$in: ids}}).toArray();

    • Linked BF Score:
      0

      Description

      Issue Status as of Jul 22, 2014

      ISSUE SUMMARY
      MongoDB collects statistics about every operation. In certain scenarios, such as when generating explain output for a query, these statistics are converted into a BSON document for presentation to the user.

      In these scenarios, MongoDB fails to check that the resulting BSON document stays below the 16MB limit. This can happen with large query predicates, such as queries that contain an $in with thousands of elements.
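As a rough illustration of why thousands of $in elements add up quickly, the BSON-encoded size of an array of ObjectIds can be estimated from the BSON layout: each array element costs one type byte, a null-terminated decimal index string as its key, and 12 bytes for the ObjectId value, plus a 4-byte length prefix and a 1-byte terminator for the array itself. This is only an estimate sketch, not MongoDB's actual encoder:

```javascript
// Rough estimate of the BSON-encoded size of an array of n ObjectIds,
// as used in {_id: {$in: [...]}}. Illustrative only: per-element cost is
// type byte (1) + key cstring (decimal index + NUL) + ObjectId value (12).
function bsonObjectIdArrayBytes(n) {
  let total = 4 + 1; // int32 length prefix + trailing 0x00 terminator
  for (let i = 0; i < n; i++) {
    total += 1 + (String(i).length + 1) + 12;
  }
  return total;
}

console.log(bsonObjectIdArrayBytes(1));      // 20
console.log(bsonObjectIdArrayBytes(215500)); // ~4.2MB
```

For roughly 215,500 ids this comes to about 4.2MB, consistent with the "~4MB" query object mentioned in the original description below; the 16MB assertion fires later, when the much larger statistics document is built from that query.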

      USER IMPACT
      When the statistics document grows beyond 16MB, queries fail with an error message and an assertion error is printed in the logs.

      WORKAROUNDS
      N/A

      AFFECTED VERSIONS
      MongoDB 2.6 production releases up to 2.6.3 are affected by this issue.

      FIX VERSION
      The fix is included in the 2.6.4 production release.

      RESOLUTION DETAILS
      Keep the amount of statistics returned to the user within the 16MB limit for BSON objects, and add a warning message when these statistics are truncated.
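The truncation idea can be sketched as follows. This is a hypothetical illustration with invented names, not the actual server code: entries are accumulated until a byte budget is reached, and a warning is recorded when anything is dropped:

```javascript
// Hypothetical sketch of the fix's approach: cap the statistics that are
// serialized for the user at a byte budget, and note when truncation occurs.
const MAX_STATS_BYTES = 16 * 1024 * 1024; // 16MB BSON document limit

function capStats(entries, sizeOf, budget = MAX_STATS_BYTES) {
  const kept = [];
  let used = 0;
  for (const entry of entries) {
    const size = sizeOf(entry); // caller supplies a per-entry size estimate
    if (used + size > budget) {
      return {
        stats: kept,
        truncated: true,
        warning: "stats truncated to stay under the BSON size limit",
      };
    }
    kept.push(entry);
    used += size;
  }
  return { stats: kept, truncated: false, warning: null };
}
```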

      Original description

      Running a query for a large number of _ids results in error code 10334 due to a BSON object larger than the maximum size. The error does not come from the query object itself (which is ~4MB in the example) but arises internally, apparently in the query planner. Such a query generates the following message in the log:

      2014-05-30T22:01:54.013-0600 [conn11] Assertion: 10334:BSONObj size: 17805128 (0x10FAF48) is invalid. Size must be between 0 and 16793600(16MB) First element: type: "FETCH"
      2014-05-30T22:01:54.026-0600 [conn11] test.trades 0x11c0e91 0x1163109 0x11477e6 0x1147d3c 0x76d23b 0xd16f1a 0xd18092 0xda2043 0xd4cb1c 0xb97322 0xb99902 0x76b6af 0x117720b 0x7f7e2bc4f062 0x7f7e2af56c1d 
       ./mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11c0e91]
       ./mongod(_ZN5mongo10logContextEPKc+0x159) [0x1163109]
       ./mongod(_ZN5mongo11msgassertedEiPKc+0xe6) [0x11477e6]
       ./mongod() [0x1147d3c]
       ./mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x41b) [0x76d23b]
       ./mongod() [0xd16f1a]
       ./mongod(_ZN5mongo11explainPlanERKNS_14PlanStageStatsEPPNS_11TypeExplainEb+0x12) [0xd18092]
       ./mongod(_ZNK5mongo20SingleSolutionRunner7getInfoEPPNS_11TypeExplainEPPNS_8PlanInfoE+0x53) [0xda2043]
       ./mongod(_ZN5mongo11newRunQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x133c) [0xd4cb1c]
       ./mongod() [0xb97322]
       ./mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x442) [0xb99902]
       ./mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x9f) [0x76b6af]
       ./mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x4fb) [0x117720b]
       /lib/x86_64-linux-gnu/libpthread.so.0(+0x8062) [0x7f7e2bc4f062]
       /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f7e2af56c1d]

      The issue is partly that a valid query object significantly below the 16MB limit is rejected, but to me the primary problem is that this appears to be an arbitrary query limit that cannot be determined a priori and likely depends on implementation details that may change between versions.

      Is there a way to determine whether such queries will run (for particular server versions or for all versions), or is the only solution for large queries to try them and cut them in half (whatever that means for queries on multiple keys) whenever they don't work?
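One pragmatic client-side approach to the "cut them in half" question, sketched here under the assumption that results do not need to be ordered across batches: split the id list into fixed-size batches and issue one $in query per batch, rather than probing for a server-dependent limit by halving failed queries. The batch size of 50,000 below is an arbitrary illustrative choice:

```javascript
// Illustrative client-side workaround: split a huge id list into
// fixed-size batches and run one $in query per batch.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// In the mongo shell this might look like (hypothetical batch size):
// let results = [];
// for (const batch of chunk(ids, 50000)) {
//   results = results.concat(db.orders.find({_id: {$in: batch}}).toArray());
// }
```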

      Updating the documentation to make a note of this limitation could also be useful to others in a similar situation in the future.

      Thanks for considering,
      Kevin
