When a CachedBSONObj is set to a BSONObj that is larger than the CachedBSONObj's fixed-size buffer, the BSONObj is not copied; instead it is flagged as "tooBig", and the cached object thereafter returns a fixed placeholder value indicating the query was too large to record.
This placeholder is what shows up for large queries in currentOp(), in profiling, and, in 2.6, in the log files.
Unfortunately, this is not particularly useful when large queries are causing performance problems. It also wastes space: the CachedBSONObj's fixed-size buffer is still allocated but left unused.
Much better would be to copy as much of the BSONObj as possible into the CachedBSONObj's buffer.
Even a naive algorithm would help: walk the BSONObj's fields and memcpy them in one at a time until there is not enough space left in the buffer, at which point copy in a suitable `$msg: "query truncated"` field or similar. Better still would be something that does this recursively, diving inside arrays and sub-documents and copying them (partially if necessary) until the buffer is exhausted. Neither approach ought to add much overhead. The pathological case is an object with very many tiny fields, which could be dealt with by capping the number of fields copied at, say, 100.