I run an aggregation with the following command:
db.books.aggregate([
    { $match: { $and: [
        { did: { $in: ["85BA6D291638EBA514F7650740BF7059"] } },
        { pid: { $in: ["38A062302D4411D28E71006008960167", "0730F68F4B8B4B52AA23F0AAB46F3CA8"] } }
    ] } },
    { $graphLookup: { from: "books", startWith: "$depn", connectFromField: "depn",
        connectToField: "did", as: "uses", depthField: "depth",
        restrictSearchWithMatch: { pid: { $in: ["38A062302D4411D28E71006008960167", "0730F68F4B8B4B52AA23F0AAB46F3CA8"] } } } },
    { $unwind: { path: "$uses" } },
    { $group: { _id: { id: "$uses.did", tp: "$uses.tp", pid: "$uses.pid", pFid: "$uses.pFid", depth: "$uses.depth" } } },
    { $group: { _id: "$_id.depth", count: { $sum: 1 } } }
], { allowDiskUse: true })
Even though allowDiskUse(true) is set, I still get error code 40099, "errmsg" : "$graphLookup reached maximum memory consumption". (PS: the data being searched is only about 15 MB of JSON.) Could someone tell me how to make the lookup spill to disk when the data is this large? There is also a related issue, SERVER-38560, about $project not being usable at each level of the $graphLookup.
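For context, the only workaround I can think of so far is capping the recursion with the documented maxDepth option of $graphLookup, which keeps the working set smaller but obviously truncates the traversal rather than using the disk. A minimal sketch of that idea (the depth value of 2 is purely illustrative, and the later $unwind/$group stages would stay the same as above):

// Workaround sketch, not a real fix: limit recursion depth so the
// $graphLookup working set stays below the in-memory limit.
db.books.aggregate([
    { $match: { did: { $in: ["85BA6D291638EBA514F7650740BF7059"] } } },
    { $graphLookup: { from: "books", startWith: "$depn", connectFromField: "depn",
        connectToField: "did", as: "uses", depthField: "depth",
        maxDepth: 2,  // hypothetical cap, chosen only for illustration
        restrictSearchWithMatch: { pid: { $in: ["38A062302D4411D28E71006008960167", "0730F68F4B8B4B52AA23F0AAB46F3CA8"] } } } }
], { allowDiskUse: true })

This limits memory but changes the result (depths beyond the cap are never counted), so I am still looking for a way to let $graphLookup itself use the disk.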