[SERVER-7037] Error in M/R -- 'value too large to reduce' Created: 13/Sep/12 Updated: 26/Aug/17 Resolved: 11/Apr/13 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | MapReduce |
| Affects Version/s: | 2.0.0, 2.2.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Mete Dizioglu | Assignee: | Tad Marshall |
| Resolution: | Incomplete | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Operating System: | ALL |
| Participants: |
| Description |
|
When running a map/reduce job on a collection containing a large number of entities for a single key in the map/reduce operation, the computation fails because of the size of the object fed into the reduce. Code:
Additional information: |
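(The reporter's code was not captured in this export. For context, this failure mode typically arises when a reduce function returns a value that grows with the number of inputs for a hot key. A minimal sketch, hypothetical and not the reporter's code, in plain JavaScript simulating how the server re-reduces a key in batches:)

```javascript
// Hypothetical illustration of the failure mode, not the reporter's code.
// A reduce that concatenates all values grows without bound, so for a key
// with many emits its output can exceed MongoDB's 16 MB BSON object limit.
function badReduce(key, values) {
  // Output size is proportional to the number of inputs for this key.
  return { docs: values.reduce(function (acc, v) { return acc.concat(v.docs); }, []) };
}

// A reduce whose output is a bounded summary stays safe: re-reducing
// summaries never grows past a fixed size.
function goodReduce(key, values) {
  return { count: values.reduce(function (acc, v) { return acc + v.count; }, 0) };
}

// Simulate the server reducing a hot key in batches (re-reduce passes).
function simulate(reduceFn, mapped, batchSize) {
  var partials = mapped;
  while (partials.length > 1) {
    var next = [];
    for (var i = 0; i < partials.length; i += batchSize) {
      next.push(reduceFn("hotKey", partials.slice(i, i + batchSize)));
    }
    partials = next;
  }
  return partials[0];
}

// 1000 mapped emits for a single key.
var mapped = [];
for (var i = 0; i < 1000; i++) {
  mapped.push({ count: 1, docs: [i] });
}

var bad = simulate(badReduce, mapped, 100);
var good = simulate(goodReduce, mapped, 100);
console.log(bad.docs.length); // grows with input size: 1000
console.log(good.count);      // bounded summary: 1000
```

With real data the concatenating variant scales its output with the number of documents per key, which is what pushes the reduced object past the BSON size check.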
| Comments |
| Comment by Jörg Rech [ 26/Aug/17 ] |
|
When running an M/R job on a large dataset I ran into the same problem. As this issue is still open I will comment here rather than raise another issue. We run a deduplication M/R job on 594,356,796 documents in a MongoDB cluster as well as on a single instance (both installations have the same data).

Log excerpt from one shard:

numYields:0 reslen:117 locks:{ Global: { acquireCount: { r: 1437654548, w: 854234067, W: 3 }}, Database: { acquireCount: { r: 285961588, w: 854234064, R: 5748651, W: 6 }}, Collection: { acquireCount: { r: 285961588, w: 570719347 }}, Metadata: { acquireCount: { w: 283514720 }}, oplog: { acquireCount: { w: 283514720 }} } protocol:op_query 111ms

Driver-side error:

com.mongodb.CommandFailureException: { "serverUsed" : "localhost:27017" , "ok" : 0.0 , "errmsg" : "Converting from JavaScript to BSON failed: Object size 16894234 exceeds limit of 16793600 bytes." , "code" : 17260 , "codeName" : "Location17260"}

Is there any way of increasing a setting such that the M/R job works? |
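(A note on the numbers in the error message: the reported limit matches MongoDB's 16 MiB maximum BSON object size plus what appears to be a small fixed internal allowance; the exact breakdown below is an inference from the reported figures, not from documentation. The arithmetic:)

```javascript
// The errmsg reports "Object size 16894234 exceeds limit of 16793600 bytes".
// 16793600 = 16 MiB (the user-facing BSON document limit) + 16 KiB,
// consistent with a small internal slack on top of the 16 MiB limit
// (an inference from the numbers, not a documented guarantee).
var maxUserSize = 16 * 1024 * 1024;      // 16777216 bytes
var internalSlack = 16 * 1024;           // 16384 bytes
var reportedLimit = maxUserSize + internalSlack;
console.log(reportedLimit);              // 16793600

var objectSize = 16894234;               // size from the errmsg
console.log(objectSize - reportedLimit); // 100634 bytes over the limit
```

Since the limit is tied to the BSON document size cap rather than a tunable setting, the usual workarounds are making the reduce output a bounded summary or moving the deduplication to an aggregation pipeline.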
| Comment by Tad Marshall [ 13/Sep/12 ] |
|
Would you be able to attach a log file showing the error? |