[SERVER-35590] GlobalWrite during MapReduce: outType: reduce nonAtomic: true Created: 14/Jun/18 Updated: 25/Jul/18 Resolved: 19/Jun/18 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | MapReduce |
| Affects Version/s: | 3.6.5 |
| Fix Version/s: | None |
| Type: | Question | Priority: | Critical - P2 |
| Reporter: | Rui Ribeiro | Assignee: | Asya Kamsky |
| Resolution: | Won't Fix | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Participants: |
| Description |
|
I checked the source code in the file mr.cpp and I can see that a map-reduce with nonAtomic: true and outType: reduce can take a global write lock. The comment in the code says: "This must be global because we may write across different databases." My question is: how can a map-reduce write across different databases, when the output database of the map-reduce has to be specified? This global lock has a significant impact on the performance of the other operations I am running (it makes them slow). A sketch of the kind of invocation involved is shown below. Code (mr.cpp):
|
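For illustration only (not part of the original report): a minimal mongo shell invocation of the shape described above. The collection names events and daily_totals, the field names, and the commented-out database name are hypothetical.

// Hypothetical input collection "events" and existing output collection "daily_totals".
var mapFn = function () {
  emit(this.day, this.count);            // key: day, value: count
};

var reduceFn = function (key, values) {
  return Array.sum(values);              // sum the counts for each day
};

db.events.mapReduce(mapFn, reduceFn, {
  out: {
    reduce: "daily_totals",   // fold results into the existing output collection via reduce
    nonAtomic: true           // post-processing releases the lock between writes (merge/reduce only)
    // db: "reporting"        // out can also name a different output database, which seems to be
                              // why the mr.cpp comment mentions writing "across different databases"
  }
});

According to the description above, it is this output (reduce) step that acquires the global write lock even when nonAtomic is true.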
| Comments |
| Comment by Mark [X] [ 25/Jul/18 ] |
|
Hi Asya, I am a bit confused as well as to why this is not being fixed. I understand that the goal is to move these types of workloads to the aggregation framework; however, for some workloads that isn't achievable.
This is a very simple fix, and I would like to understand why the pull request from https://jira.mongodb.org/browse/SERVER-7831 hasn't been pushed into the product (or even reviewed).
In other words, is there a reason the Global W lock is actually needed when we are not creating a new collection and the atomic flag is set? Thanks. |
| Comment by Rui Ribeiro [ 20/Jun/18 ] |
|
Hi Asya, as you can see, I didn't create this issue as a bug, but as a question. I was asking why you need to take a global lock in a map-reduce that is doing a reduce between two collections. |
| Comment by Asya Kamsky [ 19/Jun/18 ] |
|
Closing because we don't plan to make any improvements to the MapReduce code; instead we will continue working on making the aggregation framework able to handle the same use cases better. |
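For readers arriving here later: a hedged sketch of the aggregation-framework alternative referred to in the comment above, assuming MongoDB 4.2+ where $merge is available. The collection and field names match the hypothetical map-reduce sketch in the description.

// Hypothetical aggregation equivalent of the map-reduce sketch above:
// group "events" by day and fold the results into "daily_totals".
db.events.aggregate([
  { $group: { _id: "$day", value: { $sum: "$count" } } },
  { $merge: {
      into: "daily_totals",               // may also be { db: "reporting", coll: "daily_totals" }
      whenMatched: [                      // custom pipeline approximates mapReduce's "reduce" output mode
        { $set: { value: { $add: ["$value", "$$new.value"] } } }
      ],
      whenNotMatched: "insert"            // insert keys not yet present in the output collection
  } }
]);

Unlike the map-reduce output path discussed in this ticket, $merge performs ordinary per-document insert/update operations, which should avoid the global write lock described above.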