TL;DR: Map/reduce is slower than the Aggregation Framework.
I have listed benchmarks of both approaches for calculating simple aggregate values in my recent blog post: on a collection of 1M documents, the Aggregation Framework proved to be about 10x faster than map/reduce.
I have also compared the current Aggregable implementation, which uses map/reduce, with my proposed one, and saw a 3-10x speedup.
(As there is currently no DSL for aggregation and MONGOID-2720 hasn't been merged yet, this implementation builds the aggregation pipeline itself. It's still worth merging, as it improves performance.)
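To illustrate the difference, here is a minimal sketch of the same sum computed both ways. The collection and field names (`bands`, `likes`) are hypothetical, as is the `sum_pipeline` helper; the map/reduce side requires server-side JavaScript evaluation per document, while the pipeline is a plain BSON document executed natively, which is where the speedup comes from:

```ruby
# Map/reduce version (the style the current Aggregable code uses):
# two JavaScript functions the server must interpret for every document.
MAP    = "function() { emit(null, this.likes); }"
REDUCE = "function(key, values) { return Array.sum(values); }"

# Aggregation Framework version: the whole query is just a data
# structure, no JS interpreter involved.
def sum_pipeline(field)
  [{ "$group" => { "_id" => nil, "sum" => { "$sum" => "$#{field}" } } }]
end

pipeline = sum_pipeline("likes")
# With a session/collection handle this would run as something like:
#   session[:bands].aggregate(pipeline)
```

Building the pipeline as plain Ruby hashes is also what lets the proposed implementation work without waiting for an aggregation DSL.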