We've done some benchmarking, yes.
We're running ZFS on Linux 0.6.2 on RHEL6, and this setup is very new here. Throughput with an I/O-bound workload is near-identical to a replica set backed by ext4. Though this should be taken with a pinch of salt, as:
1) the application we're running is fairly data-intensive, so we already do in-app compression with LZ4
2) the databases are very new and the traffic is mostly write-only
1) gives us end-to-end I/O savings, including network traffic and memory load on the MongoDB servers. With 2) it's not clear how, or whether, performance will degrade as the DB ages and ZFS's copy-on-write nature leads to fragmentation. Given MongoDB's access pattern is essentially random read I/O anyway, I'm hoping it won't be too bad, but time will tell.
As we already do compression in the app, ZFS gives us a compression factor of only ~1.1x on these MongoDB databases. For databases that don't pre-compress their data (e.g. the configdb) and for home directories we get a 2x-10x compression factor.
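You can see why the ratio is so low with a toy Python sketch (this assumes the third-party python-lz4 package, and the sample data is made up, not ours): LZ4 output barely compresses a second time, so ZFS has almost nothing left to squeeze.

```python
# Illustration only: recompressing LZ4-compressed data gains almost nothing.
# Requires the python-lz4 package (pip install lz4).
import lz4.frame

# Repetitive sample bytes, standing in for a typical compressible document body.
raw = b"status=ok user=12345 action=page_view referrer=example.com " * 1000

once = lz4.frame.compress(raw)    # in-app compression, as our application does
twice = lz4.frame.compress(once)  # roughly what ZFS's lz4 then sees on disk

print(f"raw  -> lz4:       {len(raw) / len(once):.2f}x")
print(f"lz4  -> lz4 again: {len(once) / len(twice):.2f}x")  # close to 1.0x
```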
Edit: although the setup is new, we've put >8TB of data into it, and soak-tested full I/O-bound reads for a few days with nothing blowing up.
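For flavour, a read soak test along these lines can be very little code. This is a minimal sketch of the general shape, not our actual harness, assuming the pymongo driver; the host, database, and collection names are hypothetical:

```python
# Minimal random-read soak test sketch (assumes pymongo is installed).
import random
import time

from pymongo import MongoClient

client = MongoClient("mongodb://mongo-test:27017")
coll = client["soakdb"]["events"]

n_docs = coll.estimated_document_count()
deadline = time.time() + 3600  # run for an hour per invocation

reads = 0
while time.time() < deadline:
    # Random skips spread reads across the whole dataset, defeating the
    # cache and keeping the workload I/O bound. Note skip is O(skip)
    # server-side, which is fine for a soak test, not for production.
    coll.find_one(skip=random.randrange(n_docs))
    reads += 1

print(f"completed {reads} random reads")
```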