[JAVA-2174] Read throughput for a small working set memory Created: 22/Apr/16  Updated: 11/Sep/19  Resolved: 25/Apr/16

Status: Closed
Project: Java Driver
Component/s: Performance
Affects Version/s: 0.8, 3.0.0
Fix Version/s: None

Type: Task Priority: Major - P3
Reporter: V. Arun Assignee: Unassigned
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

I believe mongo is supposed to cache documents in memory and return results directly from memory when the working set is small. I wrote a simple Java client that creates a collection with an indexed primary key and a single string field, inserts just one document, and then repeatedly queries for that document's field using findOne.

The maximum throughput I get with many threads for the above workload is only ~15K reads/s on a 2-core machine. I can clearly write an in-memory hashmap-based cache that serves reads at nearly 2 million/s by returning non-dirty entries directly from memory. How can I make mongo do the same? Isn't it supposed to do this optimization automatically? Or does it already?
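For comparison, the in-memory baseline mentioned above can be sketched in plain Java with no driver or server involved. This is only a rough illustration of the ~2M reads/s figure; the thread count and iteration count are arbitrary choices, not values from the report:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class MapReadBench {
    public static void main(String[] args) throws Exception {
        // One record, mirroring the MongoDB test: a single key mapped to a string field.
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        map.put("key1", "value1");

        int threads = 4;                 // arbitrary; match your core count
        long readsPerThread = 1_000_000L;
        LongAdder total = new LongAdder();

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (long i = 0; i < readsPerThread; i++) {
                    if (map.get("key1") != null) total.increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d reads in %.2fs (%.0f reads/s)%n",
                total.sum(), secs, total.sum() / secs);
    }
}
```

Reads from a `ConcurrentHashMap` involve no network round-trip, query parsing, or storage-engine layer, which is the main reason the gap to the driver-based numbers is so large.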

I tried both the MMAPv1 and WiredTiger storage engines, but the read throughput hardly changes. All I need is a big map that automatically pages to disk in the background. Can mongo do this?
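For reference, WiredTiger's in-memory cache size can be set at mongod startup, which controls how much of the working set is kept in memory. A minimal config sketch (the 1 GB value is just a placeholder, not a recommendation):

```yaml
storage:
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
```

With a one-document working set, however, cache size is unlikely to be the bottleneck; per-query network and protocol overhead usually dominates in this kind of test.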

PS: Apologies if this is not the right forum for such questions. I tried stackoverflow and dba.stackexchange but didn't get any answers.



 Comments   
Comment by Jeffrey Yemin [ 25/Apr/16 ]

Please link to the StackOverflow question and we'll see if we can get someone to take a look at it.

Generated at Thu Feb 08 08:56:32 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.