Java Driver / JAVA-2174

Read throughput for a small working set memory

    • Type: Task
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 0.8, 3.0.0
    • Component/s: Performance
    • Labels: None

      I believe MongoDB is supposed to cache documents in memory and serve results directly from memory when the working set is small. I wrote a simple Java client that creates a collection with an indexed primary key and a single string field, inserts just one record into the collection, and then repeatedly queries for that record's field using findOne.
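For reference, a minimal sketch of the workload described above, written against the current MongoDB sync Java driver (where the old findOne corresponds to find(...).first()). The connection string, database name ("bench"), collection name ("tiny"), thread count, and duration are all placeholder assumptions; running it requires mongodb-driver-sync on the classpath and a local mongod.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.concurrent.atomic.LongAdder;
import static com.mongodb.client.model.Filters.eq;

public class FindOneBench {
    public static void main(String[] args) throws InterruptedException {
        // Connection string and names are assumptions for illustration only.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> coll =
                    client.getDatabase("bench").getCollection("tiny");
            coll.drop();
            // One document; _id is indexed by default.
            coll.insertOne(new Document("_id", 1).append("value", "hello"));

            int threads = 8;            // assumed thread count
            long durationMs = 5_000;    // assumed run length
            LongAdder ops = new LongAdder();
            long deadline = System.currentTimeMillis() + durationMs;

            Thread[] workers = new Thread[threads];
            for (int i = 0; i < threads; i++) {
                workers[i] = new Thread(() -> {
                    // Repeatedly read the single document by its indexed key.
                    while (System.currentTimeMillis() < deadline) {
                        Document d = coll.find(eq("_id", 1)).first();
                        if (d != null) ops.increment();
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers) t.join();
            System.out.printf("~%.0f reads/s%n", ops.sum() * 1000.0 / durationMs);
        }
    }
}
```

Each read here pays for BSON encoding/decoding and a network round trip to the server, which is where most of the gap versus an in-process map comes from, independent of the storage engine.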

      The maximum throughput I get with many threads for the above workload is just ~15K/s on a 2-core machine. I can clearly write an in-memory hashmap-based cache that serves reads at nearly ~2 million/s by returning non-dirty entries directly from memory. How can I make MongoDB do the same? Isn't it supposed to perform this optimization automatically?
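The in-process baseline mentioned above can be sketched with a ConcurrentHashMap; every detail here (key, value, thread count, duration) is a placeholder assumption, but it illustrates why a map read with no network hop or serialization is orders of magnitude faster than a driver round trip.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class MapReadBench {
    public static void main(String[] args) throws InterruptedException {
        // Single-entry "working set", analogous to the one-document collection.
        ConcurrentHashMap<Integer, String> cache = new ConcurrentHashMap<>();
        cache.put(1, "hello");

        int threads = 4;           // assumed thread count
        long durationMs = 1_000;   // assumed run length
        LongAdder ops = new LongAdder();
        long deadline = System.currentTimeMillis() + durationMs;

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                // Pure in-memory reads: no network, no BSON decoding.
                while (System.currentTimeMillis() < deadline) {
                    if ("hello".equals(cache.get(1))) {
                        ops.increment();
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.printf("~%.0f reads/s%n", ops.sum() * 1000.0 / durationMs);
    }
}
```

ConcurrentHashMap.get is lock-free for readers, so the threads scale without contending on the single hot key.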

      I tried using both the MMAPv1 and WiredTiger storage engines, but the read throughput hardly changes. All I need is a big map that automatically pages to disk in the background. Can MongoDB do this?

      PS: Apologies if this is not the right forum for such questions. I tried Stack Overflow and dba.stackexchange but didn't get any answers.

            Assignee:
            Unassigned
            Reporter:
            avenka V. Arun
            Votes:
            0
            Watchers:
            2

              Created:
              Updated:
              Resolved: