[SERVER-5682] writes cause non-trivial read i/o Created: 23/Apr/12  Updated: 15/Aug/12  Resolved: 10/Jul/12

Status: Closed
Project: Core Server
Component/s: Performance
Affects Version/s: 2.0.2
Fix Version/s: None

Type: Question Priority: Minor - P4
Reporter: Dennis Jacobfeuerborn Assignee: Unassigned
Resolution: Incomplete Votes: 0
Labels: performance
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File mongo.png    
Participants:

 Description   

I've run into a problem with a MongoDB setup (2.0.2) that is only written to, yet I see significant read I/O that slows performance down quite a bit. The setup is a simple 3-node MongoDB cluster (primary, secondary, arbiter) where the primary and secondary nodes have 8 GB of RAM and run a replica set.

With a collection size of about 44 GB I see a read-to-write ratio of 3:1, that is, there are actually more reads than writes happening on the system. After renaming the collection and starting a new one from scratch this ratio changed to 1:4, which is significantly better, but I still don't understand what causes the reads. The entire collection fits in memory right now, so I'm not sure what mongo could be reading from disk all the time.
Once I stop writing to the collection the reads also disappear.
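For what it's worth, the ratio above shows up directly in iostat's extended output. This is just a sketch; the volume to watch is whichever one holds the MongoDB dbpath:

iostat -xm 5
# compare the rMB/s and wMB/s columns for the data volume;
# for a write-only workload rMB/s should stay close to zero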



 Comments   
Comment by Ian Whalen (Inactive) [ 10/Jul/12 ]

@dennis I'm closing this for now, but please reopen if you see this again or if you can provide any further info.

Comment by Scott Hernandez (Inactive) [ 23/Apr/12 ]

Please provide the stats for your collection and database. If the indexes don't fit in 8 GB then it would very likely need to do many reads during inserts. Also, do you ever update or remove documents?

Please include those stats when this happens again.
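For example, something along these lines run from a shell on the primary would capture it (the database and collection names below are just placeholders):

mongo mydb --eval 'printjson(db.stats()); printjson(db.mycollection.stats())'
# totalIndexSize compared against the 8GB of RAM is the interesting number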

Comment by Dennis Jacobfeuerborn [ 23/Apr/12 ]

The nodes have 8 GB of memory.

I attached a graph of CPU utilization during the last month. You can see I/O wait grow until we "reboot" the collection. From then on everything looks good again.

Since this is a production system there isn't much we can do to experiment, but it seems that as the collection outgrows the RAM, the I/O becomes increasingly problematic. That would be expected if we did a lot of reads on a working set that doesn't fit into memory, but for a write-only system these reads should not exist.

At the moment I see 90 KB read for every 10 MB written, which is much more in line with what I would expect. Theoretically it should stay this way even as the collection grows, though.
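One way we could keep an eye on that as the collection grows might be to watch mongostat while the inserts run (just a sketch):

mongostat 5
# the "faults" column should stay near zero as long as the working set
# (the indexes in particular) still fits in RAM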

Comment by Scott Hernandez (Inactive) [ 23/Apr/12 ]

Do you have a way of reproducing this behavior?

Also, what do "free -ltm", "iostat -xtm 2" and mongostat show during the time when you don't expect reads to be happening?

If you have 44GB of data, how much memory do you have?

For your collection how many indexes do you have? What does db.coll.stats() look like?
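If it's easier, something along these lines captures all of that in one pass while the unexpected reads are occurring (database and collection names are placeholders):

free -ltm                      # memory and cache usage
iostat -xtm 2 30               # 30 samples of per-device read/write throughput
mongostat 2                    # watch the faults and locked % columns
mongo mydb --eval 'printjson(db.mycollection.stats())'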

Comment by Dennis Jacobfeuerborn [ 23/Apr/12 ]

Some additional info: this happens on a CentOS system with the filesystem mounted using the relatime option, and shutting down the secondary node doesn't change the behaviour of the I/O.
I measured the I/O using "iotop -Pa" to make sure that it is indeed the mongod process causing the I/O and not something else on the system, even though this is a minimal CentOS 6 install with only the mongo RPMs installed.
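For the record, this was roughly the invocation; pidstat (from sysstat) would be an alternative cross-check, in case it's useful:

iotop -Pa                          # accumulated I/O per process rather than per thread
pidstat -d -p $(pidof mongod) 5    # per-process read/write kB/s every 5 seconds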
