- Type: Bug
- Resolution: Won't Fix
- Priority: Major - P3
- None
- Affects Version/s: 1.9.0
- Component/s: None
- Labels: None
- Environment: Linux + NFS with data files & journal files stored on the NFS mount
- ALL
NFS data + large updates cause very "spiky" resident memory usage.
When journaling is enabled, the problem is worse. The system seems to evict large amounts of data during updates, which results in a significant number of page faults. This page faulting can slow the updates down dramatically.
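The effect can be approximated outside mongod. The sketch below (a minimal illustration, not MongoDB's actual storage code; the file path and sizes are arbitrary) dirties every page of a memory-mapped file and measures how much the resident set grows. Run against a local file the pages stay resident; pointing the path at an NFS mount lets you watch the kernel write back and evict dirty pages, after which further access faults them in again.

```python
import mmap
import os
import resource
import tempfile

def rss_kb():
    # ru_maxrss is reported in kilobytes on Linux
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# Arbitrary demo path; point it at an NFS mount to reproduce the reported behaviour
path = os.path.join(tempfile.gettempdir(), "mmap_fault_demo.dat")
size = 64 * 1024 * 1024  # 64 MB "data file"

with open(path, "wb") as f:
    f.truncate(size)

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), size)
    before = rss_kb()
    # Dirty every page, mimicking a burst of updates against a memory-mapped file.
    for off in range(0, size, mmap.PAGESIZE):
        m[off] = 1
    after = rss_kb()
    m.close()
os.remove(path)

print("resident growth (KB):", after - before)
```

On local storage the growth is roughly the mapped size; the bug report's symptom is that on NFS the resident set collapses again mid-workload instead of staying warm.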
Two samples from mongostat while running the same set of updates:
mongostat_nfs_journal.out:
update flushes mapped vsize res faults locked % idx miss % time
0 0 464m 1.08g 139m 0 90.1 0 18:21:44
0 0 464m 1.08g 286m 0 70.1 0 18:21:49
3126 0 464m 1.08g 37m 44 98.9 0 18:21:51
3836 0 464m 1.08g 49m 27 86.1 0 18:21:53
3801 0 464m 1.08g 62m 23 80.8 0 18:21:55
3952 0 464m 1.08g 65m 26 84.6 0 18:21:57
3685 0 464m 1.08g 75m 23 82.7 0 18:21:59
4027 0 464m 1.08g 88m 27 82.6 0 18:22:01
mongostat_local_journal.out:
update flushes mapped vsize res faults locked % idx miss % time
0 0 464m 1.07g 162m 0 100 0 18:49:36
0 0 464m 1.07g 287m 0 41.4 0 18:49:40
4231 0 464m 1.07g 158m 0 113 0 18:49:42
4280 0 464m 1.07g 161m 0 84.4 0 18:49:44
4184 0 464m 1.07g 154m 0 80.7 0 18:49:46
4163 0 464m 1.07g 167m 0 86 0 18:49:48
4207 0 464m 1.07g 166m 0 92.5 0 18:49:50
Observing the first three data rows of each file:
- Journaling + NFS: res goes from 139M to 286M, then collapses to 37M
- Journaling + local: res goes from 162M to 287M and back to ~160M
- Without journaling there are no such drops
- This machine has 8 GB of RAM, so there should be no need to flush any memory
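The collapse stands out mechanically as well. The snippet below (a hypothetical helper, not part of the mongostat tooling) parses the res column from mongostat data rows and flags any sample where resident memory falls by more than half relative to the previous sample:

```python
def res_drops(lines, threshold=0.5):
    """Flag samples where resident memory falls by more than `threshold`
    relative to the previous sample (mongostat `res` column, index 4)."""
    def to_mb(value):
        # "286m" -> 286.0 MB, "1.08g" -> 1105.92 MB
        return float(value[:-1]) * (1024 if value.endswith("g") else 1)

    drops = []
    prev = None
    for line in lines:
        res = to_mb(line.split()[4])
        if prev is not None and res < prev * (1 - threshold):
            drops.append((prev, res))
        prev = res
    return drops

# First three data rows from mongostat_nfs_journal.out
nfs = [
    "0 0 464m 1.08g 139m 0 90.1 0 18:21:44",
    "0 0 464m 1.08g 286m 0 70.1 0 18:21:49",
    "3126 0 464m 1.08g 37m 44 98.9 0 18:21:51",
]
print(res_drops(nfs))  # → [(286.0, 37.0)]
```

Running the same check over the local-journal sample flags nothing, which matches the observation that only the NFS run loses its resident set.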