[SERVER-17261] mongod rc8/rc9-pre WT OOM Created: 11/Feb/15  Updated: 10/Mar/15  Resolved: 24/Feb/15

Status: Closed
Project: Core Server
Component/s: Storage, WiredTiger
Affects Version/s: 3.0.0-rc8
Fix Version/s: 3.0.0-rc9

Type: Bug
Priority: Critical - P2
Reporter: Quentin Conner
Assignee: Michael Cahill (Inactive)
Resolution: Done
Votes: 1
Labels: 28qa
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: cache.png (PNG), heavy-reads.png (PNG), iostat.log (text), small-dropout.png (PNG), ss.log (text), timeseries-ec2-c3_8xl_sysbench_execute_full_cache_oom.html (HTML), timeseries-ec2-c3_8xl_sysbench_execute_full_cache_oom.png (PNG)
Backwards Compatibility: Fully Compatible
Participants:

 Description   

Unlike prior RCs, a WiredTiger-enabled mongod (rc7, rc8, the 2/12 nightly 79492d9cc1885d74b31b5fe24194dbc227096d6e, and rc9-pre ea5f871b550c1c3a8a5f0cd749fb47570557a067) in a standalone topology seems to grow its heap without bound until the Linux kernel kills the process. I assume it is the heap that is growing, because dirty .data pages (like those in the WT cache) would simply be paged out (written to block I/O) by the kernel if an acute memory deficit occurred.

We found this in a sysbench-based longevity (stress) test after about 15 hours. To get started, sysbench loads the data (320 million docs) with 8 threads, then goes into a 64-thread execute phase with a mix of read and write operations. The OOM occurred during the 64-thread execute phase.

We did not see any OOM with a seven-day YCSB test. YCSB runs with 8 threads.

We have seen this OOM when running against SSD block storage and against rotating magnetic hard disks.

We have seen the OOM a few times now in rc7 and rc8, but only when running the sysbench 64-thread execute workload.

Reproduction steps:

A. procure a multi-socket machine with 12 cores, like a C3 8XL in EC2

B. start with a clean database and a standalone single node of rc8 mongod configured for wiredTiger

rm -rf /data/db/* ; numactl --interleave=all ./mongod --dbpath /data/db --logpath mongodb-sysbench.log --storageEngine wiredTiger --fork
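To confirm the node is actually running WiredTiger and to watch its cache while the workload runs, something like the following can be used from the bundled mongo shell (a quick sketch; the serverStatus field names are as of 3.0):

./mongo --quiet --eval 'printjson(db.serverStatus().storageEngine)'
./mongo --quiet --eval 'printjson(db.serverStatus().wiredTiger.cache)'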

C. check out the sysbench-mongodb benchmark, modify config.bash, and patch the client sources as shown in the diff below:

git clone https://github.com/tmcallaghan/sysbench-mongodb.git
cd sysbench-mongodb
git checkout 7c8e12916fa1c7a58ff6b36c6ba4bfc28453104c

diff --git a/config.bash b/config.bash
index aaa346d..abf5fcb 100644
--- a/config.bash
+++ b/config.bash
@@ -39,7 +39,7 @@ export NUM_COLLECTIONS=16
 
 # number of documents to maintain per collection
 #   valid values : integer > 0
-export NUM_DOCUMENTS_PER_COLLECTION=10000000
+export NUM_DOCUMENTS_PER_COLLECTION=20000000
 
 # total number of documents to insert per "batch"
 #   valid values : integer > 0
@@ -55,7 +55,8 @@ export NUM_WRITER_THREADS=64
 
 # run the benchmark for this many minutes
 #   valid values : intever > 0
-export RUN_TIME_MINUTES=10
+#export RUN_TIME_MINUTES=10
+export RUN_TIME_MINUTES=10080
 export RUN_TIME_SECONDS=$[RUN_TIME_MINUTES*60]
 
 # write concern for the benchmark client
@@ -106,12 +107,12 @@ export SYSBENCH_DISTINCT_RANGES=1
 
 # number of indexed updates per sysbench "transaction"
 #   valid values : integer >= 0
-export SYSBENCH_INDEX_UPDATES=1
+export SYSBENCH_INDEX_UPDATES=3
 
 # number of non-indexed updates per sysbench "transaction"
 #   valid values : integer >= 0
-export SYSBENCH_NON_INDEX_UPDATES=1
+export SYSBENCH_NON_INDEX_UPDATES=3
 
 # number of delete/insert operations per sysbench "transaction"
 #   valid values : integer >= 0
-export SYSBENCH_INSERTS=1
+export SYSBENCH_INSERTS=2
diff --git a/src/jmongosysbenchexecute.java b/src/jmongosysbenchexecute.java
index bf35445..fa82032 100644
--- a/src/jmongosysbenchexecute.java
+++ b/src/jmongosysbenchexecute.java
@@ -164,8 +164,7 @@ public class jmongosysbenchexecute {
 
         MongoClientOptions clientOptions = new MongoClientOptions.Builder().connectionsPerHost(2048).socketTimeout(60000).writeConcern(myWC).build();
         ServerAddress srvrAdd = new ServerAddress(serverName,serverPort);
-        MongoCredential credential = MongoCredential.createMongoCRCredential(userName, dbName, passWord.toCharArray());
-        MongoClient m = new MongoClient(srvrAdd, Arrays.asList(credential));
+        MongoClient m = new MongoClient(srvrAdd);
 
         logMe("mongoOptions | " + m.getMongoOptions().toString());
         logMe("mongoWriteConcern | " + m.getWriteConcern().toString());
diff --git a/src/jmongosysbenchload.java b/src/jmongosysbenchload.java
index 420039e..cc8a4f1 100644
--- a/src/jmongosysbenchload.java
+++ b/src/jmongosysbenchload.java
@@ -116,8 +116,7 @@ public class jmongosysbenchload {
 
         MongoClientOptions clientOptions = new MongoClientOptions.Builder().connectionsPerHost(2048).socketTimeout(60000).writeConcern(myWC).build();
         ServerAddress srvrAdd = new ServerAddress(serverName,serverPort);
-        MongoCredential credential = MongoCredential.createMongoCRCredential(userName, dbName, passWord.toCharArray());
-        MongoClient m = new MongoClient(srvrAdd, Arrays.asList(credential));
+        MongoClient m = new MongoClient(srvrAdd);
 
         logMe("mongoOptions | " + m.getMongoOptions().toString());
         logMe("mongoWriteConcern | " + m.getWriteConcern().toString());

D. download the 2.12.4 Java driver for MongoDB

curl -O http://central.maven.org/maven2/org/mongodb/mongo-java-driver/2.12.4/mongo-java-driver-2.12.4.jar

E. run the workload

CLASSPATH=`pwd`/mongo-java-driver-2.12.4.jar numactl --interleave=all ./run.simple.bash
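
A time series similar to the attached iostat.log and ss.log can be captured alongside the run with a simple collection loop (a sketch only; the attached logs may have been produced differently):

# block-device telemetry, one extended sample every 10 seconds
iostat -xmt 10 > iostat.log &

# serverStatus (memory, WT cache counters, etc.) every 10 seconds
while true; do ./mongo --quiet --eval 'printjson(db.serverStatus())' >> ss.log; sleep 10; done &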



 Comments   
Comment by Quentin Conner [ 24/Feb/15 ]

24+ trials completed successfully, verifying the fix in rc9 (e6577bc37a2edba81b99146934cf7bad00c6e1b2).

Comment by Quentin Conner [ 24/Feb/15 ]

Using rc9b (e6577bc37a2edba81b99146934cf7bad00c6e1b2), I have not seen the OOM in 24 hours of repeated testing. The mongod process address space totals 33.4 GB virtual and 33.0 GB resident. Looks good to me.
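
For reference, those virtual/resident figures can be read straight from the OS (assuming a single mongod process is running):

ps -o vsz=,rss= -p $(pgrep mongod)                      # sizes reported in kB
grep -E 'VmSize|VmRSS' /proc/$(pgrep mongod)/status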

Comment by Quentin Conner [ 17/Feb/15 ]

Full original time-series data (iostat and serverStatus) for a run leading up to the OOM is attached as iostat.log and ss.log.

Comment by Quentin Conner [ 13/Feb/15 ]

Today's pre-rc9 OOM stack trace, from an EC2 machine with SSD storage, as opposed to the prior rc8 report from a physical machine with rotating magnetic storage.

2015-02-13T18:25:21.628+0000 F -        [conn26] out of memory.
 
 0xf41659 0xf40f39 0xecef02 0x84d703 0x93b4cd 0x9bbcf4 0x9bcc33 0x9bd82b 0xb8d1e5 0xa9f279 0x7e7220 0xeff71b 0x7f09200c8f18 0x7f091f1dab9d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"B41659"},{"b":"400000","o":"B40F39"},{"b":"400000","o":"ACEF02"},{"b":"400000","o":"44D703"},{"b":"400000","o":"53B4CD"},{"b":"400000","o":"5BBCF4"},{"b":"400000","o":"5BCC33"},{"b":"400000","o":"5BD82B"},{"b":"400000","o":"78D1E5"},{"b":"400000","o":"69F279"},{"b":"400000","o":"3E7220"},{"b":"400000","o":"AFF71B"},{"b":"7F09200C1000","o":"7F18"},{"b":"7F091F0F8000","o":"E2B9D"}],"processInfo":{ "mongodbVersion" : "3.0.0-rc9-pre-", "gitVersion" : "79492d9cc1885d74b31b5fe24194dbc227096d6e", "uname" : { "sysname" : "Linux", "release" : "3.14.20-20.44.amzn1.x86_64", "version" : "#1 SMP Mon Oct 6 22:52:46 UTC 2014", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFFBC6D4000", "elfType" : 3 }, { "b" : "7F09200C1000", "path" : "/lib64/libpthread.so.0", "elfType" : 3 }, { "b" : "7F091FEB9000", "path" : "/lib64/librt.so.1", "elfType" : 3 }, { "b" : "7F091FCB5000", "path" : "/lib64/libdl.so.2", "elfType" : 3 }, { "b" : "7F091F9B1000", "path" : "/usr/lib64/libstdc++.so.6", "elfType" : 3 }, { "b" : "7F091F6B3000", "path" : "/lib64/libm.so.6", "elfType" : 3 }, { "b" : "7F091F49D000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3 }, { "b" : "7F091F0F8000", "path" : "/lib64/libc.so.6", "elfType" : 3 }, { "b" : "7F09202DD000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3 } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x29) [0xf41659]
 mongod(_ZN5mongo29reportOutOfMemoryErrorAndExitEv+0x49) [0xf40f39]
 mongod(_ZN5mongo11mongoMallocEm+0x22) [0xecef02]
 mongod(_ZN5mongo11_BufBuilderINS_16TrivialAllocatorEEC1Ei+0x13) [0x84d703]
 mongod(_ZN5mongo15DistinctCommand3runEPNS_16OperationContextERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x21D) [0x93b4cd]
 mongod(_ZN5mongo12_execCommandEPNS_16OperationContextEPNS_7CommandERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x34) [0x9bbcf4]
 mongod(_ZN5mongo7Command11execCommandEPNS_16OperationContextEPS0_iPKcRNS_7BSONObjERNS_14BSONObjBuilderEb+0xC13) [0x9bcc33]
 mongod(_ZN5mongo12_runCommandsEPNS_16OperationContextEPKcRNS_7BSONObjERNS_11_BufBuilderINS_16TrivialAllocatorEEERNS_14BSONObjBuilderEbi+0x28B) [0x9bd82b]
 mongod(_ZN5mongo8runQueryEPNS_16OperationContextERNS_7MessageERNS_12QueryMessageERKNS_15NamespaceStringERNS_5CurOpES3_b+0x755) [0xb8d1e5]
 mongod(_ZN5mongo16assembleResponseEPNS_16OperationContextERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortEb+0xB19) [0xa9f279]
 mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0xE0) [0x7e7220]
 mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x32B) [0xeff71b]
 libpthread.so.0(+0x7F18) [0x7f09200c8f18]
 libc.so.6(clone+0x6D) [0x7f091f1dab9d]
-----  END BACKTRACE  -----

Comment by Quentin Conner [ 13/Feb/15 ]

HTML telemetry is attached (open in Chrome for best results, then hit 9).

A PNG screen shot with partial telemetry is also attached.

Comment by Quentin Conner [ 13/Feb/15 ]

The OOM symptom is still present intermittently in pre-rc9 at git hash 79492d9cc1885d74b31b5fe24194dbc227096d6e, using the nightly build from 12 February. Will retest with rc9 when it becomes available.

serverStatus() telemetry for the 79492d9 nightly build is attached as timeseries-ec2-c3_8xl_sysbench_execute_full_cache_oom.html.

A portion of that telemetry is shown in the attached timeseries-ec2-c3_8xl_sysbench_execute_full_cache_oom.png.

Comment by Quentin Conner [ 12/Feb/15 ]

The symptom is intermittent. Observed in one of three trials.
Moving on to other workloads with this equipment now...

Comment by Quentin Conner [ 11/Feb/15 ]

From the kernel's perspective (/var/log/messages):

Feb 11 05:16:47 slave-4 kernel: mongod invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Feb 11 05:16:47 slave-4 kernel: mongod cpuset=/ mems_allowed=0-1
Feb 11 05:16:47 slave-4 kernel: Pid: 26834, comm: mongod Not tainted 2.6.32-431.el6.x86_64 #1
Feb 11 05:16:47 slave-4 kernel: Call Trace:
Feb 11 05:16:47 slave-4 kernel: [<ffffffff810d05b1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff81122960>] ? dump_header+0x90/0x1b0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff8129032d>] ? __bitmap_intersects+0x1d/0xa0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff8122798c>] ? security_real_capable_noaudit+0x3c/0x70
Feb 11 05:16:47 slave-4 kernel: [<ffffffff81122de2>] ? oom_kill_process+0x82/0x2a0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff81122d21>] ? select_bad_process+0xe1/0x120
Feb 11 05:16:47 slave-4 kernel: [<ffffffff81123220>] ? out_of_memory+0x220/0x3c0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff8112fb3c>] ? __alloc_pages_nodemask+0x8ac/0x8d0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff811651c9>] ? alloc_page_interleave+0x39/0x90
Feb 11 05:16:47 slave-4 kernel: [<ffffffff81167afc>] ? alloc_pages_current+0x10c/0x110
Feb 11 05:16:47 slave-4 kernel: [<ffffffff8111fd57>] ? __page_cache_alloc+0x87/0x90
Feb 11 05:16:47 slave-4 kernel: [<ffffffff8111f73e>] ? find_get_page+0x1e/0xa0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff81121695>] ? generic_file_aio_read+0x585/0x700
Feb 11 05:16:47 slave-4 kernel: [<ffffffff81188dba>] ? do_sync_read+0xfa/0x140
Feb 11 05:16:47 slave-4 kernel: [<ffffffff810a07c8>] ? up_read+0x18/0x30
Feb 11 05:16:47 slave-4 kernel: [<ffffffff8109b2a0>] ? autoremove_wake_function+0x0/0x40
Feb 11 05:16:47 slave-4 kernel: [<ffffffff812263c6>] ? security_file_permission+0x16/0x20
Feb 11 05:16:47 slave-4 kernel: [<ffffffff811896a5>] ? vfs_read+0xb5/0x1a0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff811899d2>] ? sys_pread64+0x82/0xa0
Feb 11 05:16:47 slave-4 kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Feb 11 05:16:47 slave-4 kernel: Mem-Info:
Feb 11 05:16:47 slave-4 kernel: Node 0 DMA per-cpu:
Feb 11 05:16:47 slave-4 kernel: CPU    0: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    1: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    2: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    3: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    4: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    5: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    6: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    7: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    8: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    9: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU   10: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU   11: hi:    0, btch:   1 usd:   0
Feb 11 05:16:47 slave-4 kernel: Node 0 DMA32 per-cpu:
Feb 11 05:16:47 slave-4 kernel: CPU    0: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    1: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    2: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    3: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    4: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    5: hi:  186, btch:  31 usd:  30
Feb 11 05:16:47 slave-4 kernel: CPU    6: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    7: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    8: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    9: hi:  186, btch:  31 usd:  30
Feb 11 05:16:47 slave-4 kernel: CPU   10: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU   11: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: Node 0 Normal per-cpu:
Feb 11 05:16:47 slave-4 kernel: CPU    0: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    1: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    2: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    3: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    4: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    5: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    6: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    7: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    8: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    9: hi:  186, btch:  31 usd:  29
Feb 11 05:16:47 slave-4 kernel: CPU   10: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU   11: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: Node 1 Normal per-cpu:
Feb 11 05:16:47 slave-4 kernel: CPU    0: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    1: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    2: hi:  186, btch:  31 usd:  36
Feb 11 05:16:47 slave-4 kernel: CPU    3: hi:  186, btch:  31 usd:  28
Feb 11 05:16:47 slave-4 kernel: CPU    4: hi:  186, btch:  31 usd:  14
Feb 11 05:16:47 slave-4 kernel: CPU    5: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    6: hi:  186, btch:  31 usd:  30
Feb 11 05:16:47 slave-4 kernel: CPU    7: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    8: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU    9: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU   10: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: CPU   11: hi:  186, btch:  31 usd:   0
Feb 11 05:16:47 slave-4 kernel: active_anon:23225295 inactive_anon:1240638 isolated_anon:576
Feb 11 05:16:47 slave-4 kernel: active_file:193 inactive_file:373 isolated_file:447
Feb 11 05:16:47 slave-4 kernel: unevictable:0 dirty:0 writeback:0 unstable:0
Feb 11 05:16:47 slave-4 kernel: free:71811 slab_reclaimable:5826 slab_unreclaimable:10322
Feb 11 05:16:47 slave-4 kernel: mapped:440 shmem:0 pagetables:50680 bounce:0
Feb 11 05:16:47 slave-4 kernel: Node 0 DMA free:15488kB min:12kB low:12kB high:16kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15076kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Feb 11 05:16:47 slave-4 kernel: lowmem_reserve[]: 0 2991 48441 48441
Feb 11 05:16:47 slave-4 kernel: Node 0 DMA32 free:184516kB min:2780kB low:3472kB high:4168kB active_anon:1515720kB inactive_anon:505176kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):128kB isolated(file):124kB present:3063584kB mlocked:0kB dirty:0kB writeback:4kB mapped:24kB shmem:0kB slab_reclaimable:864kB slab_unreclaimable:216kB kernel_stack:0kB pagetables:3992kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Feb 11 05:16:47 slave-4 kernel: lowmem_reserve[]: 0 0 45450 45450
Feb 11 05:16:47 slave-4 kernel: Node 0 Normal free:42176kB min:42248kB low:52808kB high:63372kB active_anon:44469264kB inactive_anon:2223260kB active_file:600kB inactive_file:264kB unevictable:0kB isolated(anon):1408kB isolated(file):384kB present:46540800kB mlocked:0kB dirty:0kB writeback:0kB mapped:1116kB shmem:0kB slab_reclaimable:10580kB slab_unreclaimable:22896kB kernel_stack:2472kB pagetables:96412kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Feb 11 05:16:47 slave-4 kernel: lowmem_reserve[]: 0 0 0 0
Feb 11 05:16:47 slave-4 kernel: Node 1 Normal free:45064kB min:45064kB low:56328kB high:67596kB active_anon:46916196kB inactive_anon:2234116kB active_file:172kB inactive_file:856kB unevictable:0kB isolated(anon):768kB isolated(file):896kB present:49643520kB mlocked:0kB dirty:84kB writeback:0kB mapped:620kB shmem:0kB slab_reclaimable:11860kB slab_unreclaimable:18176kB kernel_stack:912kB pagetables:102316kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:64 all_unreclaimable? no
Feb 11 05:16:47 slave-4 kernel: lowmem_reserve[]: 0 0 0 0
Feb 11 05:16:47 slave-4 kernel: Node 0 DMA: 2*4kB 1*8kB 1*16kB 1*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15488kB
Feb 11 05:16:47 slave-4 kernel: Node 0 DMA32: 239*4kB 141*8kB 224*16kB 174*32kB 81*64kB 75*128kB 60*256kB 42*512kB 23*1024kB 6*2048kB 21*4096kB = 184740kB
Feb 11 05:16:47 slave-4 kernel: Node 0 Normal: 9548*4kB 131*8kB 1*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 43352kB
Feb 11 05:16:47 slave-4 kernel: Node 1 Normal: 9932*4kB 286*8kB 7*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 46224kB
Feb 11 05:16:47 slave-4 kernel: 142914 total pagecache pages
Feb 11 05:16:47 slave-4 kernel: 141768 pages in swap cache
Feb 11 05:16:47 slave-4 kernel: Swap cache stats: add 3496866, delete 3355098, find 1110552/1271814
Feb 11 05:16:47 slave-4 kernel: Free swap  = 0kB
Feb 11 05:16:47 slave-4 kernel: Total swap = 4194296kB
Feb 11 05:16:47 slave-4 kernel: 25165808 pages RAM
Feb 11 05:16:47 slave-4 kernel: 401628 pages reserved
Feb 11 05:16:47 slave-4 kernel: 1489 pages shared
Feb 11 05:16:47 slave-4 kernel: 24685540 pages non-shared
Feb 11 05:16:47 slave-4 kernel: [ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name
Feb 11 05:16:47 slave-4 kernel: [  606]     0   606     2733        0   0     -17         -1000 udevd
Feb 11 05:16:47 slave-4 kernel: [  965]     0   965     2731        0   3     -17         -1000 udevd
Feb 11 05:16:47 slave-4 kernel: [  969]     0   969     2733        0   0     -17         -1000 udevd
Feb 11 05:16:47 slave-4 kernel: [ 1230]     0  1230     6910       31   8     -17         -1000 auditd
Feb 11 05:16:47 slave-4 kernel: [ 1246]     0  1246    62272       90   6       0             0 rsyslogd
Feb 11 05:16:47 slave-4 kernel: [ 1258]    81  1258     5351        1   6       0             0 dbus-daemon
Feb 11 05:16:47 slave-4 kernel: [ 1294]  1055  1294    30001      101   2       0             0 mqexec
Feb 11 05:16:47 slave-4 kernel: [ 1308]     0  1308    16651        9   6     -17         -1000 sshd
Feb 11 05:16:47 slave-4 kernel: [ 1316]    38  1316     7681       41   6       0             0 ntpd
Feb 11 05:16:47 slave-4 kernel: [ 1392]     0  1392    20318       19   0       0             0 master
Feb 11 05:16:47 slave-4 kernel: [ 1400]     0  1400    29325       22   0       0             0 crond
Feb 11 05:16:47 slave-4 kernel: [ 1403]    89  1403    20381       34   3       0             0 qmgr
Feb 11 05:16:47 slave-4 kernel: [ 1413]  1036  1413    35372       97   1       0             0 munin-node
Feb 11 05:16:47 slave-4 kernel: [ 1452]     0  1452     5385        0   0       0             0 atd
Feb 11 05:16:47 slave-4 kernel: [ 1780]     0  1780    70679      976   4       0             0 chef-client
Feb 11 05:16:47 slave-4 kernel: [ 1809]     0  1809     1020        1   0       0             0 agetty
Feb 11 05:16:47 slave-4 kernel: [ 1810]     0  1810     1016        1   1       0             0 mingetty
Feb 11 05:16:47 slave-4 kernel: [ 1812]     0  1812     1016        1   3       0             0 mingetty
Feb 11 05:16:47 slave-4 kernel: [ 1814]     0  1814     1016        1   1       0             0 mingetty
Feb 11 05:16:47 slave-4 kernel: [ 1816]     0  1816     1016        1   8       0             0 mingetty
Feb 11 05:16:47 slave-4 kernel: [ 1818]     0  1818     1016        1   1       0             0 mingetty
Feb 11 05:16:47 slave-4 kernel: [ 1820]     0  1820     1016        1   2       0             0 mingetty
Feb 11 05:16:47 slave-4 kernel: [ 1822]    89  1822    20337       26   2       0             0 tlsmgr
Feb 11 05:16:47 slave-4 kernel: [ 3187]  9061  3187     6818      244   1       0             0 tmux
Feb 11 05:16:47 slave-4 kernel: [ 3188]  9061  3188    27109        7   1       0             0 bash
Feb 11 05:16:47 slave-4 kernel: [ 3277]  9061  3277    27109        7   1       0             0 bash
Feb 11 05:16:47 slave-4 kernel: [ 6609]     0  6609   166189      521   6       0             0 salt-minion
Feb 11 05:16:47 slave-4 kernel: [24919]  9061 24919 25370650 24167174   2       0             0 mongod
Feb 11 05:16:47 slave-4 kernel: [24934]  9061 24934    26516       38   6       0             0 run.simple.bash
Feb 11 05:16:47 slave-4 kernel: [25160]    89 25160    20338       17   5       0             0 pickup
Feb 11 05:16:47 slave-4 kernel: [26695]  9061 26695  7927842   155568   6       0             0 java
Feb 11 05:16:47 slave-4 kernel: [26696]  9061 26696    25228       24   4       0             0 tee
Feb 11 05:16:47 slave-4 kernel: Out of memory: Kill process 24919 (mongod) score 977 or sacrifice child
Feb 11 05:16:47 slave-4 kernel: Killed process 24919, UID 9061, (mongod) total-vm:101482600kB, anon-rss:96667388kB, file-rss:1372kB
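
For reference, the rss column in the task dump above is counted in 4 kB pages, so mongod's resident set at the time of the kill works out to roughly 24167174 pages * 4 kB/page = 96,668,696 kB (about 96.7 GB), which matches the anon-rss:96667388kB figure in the kill message and is essentially the whole machine (25165808 pages of RAM, about 96 GiB).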
