[SERVER-57855] Performance degradation in Docker when limiting resources with --cpus Created: 21/Jun/21  Updated: 22/Feb/22  Resolved: 22/Feb/22

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: fred chen Assignee: Edwin Zhou
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
is duplicated by SERVER-57856 Performance degradation in docker whe... Closed
Operating System: ALL
Participants:

 Description   

I deployed MongoDB with Docker for testing, using the following command:

docker run --rm --name mongodb -p 27017:27017 -v /mnt/mongo/data:/data/db -m 2g --cpus=2 -d mongo:latest

The host VM runs Ubuntu 18.04, with 4 cores and 16 GB of memory.

The test works as follows:

Insert 2,000,000 documents into MongoDB with Python.

Each document looks like this:

{
    event_time: '2021-06-02T11:40:26',
    src_addr: '192.168.75.190',
    src_port: 58612,
    dst_addr: '161.125.178.34',
    dst_port: 786,
    url: 'http://www.orozco.net/',
    mime_type: 'text/css',
    md5: '6aa1af2ba586a0dfae3f880e155f9fdb',
    file_name: 'apply.jpeg',
    server_token: 'Varnish',
    user_agent: 'Mozilla/5.0 (compatible; MSIE 8.0; Windows 98; Trident/4.1)'
}

Indexes were set on: event_time, src_addr, src_port, dst_addr, dst_port and file_name.
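For reference, the setup above (the document shape plus one single-field index per listed field) can be sketched with pymongo; the database and collection names here are illustrative, not from the original script:

```python
def make_doc():
    """One synthetic document matching the shape above.

    The real test presumably randomized the field values; a fixed
    example is enough to show the schema and index targets.
    """
    return {
        "event_time": "2021-06-02T11:40:26",
        "src_addr": "192.168.75.190",
        "src_port": 58612,
        "dst_addr": "161.125.178.34",
        "dst_port": 786,
        "url": "http://www.orozco.net/",
        "mime_type": "text/css",
        "md5": "6aa1af2ba586a0dfae3f880e155f9fdb",
        "file_name": "apply.jpeg",
        "server_token": "Varnish",
        "user_agent": "Mozilla/5.0 (compatible; MSIE 8.0; Windows 98; Trident/4.1)",
    }

def run_test(uri="mongodb://localhost:27017", n=2_000_000, batch=1000):
    # Requires: pip install pymongo (imported lazily so the sketch can
    # be read without a driver installed).
    from pymongo import MongoClient, ASCENDING

    coll = MongoClient(uri)["test"]["events"]
    # One single-field index per field named in the description.
    for field in ("event_time", "src_addr", "src_port",
                  "dst_addr", "dst_port", "file_name"):
        coll.create_index([(field, ASCENDING)])
    # Batched inserts; throughput = n / elapsed gives the reported rate.
    for _ in range(0, n, batch):
        coll.insert_many([make_doc() for _ in range(batch)])
```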

The insert performance is extremely slow: about 200 documents/s.

I adjusted the host VM to 2 cores and 16 GB of memory and redid the test, and something interesting happened: the insert performance rose to about 15,000/s.

I think the key factors are the host VM's core count and the indexes.

When I remove the indexes, the performance is about 25,000/s even when the limited core count is less than the host VM's.

When the indexes are in place, performance becomes very slow whenever the limited core count does not match the host VM's.
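One plausible mechanism behind this (an assumption on my part, not confirmed in this ticket): Docker enforces `--cpus=2` as a CFS quota, but a process inside the container still sees all of the host's cores via `nproc`/`os.cpu_count()`, so software that sizes its thread pools from the visible core count can oversubscribe the throttled quota. A sketch of the mismatch, reading the cgroup v1 interface files:

```python
import os

def effective_cpus(quota_us, period_us, visible):
    """Effective CPU budget under a cgroup CFS quota.

    quota_us/period_us come from cpu.cfs_quota_us / cpu.cfs_period_us
    (cgroup v1); a quota of -1 means "no limit".
    """
    if quota_us <= 0:
        return float(visible)
    return min(quota_us / period_us, visible)

def visible_vs_limit():
    """Compare what a naive process sees with what the cgroup allows."""
    visible = os.cpu_count()  # inside a container, this is the HOST's cores
    try:
        with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
            quota = int(f.read())
        with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f:
            period = int(f.read())
    except OSError:
        return visible, float(visible)  # no cgroup v1 CPU controller here
    return visible, effective_cpus(quota, period, visible)
```

With `--cpus=2` Docker sets a quota of 200000 µs per 100000 µs period, so `effective_cpus` reports 2.0 while `os.cpu_count()` still reports 4 on a 4-core host, which matches the pattern of degradation only appearing when the limit and the host core count differ.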

Thanks!

Fred



 Comments   
Comment by Edwin Zhou [ 22/Feb/22 ]

Hi chw_throx@163.com,

We haven’t heard back from you for some time, so I’m going to close this ticket. If this is still an issue for you, please provide additional information and we will reopen the ticket.

Best,
Edwin

Comment by Edwin Zhou [ 14/Feb/22 ]

Hi chw_throx@163.com,

We still need additional information to diagnose the problem. If this is still an issue for you, would you please provide updated diagnostics that demonstrate the regression you're seeing?

Best,
Edwin

Comment by Edwin Zhou [ 27/Jan/22 ]

Hi chw_throx@163.com,

I apologize for the extended delay in this investigation. The diagnostic data that I have for this ticket no longer appears to match the scenarios that you've described. In particular, I have data from your tests run with the following resources:

  1. 4 cores and 16gb memory
  2. 4 cores and 8gb memory

Since we're investigating a regression related to changing the number of cores, could you re-collect the data from your tests and upload it to this upload portal?

Again, I sincerely apologize for not following up earlier in this investigation.

Gratefully,
Edwin

Comment by Eric Sedor [ 28/Jul/21 ]

Thanks Fred, we'll take a look.

Comment by fred chen [ 22/Jul/21 ]

Hi Eric:

Sorry for my late response. My results were late because I found something interesting.

They differ from the previous ones.

 

I set up a host VM with 4 logical cores and 16 GB of memory.

Test1:

Run MongoDB with a 1-core and 2 GB memory limit.

As I can see using "docker stats", the memory usage stays at about 500 MB (1/4 of 2 GB), and then it begins hitting disk (BLOCK I/O rises sharply). Since it was taking too long, I stopped the test. Based on my previous testing, the QPS would be about 200 records/s.

Test2:

Run MongoDB with a 1-core and 8 GB memory limit.

It uses about 1.2 GB of memory and finishes quickly. The QPS is about 6,000.

Test3:

Run MongoDB with a 1-core and 16 GB memory limit.

It also uses about 1.2 GB of memory and finishes quickly. The QPS is about 12,000.

Test4:

Reset the host VM to 2 logical cores and 16 GB of memory.

It uses about 1.2 GB of memory and finishes quickly. The QPS is about 6,000.

 

It seems to be related to the memory setting.
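The ~500 MB plateau in Test1 is consistent with WiredTiger's documented default cache size: the larger of 256 MB and 50% of (detected RAM − 1 GB). A quick sanity check of that formula, assuming mongod detects the container's memory limit:

```python
def default_wt_cache_gb(ram_gb):
    """Default WiredTiger cache size in GB: max(256 MB, 50% of (RAM - 1 GB))."""
    return max(0.25, 0.5 * (ram_gb - 1))
```

For a 2 GB limit this gives 0.5 GB, matching the observed ~500 MB; once the indexes no longer fit in cache, eviction plausibly drives the heavy BLOCK I/O seen in Test1. The `--wiredTigerCacheSizeGB` option can override the default if you want to test this directly.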

I have uploaded the data for Test2 and Test3. Please help check.

 

Thanks

Fred


Comment by Eric Sedor [ 15/Jul/21 ]

Hi chw_throx@163.com, I just wanted to see if you've had a chance to collect logs and diagnostic data and upload them.

Comment by Eric Sedor [ 02/Jul/21 ]

Thanks Fred!

Comment by fred chen [ 02/Jul/21 ]

Hi Eric:

 

Sorry for my late response; I haven't worked on this in recent days.

What you said is true: the performance drops about 90% when the VM cores increase from 2 to 4.

I'll collect the logs next week.

 

Thanks

Fred

Comment by Eric Sedor [ 01/Jul/21 ]

Hi chw_throx@163.com, I wanted to check in to clarify the above but also to go ahead and open this upload portal for you. Files uploaded to this portal are visible only to MongoDB employees and are routinely deleted after some time.

For each test, we'd be interested in reviewing an archive (tar or zip) of the mongod.log files and the $dbpath/diagnostic.data directory (the contents are described here).
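For packaging those files, here is a minimal sketch using Python's standard library; the dbpath, log path, and archive name are illustrative (the image's default dbpath is /data/db), so adjust them to the actual deployment:

```python
import shutil
import tempfile
from pathlib import Path

def archive_diagnostics(dbpath, log_path, out_stem):
    """Stage $dbpath/diagnostic.data plus the mongod log, then tar them.

    Returns the path of the created .tar.gz archive.
    """
    with tempfile.TemporaryDirectory() as tmp:
        stage = Path(tmp) / "mongodb-diagnostics"
        # copytree creates the staging directory and its parents.
        shutil.copytree(Path(dbpath) / "diagnostic.data",
                        stage / "diagnostic.data")
        shutil.copy(log_path, stage / "mongod.log")
        return shutil.make_archive(out_stem, "gztar", root_dir=tmp)

# Example (paths are hypothetical):
# archive_diagnostics("/data/db", "/var/log/mongodb/mongod.log", "test2-diag")
```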

Comment by Eric Sedor [ 23/Jun/21 ]

Hi chw_throx@163.com,

I'd like to clarify the tests you are describing, for just the cases where the index(es) exist. It sounds like the test characteristics you're reporting are:

Test A
The host VM is a Ubuntu18.04, with 2 cores and 16G mem.
Docker is run with -m 2g --cpus=2
Insert rate seems to be 15000/s.

Test B
The host VM is a Ubuntu18.04, with 4 cores and 16G mem.
Docker is run with -m 2g --cpus=2
Insert rate seems to be 200/s.

If I understand this correctly, it sounds like you are saying you see a ~98% throughput reduction by increasing the number of cores on the VM hosting your docker image from 2 to 4. Or have I misunderstood?

Comment by fred chen [ 21/Jun/21 ]

mongo:latest was 4.4.6 when the testing was performed.

Generated at Thu Feb 08 05:42:58 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.