[SERVER-12087] GSSAPI Auth use ~10% more memory during perf test Created: 13/Dec/13  Updated: 11/Jul/16  Resolved: 30/Jan/14

Status: Closed
Project: Core Server
Component/s: Security
Affects Version/s: 2.5.3
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Rui Zhang (Inactive) Assignee: Andy Schwerin
Resolution: Done Votes: 0
Labels: 26qa
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

2.5.5-pre (2013-12-05) & enterprise-e0a4cb5c5d6421805632d680c79e5bbc06853e99-2013-12-11


Attachments: PNG File mem_comparison_248_vs_255p.png    
Issue Links:
Depends
Operating System: ALL
Participants:

 Description   

During QA-402 testing we found that 2.5.5-pre uses 10-15% more memory than 2.4.8 when running with GSSAPI auth. Here are the details:

Summary of resident memory usage:

Version                        RSS Memory    % increase
2.4.8                          134.98M       -
2.5.5-pre (2013-12-05)         148.02M       9.66%
2.4.8 (SSL)                    142.13M       -
2.5.5-pre (2013-12-05, SSL)    159.37M       12.13%

Virtual memory usage also shows a large increase; here is the output from the tests (RSS/VSZ averages in KB):

GSSAPI Auth/non-SSL:

2.5.5-pre GSSAPI RSS uses 9.48677204876% more than 2.4.8 GSSAPI | 153033.71855 vs 139773.705706
2.5.5-pre GSSAPI VSZ uses 23.4819475555% more than 2.4.8 GSSAPI | 879632.051173 vs 712356.800801

GSSAPI SSL:
2.5.5-pre GSSAPI/SSL RSS uses 11.9241130121% more than 2.4.8 GSSAPI/SSL | 164746.567901 vs 147194.883629
2.5.5-pre GSSAPI/SSL VSZ uses 23.1469306126% more than 2.4.8 GSSAPI/SSL | 886774.061728 vs 720094.327416

For comparison, the local auth test does not show comparable memory growth:

2.5.5-pre Local RSS uses 2.9495666401 % more than 2.4.8 Local | 46724.4615385 vs 45385.7777778
2.5.5-pre Local VSZ uses 0.520006033054 % more than 2.4.8 Local | 785475.538462 vs 781412.148148
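
For reference, a minimal sketch of how averages like the ones above could be collected and compared, assuming the mongod PID is known and RSS/VSZ are sampled with ps (which reports both in KB on Linux); this is an illustration, not the original test harness:

import subprocess, time

def sample_memory(pid, samples=10, interval=1.0):
    # Average RSS and VSZ (KB) of the given process over several ps samples.
    rss_vals, vsz_vals = [], []
    for _ in range(samples):
        out = subprocess.check_output(
            ["ps", "-o", "rss=,vsz=", "-p", str(pid)]).decode().split()
        rss_vals.append(int(out[0]))
        vsz_vals.append(int(out[1]))
        time.sleep(interval)
    return sum(rss_vals) / len(rss_vals), sum(vsz_vals) / len(vsz_vals)

def pct_increase(new, old):
    # Percent increase of `new` over `old`, as in the comparisons above.
    return (new - old) / old * 100.0

# Reproducing one of the reported figures from the averages above (KB):
print(pct_increase(153033.71855, 139773.705706))  # ~9.49% RSS growth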



 Comments   
Comment by Andy Schwerin [ 30/Jan/14 ]

This appears to be per-connection library overhead. We might be able to reduce it with extra work, but the overhead is pretty low, so we'll let it go for now.

Comment by Rui Zhang (Inactive) [ 28/Jan/14 ]

schwerin, I will repeat the same test with 2.4.8 first.

Comment by Rui Zhang (Inactive) [ 28/Jan/14 ]

schwerin, got it. I will run it and get back today.

Comment by Andy Schwerin [ 28/Jan/14 ]

rui.zhang, what does memory usage look like if you have 10 batches of 1,000 connections? 100 batches of 1,000 connections? I'm trying to determine whether this extra memory footprint is a function of something that rises with the number of simultaneous connections, and if so, what the per-connection byte overhead is.
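
As an illustration of the estimate described above (not part of the original test), the per-connection overhead can be derived from the RSS delta between two connection counts; the figures in the example are placeholders, not measured values:

def per_connection_overhead_bytes(rss_low_kb, conns_low, rss_high_kb, conns_high):
    # Per-connection overhead (bytes) from the RSS delta between two
    # simultaneous-connection counts, with RSS given in KB.
    return (rss_high_kb - rss_low_kb) * 1024.0 / (conns_high - conns_low)

# Placeholder numbers: RSS at 10 x 1,000 vs 100 x 1,000 simultaneous connections.
print(per_connection_overhead_bytes(150000, 10 * 1000, 450000, 100 * 1000))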

Comment by Daniel Pasette (Inactive) [ 26/Jan/14 ]

rui.zhang, do you have more data here?

Comment by Rui Zhang (Inactive) [ 15/Jan/14 ]

Had CRAM-MD5 set up, but our driver (pymongo) does not support it. Got a patch from Bernie and tested it this afternoon; we were getting "AuthenticationFailed SASL(-13): authentication failure: incorrect digest response". Will continue working with Bernie to get it running.

A quick check of initial memory usage did not reveal anything.

Re-ran the test with the latest nightly build; memory usage still shows a similar trend. Will crunch the data a bit more.

Comment by Andy Schwerin [ 10/Jan/14 ]

rui.zhang, have you had a chance to run with CRAM-MD5, yet?

Comment by Andy Schwerin [ 13/Dec/13 ]

rui.zhang, does memory usage grow as a function of the number of simultaneous authentications in progress, as a function of time, or as a function of the number of open connections? Does it grow over time if you let the test run for hours? Also, can you run the tests using the CRAM-MD5 auth mechanism? It works just like MONGODB-CR, but it is a SASL mechanism and so uses the SASL machinery. I'm hoping to isolate the source of the memory overhead.
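
A hedged sketch for the "does it grow with time" question, assuming the mongod PID is known; the interval, duration, and output path are arbitrary choices, not part of the original test:

import subprocess, time

def log_rss(pid, hours=4, interval_s=60, path="rss_over_time.csv"):
    # Append (elapsed seconds, RSS in KB) rows so the trend can be plotted later.
    start = time.time()
    with open(path, "w") as f:
        f.write("elapsed_s,rss_kb\n")
        while time.time() - start < hours * 3600:
            rss = int(subprocess.check_output(
                ["ps", "-o", "rss=", "-p", str(pid)]).decode().strip())
            f.write("%d,%d\n" % (time.time() - start, rss))
            f.flush()
            time.sleep(interval_s)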
