[SERVER-12087] GSSAPI Auth use ~10% more memory during perf test Created: 13/Dec/13 Updated: 11/Jul/16 Resolved: 30/Jan/14 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Security |
| Affects Version/s: | 2.5.3 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Rui Zhang (Inactive) | Assignee: | Andy Schwerin |
| Resolution: | Done | Votes: | 0 |
| Labels: | 26qa | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | 2.5.5-pre (2013-12-05) & enterprise-e0a4cb5c5d6421805632d680c79e5bbc06853e99-2013-12-11 |
| Operating System: | ALL |
| Description |
|
During QA-402 testing, we found that 2.5.5-pre uses 10-15% more resident memory than 2.4.8 when running with GSSAPI auth. Virtual memory usage also grew substantially. Summary of resident memory usage from the test runs (units as reported by the test harness):

| Auth mechanism | 2.5.5-pre RSS | 2.4.8 RSS | Increase |
| GSSAPI (non-SSL) | 153033.71855 | 139773.705706 | 9.48677204876% |
| Local (keyfile-based) | 46724.4615385 | 45385.7777778 | 2.9495666401% |

GSSAPI SSL: (data not shown). For comparison, the local-auth test does not show comparable memory growth. |
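The relative-increase percentages above follow directly from the raw RSS figures. A minimal sketch of the arithmetic (`pct_increase` is a helper name introduced here for illustration, not part of the test harness):

```python
def pct_increase(new, old):
    """Relative increase of `new` over `old`, in percent."""
    return (new - old) / old * 100.0

# RSS figures as reported in the ticket (units per the test harness)
gssapi = pct_increase(153033.71855, 139773.705706)
local = pct_increase(46724.4615385, 45385.7777778)

print(f"GSSAPI: {gssapi:.2f}% higher")  # ~9.49%
print(f"Local:  {local:.2f}% higher")   # ~2.95%
```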
| Comments |
| Comment by Andy Schwerin [ 30/Jan/14 ] |
|
This appears to be per-connection library overhead. We might be able to reduce it with extra work, but the overhead is fairly low, so we'll let it go for now. |
| Comment by Rui Zhang (Inactive) [ 28/Jan/14 ] |
|
schwerin, will repeat the same test with 2.4.8 first. |
| Comment by Rui Zhang (Inactive) [ 28/Jan/14 ] |
|
schwerin, got it; I will run and get back today. |
| Comment by Andy Schwerin [ 28/Jan/14 ] |
|
rui.zhang, what does memory usage look like if you have 10 batches of 1,000 connections? 100 batches of 1,000 connections? I'm trying to determine whether this extra memory footprint rises with the number of simultaneous connections, and if so, what the per-connection byte overhead is. |
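One way to answer the per-connection question from those batch runs: record RSS at each connection count and fit a line; the slope is the per-connection byte cost. A minimal sketch, where `per_connection_overhead` and the sample numbers are hypothetical illustrations, not measurements from this ticket:

```python
def per_connection_overhead(samples):
    """Least-squares slope of RSS (bytes) vs. open-connection count.

    `samples` is a list of (connection_count, rss_bytes) pairs taken
    while each batch of connections is held open.
    """
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    den = sum((x - mean_x) ** 2 for x, _ in samples)
    return num / den  # bytes per connection

# Hypothetical measurements: (open connections, mongod RSS in bytes)
samples = [(1_000, 2.00e9), (10_000, 2.30e9), (100_000, 5.00e9)]
print(f"~{per_connection_overhead(samples):,.0f} bytes/connection")
```

If the footprint is per-connection overhead, the fit should be close to linear; a poor fit would instead point at growth tied to time or to authentications in flight.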
| Comment by Daniel Pasette (Inactive) [ 26/Jan/14 ] |
|
rui.zhang, do you have more data here? |
| Comment by Rui Zhang (Inactive) [ 15/Jan/14 ] |
|
Had CRAM-MD5 set up, but we do not have support for it in our driver (pymongo). Got a patch from Bernie and tested it this afternoon; we were getting "AuthenticationFailed SASL(-13): authentication failure: incorrect digest response". Will continue working with Bernie to get it running. Checked initial memory usage only; it does not reveal anything. Re-ran the test with the latest nightly build; memory usage still shows a similar trend. Will crunch the data a bit more. |
| Comment by Andy Schwerin [ 10/Jan/14 ] |
|
rui.zhang, have you had a chance to run with CRAM-MD5, yet? |
| Comment by Andy Schwerin [ 13/Dec/13 ] |
|
rui.zhang, does memory usage grow as a function of the number of simultaneous authentications in progress, as a function of time, or as a function of the number of open connections? Does it grow with time if you let the test run for hours? Also, can you run the tests using the CRAM-MD5 auth mechanism? It works just like MONGODB-CR, but it is a SASL mechanism and so exercises the SASL machinery. I'm hoping to isolate the source of the memory overhead. |
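The grows-with-time question can be checked by sampling the mongod process's resident set periodically during a long run. A minimal sketch, Linux-only (it reads /proc); the function names and the sampling interval are illustrative choices, not part of the QA harness:

```python
import time

def parse_vmrss_kb(status_text):
    """Extract the VmRSS value (kB) from /proc/<pid>/status contents."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

def read_vmrss_kb(pid):
    """Current resident set size of `pid` in kB (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        return parse_vmrss_kb(f.read())

def watch_rss(pid, interval_s=60, count=120):
    """Collect (elapsed_seconds, rss_kb) samples every `interval_s` seconds."""
    start = time.monotonic()
    samples = []
    for _ in range(count):
        samples.append((time.monotonic() - start, read_vmrss_kb(pid)))
        time.sleep(interval_s)
    return samples
```

If RSS keeps climbing at a fixed connection count, the growth is time-driven (e.g. a leak per authentication); if it plateaus once connections stabilize, it is per-connection overhead.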