- Type: Bug
- Resolution: Fixed
- Priority: Unknown
- Affects Version/s: None
- Component/s: Field Level Encryption
We currently have client-side encryption set up to encrypt all of our sensitive files stored in MongoDB/GridFS. However, we recently hit a problem where encrypting a single file of around 100-200 MB uses more than 2-3 GB of memory, and this keeps accumulating until the GC runs. The usage is not linear with file count: for around 100 files, for example, the spike only reaches roughly 10-15 GB. But since we run in Kubernetes/Docker, the GC does not seem to know the actual memory limit at which it needs to collect, and in some cases the pod ends up being deleted due to memory pressure on the node.
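Since the GC's view of the container limit seems central here, a minimal diagnostic sketch (assuming .NET Core 3.0 or later; the class name is made up for this example) can show what memory budget the runtime actually sees inside the pod:

using System;

class GcLimitCheck
{
    static void Main()
    {
        // GC.GetGCMemoryInfo() is available from .NET Core 3.0 onward.
        var info = GC.GetGCMemoryInfo();

        // Inside a memory-limited container, TotalAvailableMemoryBytes should
        // reflect the cgroup limit (by default the GC caps its heap around
        // 75% of that limit); if it reports the node's physical RAM instead,
        // the runtime is not seeing the pod's limit.
        Console.WriteLine($"GC total available:    {info.TotalAvailableMemoryBytes / (1024.0 * 1024):F0} MB");
        Console.WriteLine($"High memory threshold: {info.HighMemoryLoadThresholdBytes / (1024.0 * 1024):F0} MB");
    }
}

If the runtime does see the limit, the heap can also be capped explicitly via the standard System.GC.HeapHardLimitPercent runtime configuration setting, though whether that helps in this scenario is an open question.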
I’m not sure if we’re doing something wrong, or if there is a way to limit memory usage when encrypting these relatively large files. Below is the code we are using for the encryption:
var encryptOptions = new EncryptOptions(
    algorithm: EncryptionAlgorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic.ToString(),
    keyId: dataKeyId);

// Read the entire file into a single byte[] and wrap it as BSON binary.
using var reader = new BinaryReader(stream);
var data = new BsonBinaryData(reader.ReadBytes((int)stream.Length), BsonBinarySubType.Binary);

// This is the call that causes the memory spike.
var encryptedData = clientEncryption.Encrypt(data, encryptOptions, cancellationToken);

using var encryptedStream = new MemoryStream(encryptedData.Bytes);
await base.WriteFileAsync(fileName, fileMetadata, encryptedStream, cancellationToken);
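If the per-call spike comes from encrypting the whole file as one BsonBinaryData, a possible application-side workaround is to encrypt the file in bounded chunks so each Encrypt call only ever holds one chunk in memory. The sketch below is untested and illustrative only: ChunkedEncryptionSketch, EncryptInChunks, the 4 MB chunk size, and the length-prefix framing are all made up for this example, and the stored format is no longer a single encrypted blob, so decryption would need a matching chunked reader. Note also that deterministic encryption applied per chunk reveals which chunks repeat.

using System;
using System.IO;
using System.Threading;
using MongoDB.Bson;
using MongoDB.Driver.Encryption;

public static class ChunkedEncryptionSketch
{
    // Encrypt `source` in fixed-size chunks so that each Encrypt call only
    // holds one chunk (plus its ciphertext) in memory, instead of
    // materializing the whole file as a single byte[].
    public static void EncryptInChunks(
        ClientEncryption clientEncryption,
        Guid dataKeyId,
        Stream source,
        Stream destination,
        CancellationToken cancellationToken,
        int chunkSize = 4 * 1024 * 1024) // 4 MB; an illustrative value
    {
        var encryptOptions = new EncryptOptions(
            algorithm: EncryptionAlgorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic.ToString(),
            keyId: dataKeyId);

        var buffer = new byte[chunkSize];
        int bytesRead;
        while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Copy only the bytes actually read so the final, shorter
            // chunk is not padded with stale buffer contents.
            var chunk = new byte[bytesRead];
            Array.Copy(buffer, chunk, bytesRead);

            var encrypted = clientEncryption.Encrypt(
                new BsonBinaryData(chunk, BsonBinarySubType.Binary),
                encryptOptions,
                cancellationToken);

            // Length-prefix each ciphertext chunk so the chunks can be
            // split apart again for decryption (a made-up framing scheme
            // for this sketch, not a driver feature).
            destination.Write(BitConverter.GetBytes(encrypted.Bytes.Length), 0, sizeof(int));
            destination.Write(encrypted.Bytes, 0, encrypted.Bytes.Length);
        }
    }
}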
Created from this MongoDB Community Forums post.
- related to: CSHARP-5065 Dispose RawBsonDocument in ExplicitEncryptionLibMongoCryptController.UnwrapValue (Closed)