[DOCS-9166] How to handle loading data larger than Spark memory Created: 18/Oct/16  Updated: 18/May/21  Resolved: 18/May/21

Status: Closed
Project: Documentation
Component/s: Spark Connector
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Docs Collector User (Inactive) Assignee: Unassigned
Resolution: Done Votes: 0
Labels: collector-298ba4e7
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Location: https://docs.mongodb.com/spark-connector/faq/
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
Referrer: https://www.mongodb.com/products/spark-connector
Screen Resolution: 1440 x 900


Participants:
Days since reply: 2 years, 38 weeks, 1 day
Epic Link: DOCSP-6205

 Description   

Hi Support,

I have a question about the MongoDB connector for Spark. If I have a large collection in MongoDB whose data size is larger than the total memory of the Spark cluster, how is that handled? Could it throw an OOM error? If so, how can I solve it? Should I add some configuration in ReadConfig?

Thanks
Yin
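
For context (the ticket was closed without a written answer): the connector does not load a whole collection into memory at once; it splits the collection into partitions, and the partitioning behavior can be tuned through ReadConfig. Below is a minimal sketch, assuming the Scala API of the Spark Connector 2.x. The connection URI, database/collection names, app name, and object name are placeholders, and the partitioner settings shown are illustrative rather than a prescribed fix.

import com.mongodb.spark.MongoSpark
import com.mongodb.spark.config.ReadConfig
import org.apache.spark.sql.SparkSession

object ReadLargeCollection {
  def main(args: Array[String]): Unit = {
    // Placeholder URI; point this at your own deployment and namespace.
    val spark = SparkSession.builder()
      .appName("read-large-collection")
      .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/test.myLargeCollection")
      .getOrCreate()

    // Override partitioning in a ReadConfig: a smaller partitionSizeMB produces
    // more, smaller partitions, so each executor task holds less data in memory.
    val readConfig = ReadConfig(
      Map(
        "partitioner" -> "MongoSamplePartitioner",
        "partitionerOptions.partitionSizeMB" -> "64"
      ),
      Some(ReadConfig(spark))
    )

    val df = MongoSpark.load(spark, readConfig)

    // Operations like this process the data partition by partition; avoid
    // collect() or caching the full dataset if it cannot fit in cluster memory.
    println(df.count())

    spark.stop()
  }
}

With settings like these, out-of-memory errors usually come from caching or collecting the full dataset on the driver rather than from the read itself, so the partition size and the actions applied to the DataFrame are the first things to check.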



 Comments   
Comment by Anthony Sansone (Inactive) [ 18/May/21 ]

Per ross.lawley, closing this ticket as not needed.
