[DOCS-9166] How to handle loading data larger than Spark memory Created: 18/Oct/16 Updated: 18/May/21 Resolved: 18/May/21 |
|
| Status: | Closed |
| Project: | Documentation |
| Component/s: | Spark Connector |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Docs Collector User (Inactive) | Assignee: | Unassigned |
| Resolution: | Done | Votes: | 0 |
| Labels: | collector-298ba4e7 | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Location: https://docs.mongodb.com/spark-connector/faq/ | ||
| Participants: | |
| Days since reply: | 2 years, 38 weeks, 1 day ago |
| Epic Link: | DOCSP-6205 |
| Description |
|
Hi Support, I have a question about the MongoDB connector for Spark. If I have a large collection in MongoDB whose data size is larger than the total memory of the Spark cluster, how is that handled? Could it throw an OOM error? If so, how can I solve it? Should I add some configuration in ReadConfig? Thanks |
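For context, below is a minimal sketch of the kind of ReadConfig tuning the question refers to, assuming the Scala API of the MongoDB Spark Connector 2.x; the connection string, collection name, and the 32 MB partition size are illustrative assumptions, not values from the ticket. The connector splits the collection into partitions and Spark processes them task by task, so the whole collection does not need to fit in cluster memory at once; requesting smaller partitions reduces the memory each task holds.

```scala
import org.apache.spark.sql.SparkSession
import com.mongodb.spark.MongoSpark
import com.mongodb.spark.config.ReadConfig

object ReadLargeCollection {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("read-large-collection")
      // Hypothetical connection string; replace with your own deployment.
      .config("spark.mongodb.input.uri", "mongodb://localhost:27017/test.largeCollection")
      .getOrCreate()

    // Ask the partitioner for smaller partitions so each Spark task reads
    // less data at a time (64 MB is the connector default; 32 MB here is an
    // illustrative, not prescriptive, choice).
    val readConfig = ReadConfig(
      Map(
        "partitioner" -> "MongoSamplePartitioner",
        "partitionerOptions.partitionSizeMB" -> "32"
      ),
      Some(ReadConfig(spark.sparkContext))
    )

    // Each partition is fetched and processed independently rather than the
    // whole collection being pulled into memory in one go.
    val df = MongoSpark.load(spark.sparkContext, readConfig).toDF()
    println(s"Partitions: ${df.rdd.getNumPartitions}, rows: ${df.count()}")

    spark.stop()
  }
}
```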
| Comments |
| Comment by Anthony Sansone (Inactive) [ 18/May/21 ] |
|
Per ross.lawley, closing this ticket as not needed. |