[DOCS-15666] [Spark] Description of giving partitioner options via SparkConf is not right Created: 04/Oct/22  Updated: 29/Oct/23  Resolved: 07/Oct/22

Status: Closed
Project: Documentation
Component/s: Spark Connector
Affects Version/s: None
Fix Version/s: None

Type: Typo Priority: Major - P3
Reporter: Mehdi El Hajami Assignee: Christopher Cho
Resolution: Fixed Votes: 0
Labels: spark
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File image-2022-10-04-10-45-11-885.png    
Participants:
Days since reply: 1 year, 17 weeks, 5 days ago
Epic Link: DOCSP-20538
Story Points: 1

 Description   

Hello,

In the MongoDB Spark Connector documentation, there seems to be a mistake. When configuration is passed via SparkConf, we are asked to use the prefix spark.mongodb.read.partitionerOptions instead of partitioner.options.

https://www.mongodb.com/docs/spark-connector/current/configuration/read/

So, for example, the documentation currently maps:

partitioner.options.partition.field -> spark.mongodb.read.partitionerOptions.partition.field

But looking at the connector's code, this appears to be wrong. The documentation should instead read:

If you use SparkConf to set the connector's read configurations, prefix each property with
spark.mongodb.read.partitioner.options. instead of partitioner.options.

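For illustration, here is a minimal PySpark sketch of the corrected prefix, assuming the v10 connector; the app name, connection URI, and namespace are placeholders, not from the ticket:

    from pyspark.sql import SparkSession

    # Partitioner options set through SparkConf use the prefix
    # "spark.mongodb.read.partitioner.options.", not
    # "spark.mongodb.read.partitionerOptions.".
    spark = (
        SparkSession.builder
        .appName("mongo-partitioner-example")  # placeholder app name
        .config("spark.mongodb.read.connection.uri",
                "mongodb://localhost:27017/test.coll")  # placeholder URI
        .config("spark.mongodb.read.partitioner",
                "com.mongodb.spark.sql.connector.read.partitioner.SamplePartitioner")
        .config("spark.mongodb.read.partitioner.options.partition.field", "_id")
        .getOrCreate()
    )

    df = spark.read.format("mongodb").load()
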
 Comments   
Comment by Christopher Cho [ 07/Oct/22 ]

Thanks for reporting this issue, mehdi.elhajami.pro@gmail.com. We've updated the documentation to include the fix!

Comment by Christopher Cho [ 07/Oct/22 ]

https://github.com/mongodb/docs-spark-connector/pull/139
