Spark connector 10.1.1 does not support microbatch processing


      1. What would you like to communicate to the user about this feature?
      2. Would you like the user to see examples of the syntax and/or executable code and its output?
      3. Which versions of the driver/connector does this apply to?

      Hi Team,

      We are trying out the AWS Glue --> PySpark connector for MongoDB Atlas. While trying out Spark streaming, we are getting the following error.

      Error: Data Source com.mongodb.spark.sql.connector.MongoTableProvider does not support microbatch processing.

      Let us know, if you need any further details.

      Appreciate your support.

      Attached are the error screenshot, the PySpark code used for streaming, and a high-level flow diagram.
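      For reference, a minimal PySpark sketch of the documented workaround on connector 10.1.x: streaming reads are backed by change streams and only the continuous trigger is supported, so using `.trigger(continuous=...)` instead of the default micro-batch trigger avoids the "does not support microbatch processing" error. The URI, database, collection, and schema below are placeholders, and the option names follow the 10.x connector docs as best recalled; verify them against the docs for your connector version.

      ```python
      from pyspark.sql import SparkSession
      from pyspark.sql.types import StructType, StructField, StringType

      # Hypothetical connection details -- substitute your own Atlas URI.
      spark = (
          SparkSession.builder
          .appName("mongo-continuous-stream")
          .config("spark.mongodb.read.connection.uri",
                  "mongodb+srv://<user>:<password>@<cluster>/")
          .getOrCreate()
      )

      # Streaming reads in connector 10.1.x cannot infer a schema, so one
      # is supplied explicitly (a placeholder two-field schema here).
      schema = StructType([
          StructField("_id", StringType()),
          StructField("status", StringType()),
      ])

      stream_df = (
          spark.readStream
          .format("mongodb")
          .schema(schema)
          .option("spark.mongodb.database", "test_db")        # placeholder
          .option("spark.mongodb.collection", "test_coll")    # placeholder
          .option("spark.mongodb.change.stream.publish.full.document.only",
                  "true")
          .load()
      )

      # The default micro-batch trigger raises the reported error on
      # 10.1.x, so a continuous trigger is used instead.
      query = (
          stream_df.writeStream
          .format("console")
          .outputMode("append")
          .trigger(continuous="1 second")
          .start()
      )
      ```

      This is a sketch, not a verified fix: it assumes the Glue job can reach the Atlas cluster and that the 10.1.x connector's streaming read path accepts a continuous trigger as described in the connector documentation.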

              Assignee:
              Prakul Agarwal
              Reporter:
              Babu Srinivasan
              Votes:
              0
              Watchers:
              2