Spark Connector / SPARK-171

Using MongoSparkConnector in hadoop - Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 13127: 'cursor id 42591143425 didn't exist on server.'

    • Type: Bug
    • Resolution: Works as Designed
    • Priority: Major - P3
    • Affects Version/s: 2.2.0
    • Component/s: None
    • Labels:
    • Environment:
      Unix, Hadoop

      Description:

      Using MongoSparkConnector in hadoop - Exception in thread "main" com.mongodb.MongoCommandException: Command failed with error 13127: 'cursor id 42591143425 didn't exist on server.'

      The full response is:

      { "ok" : 0.0, "errmsg" : "cursor id 42591143425 didn't exist on server.", "code" : 13127, "codeName" : "Location13127" }

      at com.mongodb.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:115)
      at com.mongodb.connection.CommandProtocol.execute(CommandProtocol.java:107)
      at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:159)
      at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:289)
      at com.mongodb.connection.DefaultServerConnection.command(DefaultServerConnection.java:176)
      at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:216)
      at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:207)
      at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:113)
      at com.mongodb.operation.AggregateOperation$1.call(AggregateOperation.java:269)
      at com.mongodb.operation.AggregateOperation$1.call(AggregateOperation.java:265)
      at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:433)
      at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:406)
      at com.mongodb.operation.AggregateOperation.execute(AggregateOperation.java:265)
      at com.mongodb.operation.AggregateOperation.execute(AggregateOperation.java:69)
      at com.mongodb.Mongo.execute(Mongo.java:810)
      at com.mongodb.Mongo$2.execute(Mongo.java:797)
      at com.mongodb.OperationIterable.iterator(OperationIterable.java:47)
      at com.mongodb.OperationIterable.forEach(OperationIterable.java:70)
      at com.mongodb.OperationIterable.into(OperationIterable.java:82)
      at com.mongodb.AggregateIterableImpl.into(AggregateIterableImpl.java:144)
      at com.mongodb.spark.rdd.partitioner.MongoSamplePartitioner$$anonfun$8.apply(MongoSamplePartitioner.scala:103)
      at com.mongodb.spark.rdd.partitioner.MongoSamplePartitioner$$anonfun$8.apply(MongoSamplePartitioner.scala:97)
      at com.mongodb.spark.MongoConnector$$anonfun$withCollectionDo$1.apply(MongoConnector.scala:186)
      at com.mongodb.spark.MongoConnector$$anonfun$withCollectionDo$1.apply(MongoConnector.scala:184)
      at com.mongodb.spark.MongoConnector$$anonfun$withDatabaseDo$1.apply(MongoConnector.scala:171)
      at com.mongodb.spark.MongoConnector$$anonfun$withDatabaseDo$1.apply(MongoConnector.scala:171)
      at com.mongodb.spark.MongoConnector.withMongoClientDo(MongoConnector.scala:154)
      at com.mongodb.spark.MongoConnector.withDatabaseDo(MongoConnector.scala:171)
      at com.mongodb.spark.MongoConnector.withCollectionDo(MongoConnector.scala:184)
      at com.mongodb.spark.rdd.partitioner.MongoSamplePartitioner.partitions(MongoSamplePartitioner.scala:96)
      at com.mongodb.spark.rdd.partitioner.DefaultMongoPartitioner.partitions(DefaultMongoPartitioner.scala:34)
      at com.mongodb.spark.rdd.MongoRDD.getPartitions(MongoRDD.scala:137)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
      at scala.Option.getOrElse(Option.scala:121)
      at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
      at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
      at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
      at org.apache.spark.api.java.JavaRDDLike$class.count(JavaRDDLike.scala:455)
      at org.apache.spark.api.java.AbstractJavaRDDLike.count(JavaRDDLike.scala:45)
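      For context, the stack trace shows the failure happening inside MongoSamplePartitioner.partitions, i.e. while the connector runs its sampling aggregation to compute partition bounds before the count even starts; the server reports that the aggregation cursor id no longer exists. One common workaround (a sketch of an assumption on my part, not the resolution recorded in this ticket) is to select a different partitioner through the connector's read configuration so that the sampling aggregation is skipped. The URI and partition count below are placeholders:

```shell
# Hypothetical spark-submit invocation; only the two partitioner settings
# are the point here. "spark.mongodb.input.partitioner" and
# "partitionerOptions.numberOfPartitions" are documented options of the
# MongoDB Spark Connector 2.2.x; host/db/collection are placeholders.
spark-submit \
  --conf "spark.mongodb.input.uri=mongodb://host:27017/db.collection" \
  --conf "spark.mongodb.input.partitioner=MongoPaginateByCountPartitioner" \
  --conf "spark.mongodb.input.partitionerOptions.numberOfPartitions=64" \
  my-spark-job.jar
```

      MongoPaginateByCountPartitioner pages through the collection by document count rather than running the default partitioner's sample aggregation, so the long-lived sampling cursor that timed out here is never created.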

            Assignee: Ross Lawley (ross@mongodb.com)
            Reporter: Satindra Mohan (satindra)
            Votes: 0
            Watchers: 2
