java.lang.UnsupportedOperationException: The DefaultMongoPartitioner requires MongoDB >= 3.2

Google Groups | Weng Shao Fong | 2 months ago
  1. Pyspark DataFrameWriter.save() Error

     Google Groups | 2 months ago | Weng Shao Fong
     java.lang.UnsupportedOperationException: The DefaultMongoPartitioner requires MongoDB >= 3.2
  2. Re: Support for xpath/xquery?

     xml-xmlbeans-user | 1 decade ago | Dmitri_Colebatch@toyota.com.au
     java.lang.UnsupportedOperationException: This operation requires xqrl.jar
  3. RE: Support for xpath/xquery?

     xml-xmlbeans-user | 1 decade ago | Eric Vasilik
     java.lang.UnsupportedOperationException: This operation requires xqrl.jar
  4. [iPOJO] Bug while calling dispose under knopflerfish

     felix-users | 5 years ago | Loic Petit
     java.lang.UnsupportedOperationException: This service requires an advanced creation policy. Before calling the service, call the getService(ComponentInstance) method to get the service object.
  5. Re: [iPOJO] Bug while calling dispose under knopflerfish

     felix-users | 5 years ago | Clement Escoffier
     java.lang.UnsupportedOperationException: This service requires an advanced creation policy. Before calling the service, call the getService(ComponentInstance) method to get the service object.


    Root Cause Analysis

    1. java.lang.UnsupportedOperationException

      The DefaultMongoPartitioner requires MongoDB >= 3.2

      at com.mongodb.spark.rdd.partitioner.DefaultMongoPartitioner.partitions()
    2. com.mongodb.spark
      MongoRDD.getPartitions
      1. com.mongodb.spark.rdd.partitioner.DefaultMongoPartitioner.partitions(DefaultMongoPartitioner.scala:58)
      2. com.mongodb.spark.rdd.MongoRDD.getPartitions(MongoRDD.scala:137)
      2 frames
    3. Spark
      RDD$$anonfun$partitions$2.apply
      1. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      2. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      2 frames
    4. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:121)
      1 frame
    5. Spark
      RDD$$anonfun$partitions$2.apply
      1. org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      2. org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
      3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      4 frames
    6. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:121)
      1 frame
    7. Spark
      RDD$$anonfun$partitions$2.apply
      1. org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      2. org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
      3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      4 frames
    8. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:121)
      1 frame
    9. Spark
      RDD$$anonfun$partitions$2.apply
      1. org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      2. org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
      3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      4 frames
    10. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:121)
      1 frame
    11. Spark
      RDD$$anonfun$partitions$2.apply
      1. org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      2. org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
      3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      4 frames
    12. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:121)
      1 frame
    13. Spark
      RDD.partitions
      1. org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      1 frame
    14. Spark Project SQL
      Dataset.withNewExecutionId
      1. org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:326)
      2. org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
      3. org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
      4. org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
      5. org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
      5 frames
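The root cause above is raised when the MongoDB Spark Connector's DefaultMongoPartitioner, which relies on the $sample aggregation stage introduced in MongoDB 3.2, is used against an older server. A minimal sketch of one commonly suggested workaround: select a partitioner that does not depend on $sample via the connector's read options. The URI below is a placeholder, and the exact option keys should be checked against the connector version in use.

```python
# Sketch: configuring an alternative partitioner for MongoDB servers
# older than 3.2, where DefaultMongoPartitioner's $sample-based
# splitting is unavailable and raises UnsupportedOperationException.
# "MongoPaginateBySizePartitioner" is one of the partitioners shipped
# with the MongoDB Spark Connector that does not use $sample.
read_options = {
    "uri": "mongodb://localhost:27017/mydb.mycoll",   # placeholder URI
    "partitioner": "MongoPaginateBySizePartitioner",  # avoids the $sample requirement
}

# With a live SparkSession this would be applied roughly as
# (not executed here, since it needs a running Spark + MongoDB):
# df = (spark.read
#            .format("com.mongodb.spark.sql.DefaultSource")
#            .options(**read_options)
#            .load())
```

The same idea applies on write: partitioning is computed when the source RDD is materialized, which is why a partitioner failure can surface during a DataFrameWriter.save() call, as in the report above.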