Solutions on the web

via Google Groups by Weng Shao Fong, 1 year ago
The DefaultMongoPartitioner requires MongoDB >= 3.2
java.lang.UnsupportedOperationException: The DefaultMongoPartitioner requires MongoDB >= 3.2
    at com.mongodb.spark.rdd.partitioner.DefaultMongoPartitioner.partitions(DefaultMongoPartitioner.scala:58)
    at com.mongodb.spark.rdd.MongoRDD.getPartitions(MongoRDD.scala:137)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:326)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2189)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1925)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1924)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:211)
    at java.lang.Thread.run(Thread.java:745)
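
The exception is raised by the MongoDB Spark Connector when it detects a server older than 3.2: the connector's DefaultMongoPartitioner relies on server-side sampling support introduced in MongoDB 3.2, so against an older server you either upgrade MongoDB or switch to one of the connector's other partitioners via the "spark.mongodb.input.partitioner" setting. The py4j frames in the trace suggest the job was launched from PySpark, so below is a minimal PySpark sketch of the workaround, assuming mongo-spark-connector 2.x; the URI, database/collection names, and partition count are placeholders, not values from the original report.

# Minimal PySpark sketch: read from a pre-3.2 MongoDB server by choosing
# a partitioner that does not need DefaultMongoPartitioner's sampling.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("mongo-pre-3.2-read")
    # Placeholder URI: point this at your own server and collection.
    .config("spark.mongodb.input.uri", "mongodb://localhost:27017/test.coll")
    # Override the default partitioner; MongoPaginateByCountPartitioner
    # pages through the collection by document count instead of sampling.
    .config("spark.mongodb.input.partitioner", "MongoPaginateByCountPartitioner")
    # Partition count is illustrative; tune it to your data size.
    .config("spark.mongodb.input.partitionerOptions.numberOfPartitions", "8")
    .getOrCreate()
)

# Load through the connector's DataFrame source and trigger the read.
df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
df.show()

The same override can be passed per-read with .option("partitioner", "MongoPaginateByCountPartitioner") on the DataFrameReader instead of setting it session-wide; either way the goal is simply to avoid the 3.2 version check that DefaultMongoPartitioner performs.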