
Solutions on the web

via GitHub by erictu, 2 years ago
Can't zip RDDs with unequal numbers of partitions
java.lang.IllegalArgumentException: Can't zip RDDs with unequal numbers of partitions
	at org.apache.spark.rdd.ZippedPartitionsBaseRDD.getPartitions(ZippedPartitionsRDD.scala:56)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
	at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:79)
	at org.apache.spark.rdd.ShuffledRDD.getDependencies(ShuffledRDD.scala:80)
	at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:191)
	at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:189)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.dependencies(RDD.scala:189)
	at org.apache.spark.scheduler.DAGScheduler.visit$1(DAGScheduler.scala:298)
	at org.apache.spark.scheduler.DAGScheduler.getParentStages(DAGScheduler.scala:310)
	at org.apache.spark.scheduler.DAGScheduler.newStage(DAGScheduler.scala:246)
	at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:723)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1333)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
	at akka.actor.ActorCell.invoke(ActorCell.scala:456)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
	at akka.dispatch.Mailbox.run(Mailbox.scala:219)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
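The top frame (ZippedPartitionsBaseRDD.getPartitions) is where Spark verifies that the two RDDs being zipped have the same number of partitions, and throws this IllegalArgumentException when they do not. The sketch below is a minimal, self-contained illustration of that guard, not Spark's actual implementation; the class and method names here are invented for the example.

```java
// Hypothetical sketch of the partition-count guard behind the stack trace
// above. Real Spark performs this check inside
// ZippedPartitionsBaseRDD.getPartitions; the names below are invented.
public class ZipPartitionsCheck {

    // Zipping two RDDs is only defined when both sides have the same
    // number of partitions; otherwise the element pairing is ambiguous.
    static int zippedPartitionCount(int leftPartitions, int rightPartitions) {
        if (leftPartitions != rightPartitions) {
            // Same message as the exception in the stack trace above.
            throw new IllegalArgumentException(
                "Can't zip RDDs with unequal numbers of partitions");
        }
        return leftPartitions;
    }

    public static void main(String[] args) {
        // With matching counts the zip is well-defined.
        System.out.println(zippedPartitionCount(4, 4)); // prints 4

        // With mismatched counts the guard fires, as in the trace above.
        try {
            zippedPartitionCount(4, 8);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In a real Spark job the usual remedy is to align the partition counts before zipping, for example by calling `repartition` or `coalesce` on one of the two RDDs so both report the same `getNumPartitions()`.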