java.lang.IllegalArgumentException: ReplicaPartitioner can only determine the partition of a tuple whose key is a non-empty Set[InetAddress]. Invalid key: Set()

DataStax JIRA | Jacek Lewandowski | 2 years ago
  1.

    In 1.2.0 the failing tests are in {{RDDSpec}}:
    {noformat}
    java.lang.IllegalArgumentException: ReplicaPartitioner can only determine the partition of a tuple whose key is a non-empty Set[InetAddress]. Invalid key: Set()
        at com.datastax.spark.connector.rdd.partitioner.ReplicaPartitioner.getPartition(ReplicaPartitioner.scala:44)
        at org.apache.spark.util.collection.ExternalSorter.org$apache$spark$util$collection$ExternalSorter$$getPartition(ExternalSorter.scala:113)
        at org.apache.spark.util.collection.ExternalSorter$$anonfun$insertAll$1.apply(ExternalSorter.scala:212)
        at org.apache.spark.util.collection.ExternalSorter$$anonfun$insertAll$1.apply(ExternalSorter.scala:211)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:366)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:211)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    WARN 16:12:59,591 org.apache.spark.Logging$class (Logging.scala:71) - Lost task 0.0 in stage 37.0 (TID 223, localhost): java.lang.IllegalArgumentException: ReplicaPartitioner can only determine the partition of a tuple whose key is a non-empty Set[InetAddress]. Invalid key: Set()
        at com.datastax.spark.connector.rdd.partitioner.ReplicaPartitioner.getPartition(ReplicaPartitioner.scala:44)
        at org.apache.spark.util.collection.ExternalSorter.org$apache$spark$util$collection$ExternalSorter$$getPartition(ExternalSorter.scala:113)
        at org.apache.spark.util.collection.ExternalSorter$$anonfun$insertAll$1.apply(ExternalSorter.scala:212)
        at org.apache.spark.util.collection.ExternalSorter$$anonfun$insertAll$1.apply(ExternalSorter.scala:211)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:366)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:211)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    ERROR 16:12:59,591 org.apache.spark.Logging$class (Logging.scala:75) - Task 0 in stage 37.0 failed 1 times; aborting job
    [info] - should be repartitionable *** FAILED *** (42 milliseconds)
    [info] org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 37.0 failed 1 times, most recent failure: Lost task 0.0 in stage 37.0 (TID 223, localhost): java.lang.IllegalArgumentException: ReplicaPartitioner can only determine the partition of a tuple whose key is a non-empty Set[InetAddress]. Invalid key: Set()
    [info]     at com.datastax.spark.connector.rdd.partitioner.ReplicaPartitioner.getPartition(ReplicaPartitioner.scala:44)
    [info]     at org.apache.spark.util.collection.ExternalSorter.org$apache$spark$util$collection$ExternalSorter$$getPartition(ExternalSorter.scala:113)
    [info]     at org.apache.spark.util.collection.ExternalSorter$$anonfun$insertAll$1.apply(ExternalSorter.scala:212)
    [info]     at org.apache.spark.util.collection.ExternalSorter$$anonfun$insertAll$1.apply(ExternalSorter.scala:211)
    [info]     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    [info]     at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:366)
    [info]     at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:211)
    [info]     at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
    [info]     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    [info]     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    [info]     at org.apache.spark.scheduler.Task.run(Task.scala:56)
    [info]     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    [info]     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    [info]     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    [info]     at java.lang.Thread.run(Thread.java:745)
    [info]
    [info] Driver stacktrace:
    [info]     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
    [info]     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    [info]     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
    [info]     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    [info]     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    [info]     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    [info]     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    [info]     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    [info]     at scala.Option.foreach(Option.scala:236)
    [info]     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
    [info]     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
    [info]     at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    [info]     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
    [info]     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    [info]     at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    [info]     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    [info]     at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    [info]     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    [info]     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    [info]     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    [info]     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    [info]     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    {noformat}

    DataStax JIRA | 2 years ago | Jacek Lewandowski
    java.lang.IllegalArgumentException: ReplicaPartitioner can only determine the partition of a tuple whose key is a non-empty Set[InetAddress]. Invalid key: Set()
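
    The key here is a Set[InetAddress] of replicas for each row, and the partitioner cannot place a tuple whose replica set came back empty. Below is a minimal, hypothetical Scala sketch of the failing condition and a defensive guard, assuming the RDD was keyed by replica sets the way keyByCassandraReplica / repartitionByCassandraReplica do; the object name and row values are illustrative only.

        import java.net.InetAddress
        import org.apache.spark.{SparkConf, SparkContext}

        object EmptyReplicaKeyGuard {
          def main(args: Array[String]): Unit = {
            val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("guard"))

            // An RDD keyed by replica sets; the second tuple mimics the bad input.
            val keyed = sc.parallelize(Seq(
              (Set(InetAddress.getByName("127.0.0.1")), "row-1"),
              (Set.empty[InetAddress], "row-2") // empty key: exactly what the partitioner rejects
            ))

            // Defensive guard: drop tuples with no replica information before
            // handing the RDD to a replica-aware partitioner.
            val safe = keyed.filter { case (replicas, _) => replicas.nonEmpty }
            println(safe.count()) // 1
            sc.stop()
          }
        }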
  2.

    Universal Image Loader : IllegalArgumentException when using FileNameGenerator with extension

    Stack Overflow | 2 years ago
    java.lang.IllegalArgumentException: keys must match regex [a-z0-9_-]{1,64}: "1828294.jpg"
        at com.nostra13.universalimageloader.cache.disc.impl.ext.DiskLruCache.validateKey(DiskLruCache.java:697)
        at com.nostra13.universalimageloader.cache.disc.impl.ext.DiskLruCache.get(DiskLruCache.java:414)
        at com.nostra13.universalimageloader.cache.disc.impl.ext.LruDiscCache.get(LruDiscCache.java:133)
        at com.nostra13.universalimageloader.core.ImageLoaderEngine$1.run(ImageLoaderEngine.java:72)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
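
    DiskLruCache rejects the raw file name because it validates keys against [a-z0-9_-]{1,64}, so Universal Image Loader pairs this cache with a key-hashing FileNameGenerator (such as HashCodeFileNameGenerator or Md5FileNameGenerator) rather than the file name itself. A small Scala sketch of the sanitizing idea; the helper is ours for illustration, not UIL API.

        object CacheKeys {
          private val Valid = "[a-z0-9_-]{1,64}".r

          // Normalize an arbitrary name such as "1828294.jpg" into a key that
          // satisfies DiskLruCache's constraint.
          def sanitize(name: String): String = {
            val cleaned = name.toLowerCase.replaceAll("[^a-z0-9_-]", "_").take(64)
            require(Valid.pattern.matcher(cleaned).matches(), s"unusable key: $name")
            cleaned
          }
        }

        // CacheKeys.sanitize("1828294.jpg") == "1828294_jpg"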
  3.

    [elasticsearch] elasticsearch couchdb-river startup issues - Grokbase

    grokbase.com | 9 months ago
    java.lang.IllegalArgumentException: URI can't be null.
        at sun.net.spi.DefaultProxySelector.select(DefaultProxySelector.java:141)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:925)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1299)
        at org.elasticsearch.river.couchdb.CouchdbRiver$Slurper.run(CouchdbRiver.java:468)
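
    DefaultProxySelector.select throws this when it is handed a null URI, meaning the connection was opened from an unusable URL; since the river builds its feed URL from configuration, a missing or malformed setting is one plausible trigger. A defensive Scala sketch, with a method name of our own choosing:

        import java.io.InputStream
        import java.net.URI

        // Validate the configured feed URL up front, so bad configuration fails
        // with a clear message instead of a null URI deep in the HTTP stack.
        def openFeed(spec: String): InputStream = {
          require(spec != null && spec.trim.nonEmpty, "feed URL must be configured")
          val uri = new URI(spec) // throws URISyntaxException on malformed input
          require(uri.getHost != null, s"feed URL needs a host: $spec")
          uri.toURL.openConnection().getInputStream
        }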
  4.

    java.lang.IllegalArgumentException: argument type mismatch... WHEN DEPLOYING SIMPLE Service Assembly with servicemix-bean

    apache.org | 1 year ago
    java.lang.IllegalArgumentException: argument type mismatch</loc-message>
    <stack-trace></stack-trace> </msg-loc-info> </exception-info> </task-result-details> </component-task-result-details> </component-task-result>
        at org.apache.servicemix.common.ManagementSupport.failure(ManagementSupport.java:46)[93:servicemix-common:2011.01.0.fuse-03-01]
        at org.apache.servicemix.common.AbstractDeployer.failure(AbstractDeployer.java:43)[93:servicemix-common:2011.01.0.fuse-03-01]
        at org.apache.servicemix.common.xbean.AbstractXBeanDeployer.deploy(AbstractXBeanDeployer.java:118)[93:servicemix-common:2011.01.0.fuse-03-01]
        at org.apache.servicemix.common.BaseServiceUnitManager.doDeploy(BaseServiceUnitManager.java:88)[93:servicemix-common:2011.01.0.fuse-03-01]
        at org.apache.servicemix.common.BaseServiceUnitManager.deploy(BaseServiceUnitManager.java:69)[93:servicemix-common:2011.01.0.fuse-03-01]
        at org.apache.servicemix.jbi.deployer.artifacts.ServiceUnitImpl.deploy(ServiceUnitImpl.java:104)[96:org.apache.servicemix.jbi.deployer:1.4.0.fuse-03-01]
        at org.apache.servicemix.jbi.deployer.impl.ServiceAssemblyInstaller.deploySUs(ServiceAssemblyInstaller.java:207)[96:org.apache.servicemix.jbi.deployer:1.4.0.fuse-03-01]
        at org.apache.servicemix.jbi.deployer.impl.ServiceAssemblyInstaller.install(ServiceAssemblyInstaller.java:85)[96:org.apache.servicemix.jbi.deployer:1.4.0.fuse-03-01]
        at org.apache.servicemix.jbi.deployer.impl.Deployer.onBundleStarted(Deployer.java:334)[96:org.apache.servicemix.jbi.deployer:1.4.0.fuse-03-01]
        at org.apache.servicemix.jbi.deployer.impl.Deployer.bundleChanged(Deployer.java:264)[96:org.apache.servicemix.jbi.deployer:1.4.0.fuse-03-01]
        at org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:919)[osgi-3.6.0.v20100517.jar:]
        at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:227)[osgi-3.6.0.v20100517.jar:]
        at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:149)[osgi-3.6.0.v20100517.jar:]
        at org.eclipse.osgi.framework.internal.core.Framework.publishBundleEventPrivileged(Framework.java:1349)[osgi-3.6.0.v20100517.jar:]
        at org.eclipse.osgi.framework.internal.core.Framework.publishBundleEvent(Framework.java:1300)[osgi-3.6.0.v20100517.jar:]
        at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:380)[osgi-3.6.0.v20100517.jar:]
        at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:284)[osgi-3.6.0.v20100517.jar:]
        at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:276)[osgi-3.6.0.v20100517.jar:]
        at org.apache.karaf.shell.osgi.RestartBundle.doExecute(RestartBundle.java:32)[27:org.apache.karaf.shell.osgi:2.1.6.fuse-01-01]
        at org.apache.karaf.shell.osgi.BundlesCommand.doExecute(BundlesCommand.java:49)[27:org.apache.karaf.shell.osgi:2.1.6.fuse-01-01]
        at org.apache.karaf.shell.console.OsgiCommandSupport.execute(OsgiCommandSupport.java:38)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
        at org.apache.felix.gogo.commands.basic.AbstractCommand.execute(AbstractCommand.java:35)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
        at org.apache.felix.gogo.runtime.shell.CommandProxy.execute(CommandProxy.java:50)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
        at org.apache.felix.gogo.runtime.shell.Closure.execute(Closure.java:229)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
        at org.apache.felix.gogo.runtime.shell.Closure.executeStatement(Closure.java:162)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
        at org.apache.felix.gogo.runtime.shell.Pipe.run(Pipe.java:101)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
        at org.apache.felix.gogo.runtime.shell.Closure.execute(Closure.java:79)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
        at org.apache.felix.gogo.runtime.shell.CommandSessionImpl.execute(CommandSessionImpl.java:71)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
        at org.apache.karaf.shell.console.jline.Console.run(Console.java:170)[14:org.apache.karaf.shell.console:2.1.6.fuse-01-01]
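
    The nested XML is the JBI task result that servicemix wraps around the real failure; the underlying "argument type mismatch" is the standard reflection error thrown when a method is invoked with a value of the wrong type, which is how a mis-typed bean property can surface during service-unit deployment. A minimal, hypothetical Scala reproduction (class and setter names are illustrative):

        import java.lang.reflect.Method

        class Endpoint { def setTimeout(t: java.lang.Integer): Unit = () }

        object Repro {
          def main(args: Array[String]): Unit = {
            val m: Method = classOf[Endpoint].getMethod("setTimeout", classOf[java.lang.Integer])
            m.invoke(new Endpoint, Integer.valueOf(5)) // ok
            m.invoke(new Endpoint, "5")                // IllegalArgumentException: argument type mismatch
          }
        }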


    Root Cause Analysis

    1. java.lang.IllegalArgumentException

      ReplicaPartitioner can only determine the partition of a tuple whose key is a non-empty Set[InetAddress]. Invalid key: Set()

      at com.datastax.spark.connector.rdd.partitioner.ReplicaPartitioner.getPartition()
    2. spark-cassandra-connector
      ReplicaPartitioner.getPartition
      1. com.datastax.spark.connector.rdd.partitioner.ReplicaPartitioner.getPartition(ReplicaPartitioner.scala:44)
      1 frame
    3. Spark
      ExternalSorter$$anonfun$insertAll$1.apply
      1. org.apache.spark.util.collection.ExternalSorter.org$apache$spark$util$collection$ExternalSorter$$getPartition(ExternalSorter.scala:113)
      2. org.apache.spark.util.collection.ExternalSorter$$anonfun$insertAll$1.apply(ExternalSorter.scala:212)
      3. org.apache.spark.util.collection.ExternalSorter$$anonfun$insertAll$1.apply(ExternalSorter.scala:211)
      3 frames
    4. Scala
      Iterator$$anon$11.next
      1. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      1 frame
    5. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:366)
      2. org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:211)
      3. org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
      4. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
      5. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      6. org.apache.spark.scheduler.Task.run(Task.scala:56)
      7. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
      7 frames
    6. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
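
    The analysis above pins the failure to the check at ReplicaPartitioner.scala:44. As a reading aid, here is a hypothetical Scala sketch of the kind of guard that produces this message; it is an assumption for illustration, not the connector's actual source.

        import java.net.InetAddress
        import org.apache.spark.Partitioner

        // Hypothetical guard; the real partitioner maps each replica set onto
        // partitions reserved for those hosts rather than hashing.
        class ReplicaPartitionerSketch(hosts: IndexedSeq[InetAddress]) extends Partitioner {
          def numPartitions: Int = hosts.size

          def getPartition(key: Any): Int = key match {
            case replicas: Set[_] if replicas.nonEmpty =>
              (replicas.head.hashCode & Int.MaxValue) % numPartitions // placeholder placement
            case invalid =>
              throw new IllegalArgumentException(
                "ReplicaPartitioner can only determine the partition of a tuple whose key is " +
                s"a non-empty Set[InetAddress]. Invalid key: $invalid")
          }
        }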