org.apache.spark.SparkException: Task not serializable

  1. Spark: Filter task not serializable due to previous cluster algorithm
     Stack Overflow | 8 months ago | Mnemosyne
     org.apache.spark.SparkException: Task not serializable
  2. Scala Spark dataframe: Task not serializable exception even with Broadcast variables
     Stack Overflow | 1 year ago | Himaprasoon
     org.apache.spark.SparkException: Task not serializable
  3. Databricks Apache Spark 1.4: Task not serializable (Scala)
     Stack Overflow | 2 years ago | user3399275
     org.apache.spark.SparkException: Task not serializable
  4. GitHub comment 22#250968913
     GitHub | 8 months ago | CheungZeeCn
     org.apache.spark.SparkException: Task not serializable
  5. User Defined Variables in spark - org.apache.spark.SparkException: Task not serializable
     Stack Overflow | 9 months ago | Nitish
     org.apache.spark.SparkException: Task not serializable

Root Cause Analysis

  1. java.io.NotSerializableException

     io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient
     Serialization stack:
     - object not serializable (class: io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient, value: io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient@163b6ba7)
     - field (class: scala.runtime.ObjectRef, name: elem, type: class java.lang.Object)
     - object (class scala.runtime.ObjectRef, io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient@163b6ba7)
     - field (class: com.personal.practice.kafka.KafkaConsumer$$anonfun$main$2, name: schemaRegistry$1, type: class scala.runtime.ObjectRef)
     - object (class com.personal.practice.kafka.KafkaConsumer$$anonfun$main$2, <function1>)
     - field (class: com.personal.practice.kafka.KafkaConsumer$$anonfun$main$2$$anonfun$apply$1, name: $outer, type: class com.personal.practice.kafka.KafkaConsumer$$anonfun$main$2)
     - object (class com.personal.practice.kafka.KafkaConsumer$$anonfun$main$2$$anonfun$apply$1, <function1>)

     at org.apache.spark.serializer.SerializationDebugger$.improveException()

     In short: a var holding a CachedSchemaRegistryClient (compiled by scalac into a scala.runtime.ObjectRef) is captured by the closures in KafkaConsumer.main, and the client is not serializable. A reproduction sketch and a fix follow the frame listing below.
  2. Spark
    RDD.foreach
    1. org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    2. org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    3. org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
    4. org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
    5. org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    6. org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    7. org.apache.spark.SparkContext.clean(SparkContext.scala:2067)
    8. org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:911)
    9. org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:910)
    10. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    11. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    12. org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    13. org.apache.spark.rdd.RDD.foreach(RDD.scala:910)
    13 frames
  3. com.personal.practice
    KafkaConsumer$$anonfun$main$2.apply
    1. com.personal.practice.kafka.KafkaConsumer$$anonfun$main$2.apply(KafkaConsumer.scala:87)
    2. com.personal.practice.kafka.KafkaConsumer$$anonfun$main$2.apply(KafkaConsumer.scala:84)
    2 frames
  4. Spark Project Streaming
    ForEachDStream$$anonfun$1.apply
    1. org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    2. org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
    3. org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
    4. org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    5. org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
    6. org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
    7. org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
    8. org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    9. org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
    9 frames
  5. Scala
    Try$.apply
    1. scala.util.Try$.apply(Try.scala:161)
    1 frame
  6. Spark Project Streaming
    JobScheduler$JobHandler$$anonfun$run$1.apply
    1. org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    2. org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
    3. org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    4. org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
    4 frames
  7. Scala
    DynamicVariable.withValue
    1. scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    1 frame
  8. Spark Project Streaming
    JobScheduler$JobHandler.run
    1. org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
    1 frame
  9. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
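
Reproduction Sketch

Read bottom-up, the frames show a Spark Streaming job whose foreachRDD callback (KafkaConsumer.scala:84-87) calls RDD.foreach; SparkContext.clean then fails while serializing the task closure. The Scala sketch below is a hypothetical reconstruction of the capture pattern implied by the serialization stack, not the poster's actual code: the registry URL, the subject name, and the queue-backed stand-in stream are assumptions.

    import scala.collection.mutable
    import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object KafkaConsumer {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(
          new SparkConf().setAppName("repro").setMaster("local[2]"), Seconds(5))

        // Driver-side client; CachedSchemaRegistryClient does not implement
        // java.io.Serializable. Because it is a local var captured by a closure,
        // scalac lifts it into a scala.runtime.ObjectRef, the exact wrapper
        // named in the serialization stack above.
        var schemaRegistry = new CachedSchemaRegistryClient("http://localhost:8081", 100)

        // Stand-in for the real Kafka DStream (assumed; the trace does not show it).
        val stream = ssc.queueStream(
          mutable.Queue(ssc.sparkContext.parallelize(Seq("record-1", "record-2"))))

        stream.foreachRDD { rdd =>
          rdd.foreach { record =>
            // Touching the driver-side client here forces Spark to serialize it
            // with the task closure -> "Task not serializable".
            schemaRegistry.getLatestSchemaMetadata("some-subject")
          }
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }

One Common Fix

Keep the non-serializable client out of the shipped closure by creating it lazily once per executor JVM, for example in a singleton object, and processing records per partition. A minimal sketch under the same assumptions, reusing the stream from the reproduction above:

    import io.confluent.kafka.schemaregistry.client.{CachedSchemaRegistryClient, SchemaRegistryClient}

    // Scala objects are not serialized with the task; each executor JVM
    // initializes its own client on first access.
    object Registry {
      lazy val client: SchemaRegistryClient =
        new CachedSchemaRegistryClient("http://localhost:8081", 100)
    }

    stream.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        val client = Registry.client // resolved locally on the executor
        records.foreach(_ => client.getLatestSchemaMetadata("some-subject"))
      }
    }

Alternatively, instantiate the client directly inside foreachPartition; either way, the closure Spark ships to executors no longer references anything non-serializable.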