org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.mahout.math.DenseVector
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)

Google Groups | Vidhya Ramaswamy | 2 years ago
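
Mahout's DenseVector does not implement java.io.Serializable, so any task closure or RDD record that carries one fails Spark's default Java serialization with exactly this error. A common workaround is to run the job with Kryo and register the Mahout vector classes; Mahout's Spark bindings also ship their own Kryo registrator (set via spark.kryo.registrator), which may fit better if you are already on those bindings. Below is a minimal, self-contained sketch of the registerKryoClasses route, not the configuration from the original report.

    import org.apache.mahout.math.DenseVector
    import org.apache.spark.{SparkConf, SparkContext}

    object KryoVectorSketch {
      def main(args: Array[String]): Unit = {
        // Swap Java serialization for Kryo and register the Mahout class
        // that the NotSerializableException names.
        val conf = new SparkConf()
          .setAppName("mahout-vector-kryo")
          .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .registerKryoClasses(Array(classOf[DenseVector]))
        val sc = new SparkContext(conf)

        // Records holding a DenseVector now travel via Kryo instead of
        // failing under java.io serialization.
        val vectors = sc.parallelize(Seq(
          new DenseVector(Array(1.0, 2.0, 3.0)),
          new DenseVector(Array(4.0, 5.0, 6.0))
        ))
        println(vectors.map(_.zSum()).sum())
        sc.stop()
      }
    }

The similar reports below show the same DAGScheduler abort surfacing from several distinct root causes.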
  1. Error in pio train for SimilarityAnalysis.cooccurrencesIDSs : Task not serializable

     Google Groups | 2 years ago | Vidhya Ramaswamy
     org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.mahout.math.DenseVector
     	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
  2. Spark cluster computing framework

     gmane.org | 1 year ago
     org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
     	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
     	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
     	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
     	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
     	at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
     	at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
     	at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
     	at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:79)
     	at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
     	at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
     	at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
     	at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1051)
     	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:839)
     	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
     	at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
     	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
     	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
     	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
  3. Run Spark job on Playframework + Spark Master/Worker in one Mac

     Stack Overflow | 2 years ago | TomoyaIgarashi
     java.lang.ClassNotFoundException: controllers.Application$$anonfun$index$1$$anonfun$3
     	at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
     	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
     	at java.security.AccessController.doPrivileged(Native Method)
     	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
     	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
     	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
     	at java.lang.Class.forName0(Native Method)
     	at java.lang.Class.forName(Class.java:340)
     	at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59)
     	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
     	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
     	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
     	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
     	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
     	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
     	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
     	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
     	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
     	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
     	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
     	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
     	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
     	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
     	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
     	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
     	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
     	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
     	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
     	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
     	at org.apache.spark.scheduler.Task.run(Task.scala:54)
     	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
     	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     	at java.lang.Thread.run(Thread.java:745)
     Driver stacktrace:
     (The usual fix, shipping the application jar to the workers, is sketched after this list.)
  4. pyspark-hbase.py · GitHub

     github.com | 1 year ago
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 14.0 in stage 6.0 (TID 23) had a not serializable result: org.apache.hadoop.hbase.io.ImmutableBytesWritable
     Serialization stack:
     	- object not serializable (class: org.apache.hadoop.hbase.io.ImmutableBytesWritable, value: 6c 61 73 74 5f 65 6e 74 69 74 79 5f 62 61 74 63 68)
     	- field (class: scala.Tuple2, name: _1, type: class java.lang.Object)
     	- object (class scala.Tuple2, (6c 61 73 74 5f 65 6e 74 69 74 79 5f 62 61 74 63 68,keyvalues={last_entity_batch/c:d/1441414881172/Put/vlen=5092/mvcc=0}))
     	- element of array (index: 0)
     	- array (class [Lscala.Tuple2;, size 1)
     (A conversion sketch that avoids returning the Writable to the driver follows this list.)
  5. Running ADAM transform results in memory errors

     GitHub | 2 years ago | ansalaza
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 25, localhost): java.lang.OutOfMemoryError: Java heap space
     	at com.esotericsoftware.kryo.util.IdentityObjectIntMap.resize(IdentityObjectIntMap.java:410)
     	at com.esotericsoftware.kryo.util.IdentityObjectIntMap.putStash(IdentityObjectIntMap.java:227)
     	at com.esotericsoftware.kryo.util.IdentityObjectIntMap.push(IdentityObjectIntMap.java:221)
     	at com.esotericsoftware.kryo.util.IdentityObjectIntMap.put(IdentityObjectIntMap.java:117)
     	at com.esotericsoftware.kryo.util.IdentityObjectIntMap.putStash(IdentityObjectIntMap.java:228)
     	at com.esotericsoftware.kryo.util.IdentityObjectIntMap.push(IdentityObjectIntMap.java:221)
     	at com.esotericsoftware.kryo.util.IdentityObjectIntMap.put(IdentityObjectIntMap.java:117)
     	at com.esotericsoftware.kryo.util.MapReferenceResolver.addWrittenObject(MapReferenceResolver.java:23)
     	at com.esotericsoftware.kryo.Kryo.writeReferenceOrNull(Kryo.java:598)
     	at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:566)
     	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:318)
     	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
     	at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
     	at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:156)
     	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
     	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     	at java.lang.Thread.run(Thread.java:745)
     Driver stacktrace:
     (A memory and buffer tuning sketch follows this list.)
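
For entry 3 above, the executor JVM cannot load the anonymous closure classes (controllers.Application$$anonfun...) because the web application's jar was never shipped to the Spark workers. A minimal sketch of the usual remedy, assuming a standalone master on localhost; the jar path is a placeholder, not the reporter's actual artifact:

    import org.apache.spark.{SparkConf, SparkContext}

    // setJars distributes the listed jars to every executor, so the
    // deserializer can resolve the closure classes compiled into the app.
    val conf = new SparkConf()
      .setAppName("play-spark")
      .setMaster("spark://localhost:7077")
      .setJars(Seq("target/scala-2.11/my-play-app.jar"))  // placeholder path
    val sc = new SparkContext(conf)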
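Entry 4 fails for a different reason: the task result itself contains HBase's ImmutableBytesWritable, a Hadoop Writable that Java serialization cannot ship back to the driver. The usual fix is to convert keys and values to plain serializable types inside the task before anything is collected. A sketch under that assumption; the helper and table name are illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.spark.SparkContext

    // Illustrative helper: read an HBase table and return row keys as Strings.
    def rowKeys(sc: SparkContext, table: String): Array[String] = {
      val conf = HBaseConfiguration.create()
      conf.set(TableInputFormat.INPUT_TABLE, table)
      val raw = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
        classOf[ImmutableBytesWritable], classOf[Result])
      // Convert the non-serializable Writable key to a String while still
      // inside the task; only serializable values reach the driver.
      raw.map { case (k, _) => Bytes.toString(k.get(), k.getOffset, k.getLength) }
         .collect()
    }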
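Entry 5 is not a serializability bug at all: the executor runs out of heap while Kryo grows its internal reference map during task-result serialization. The usual levers are more executor memory and a differently sized Kryo buffer. The values below are placeholders to tune, not recommendations, and the unit-suffixed keys assume Spark 1.4 or later:

    import org.apache.spark.SparkConf

    // Placeholder sizes; tune against your data volume and cluster.
    val conf = new SparkConf()
      .set("spark.executor.memory", "4g")
      .set("spark.kryoserializer.buffer", "1m")        // initial Kryo buffer
      .set("spark.kryoserializer.buffer.max", "512m")  // upper bound Kryo may grow to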

Root Cause Analysis

  1. org.apache.spark.SparkException

    Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.mahout.math.DenseVector
    	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)

    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply()
  2. Spark
    DAGScheduler$$anonfun$abortStage$1.apply
    1. org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    2. org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
    2 frames
  3. Scala
    ArrayBuffer.foreach
    1. scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    2. scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    2 frames
  4. Spark
    DAGScheduler.abortStage
    1. org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    1 frame
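
The frames above only show the scheduler aborting the stage; the actual culprit is whatever non-serializable object the failing closure drags in. Independent of the Kryo route sketched earlier, the classic repair is to copy just the data the closure needs into a local val of a serializable type, so the closure stops capturing its enclosing object. A minimal sketch with hypothetical class and field names:

    import org.apache.mahout.math.DenseVector
    import org.apache.spark.rdd.RDD

    // Hypothetical enclosing class; Scorer itself is not Serializable, and
    // its DenseVector field is exactly what the exception would name.
    class Scorer(weights: DenseVector) {
      def scoreAll(data: RDD[Array[Double]]): RDD[Double] = {
        // BAD: data.map(row => dot(row, weights)) would capture `this`.
        // GOOD: copy the vector into a plain Array[Double] on the driver;
        // the closure then captures only that serializable array.
        val w: Array[Double] = (0 until weights.size).map(weights.get).toArray
        data.map(row => row.zip(w).map { case (a, b) => a * b }.sum)
      }
    }

Making the enclosing class Serializable, or marking the offending field @transient, are the other common routes.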