Solutions on the web

via spark-user by Yiannis Gkoufas, 1 year ago
Missing an output location for shuffle 0
via spark-user by Yiannis Gkoufas, 1 year ago
Missing an output location for shuffle 0
via apache.org by Unknown author, 2 years ago
Missing an output location for shuffle 0
via amazon.com by Unknown author, 2 years ago
Missing an output location for shuffle 0
via amazon.com by Unknown author, 2 years ago
Missing an output location for shuffle 0
via xluat.com by Unknown author, 2 years ago
org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0
	at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:384)
	at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:381)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
	at org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:380)
	at org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:176)
	at org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
	at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
	at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:93)
	at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:92)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at org.apache.spark.serializer.SerializationStream.writeAll(Serializer.scala:109)
	at org.apache.spark.storage.BlockManager.dataSerializeStream(BlockManager.scala:1177)
	at org.apache.spark.storage.DiskStore.putIterator(DiskStore.scala:78)
	at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:787)
	at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:638)
	at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:145)
	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:245)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:56)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
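In general, MetadataFetchFailedException: Missing an output location for shuffle N means the driver's MapOutputTracker no longer knows where a shuffle's map output lives — most often because the executor that wrote it was lost, frequently under memory pressure. As a hedged starting point (the property values below are illustrative, not tuned recommendations; spark.yarn.executor.memoryOverhead is the Spark 1.x-era name matching this trace), a spark-defaults.conf fragment along these lines is commonly tried:

```
# Illustrative settings often adjusted when shuffle output goes missing
# because executors die under memory pressure. Values are examples only.
spark.executor.memory                4g
spark.yarn.executor.memoryOverhead   1024
# Be more tolerant of transient shuffle-fetch failures
spark.shuffle.io.maxRetries          10
spark.shuffle.io.retryWait           30s
```

Checking the executor logs (and, on YARN, the NodeManager logs for container kills) usually confirms whether lost executors are the underlying cause before any tuning.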