org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

spark-user | Yiannis Gkoufas | 2 years ago
Solutions found on the web:

  1. Re: Running out of space (when there's no shortage)
     apache.org | 1 year ago
     org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

  2. Re: Running out of space (when there's no shortage)
     spark-user | 2 years ago | Yiannis Gkoufas
     org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

  3. Re: Spark Application Hung
     apache.org | 1 year ago
     org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 155

    Root Cause Analysis

    org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0
    	at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:384)
    	at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:381)
    	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
    	at org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:380)
    	at org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:176)
    	at org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
    	at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
    	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
    	at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:93)
    	at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:92)
    	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    	at org.apache.spark.serializer.SerializationStream.writeAll(Serializer.scala:109)
    	at org.apache.spark.storage.BlockManager.dataSerializeStream(BlockManager.scala:1177)
    	at org.apache.spark.storage.DiskStore.putIterator(DiskStore.scala:78)
    	at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:787)
    	at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:638)
    	at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:145)
    	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:245)
    	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
    	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
    	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
    	at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
    	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    	at org.apache.spark.scheduler.Task.run(Task.scala:56)
    	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    	at java.lang.Thread.run(Thread.java:745)
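
    A MetadataFetchFailedException during a shuffle read usually means the executor that wrote the shuffle output has since been lost (commonly killed by YARN or the OS for exceeding its memory limit), so the MapOutputTracker no longer knows a location for those map outputs. A common first step is to give executors more memory headroom and reduce per-task shuffle pressure. The fragment below is a hedged sketch for Spark 1.x on YARN (the era the HashShuffleReader/MappedRDD frames in the trace suggest): the property keys are real Spark settings of that era, but the values are illustrative assumptions to be tuned for the actual cluster, and the executor logs should be checked to confirm executors were in fact lost.

    ```properties
    # spark-defaults.conf sketch (Spark 1.x era) -- values are illustrative assumptions
    spark.executor.memory                6g      # more heap per executor
    spark.yarn.executor.memoryOverhead   1024    # off-heap headroom (MB) so YARN does not kill the container
    spark.shuffle.memoryFraction         0.4     # more heap for shuffle buffers (1.x default: 0.2)
    spark.storage.memoryFraction         0.4     # correspondingly less for cached RDDs (1.x default: 0.6)
    spark.default.parallelism            400     # more, smaller shuffle partitions
    ```

    Raising parallelism (or passing an explicit partition count to reduceByKey/groupByKey) shrinks each task's shuffle footprint, which is often what stops the executor losses that deregister the map outputs in the first place.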