org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 4

Your exception is missing from the Samebug knowledge base. Here are the best solutions we found on the Internet.
  1. Spark runs out of memory without caching

     Stack Overflow | 5 months ago | Adetiloye Philip Kehinde
     org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 4

  2. Spark Streaming 2.0 GC Error (Shuffle Issue)

     Stack Overflow | 5 months ago | theMadKing
     org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

  3. org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

     Stack Overflow | 1 week ago | Ravi Ranjan
     org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

  4. Spark SQL : Join operation failure

     Stack Overflow | 4 days ago | jatinpreet
     org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

  5. [Java] org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

     Stack Overflow | 11 months ago | Y0gesh Gupta
     org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0


    Root Cause Analysis

    1. org.apache.spark.shuffle.MetadataFetchFailedException

      Missing an output location for shuffle 4

      at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply()
    2. Spark
      MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply
      1. org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:695)
      2. org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:691)
      2 frames
    3. Scala
      TraversableLike$WithFilter.foreach
      1. scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
      2. scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
      3. scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
      4. scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
      4 frames
    4. Spark
      CoGroupedRDD$$anonfun$compute$2.apply
      1. org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:691)
      2. org.apache.spark.MapOutputTracker.getMapSizesByExecutorId(MapOutputTracker.scala:145)
      3. org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:49)
      4. org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$2.apply(CoGroupedRDD.scala:148)
      5. org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$2.apply(CoGroupedRDD.scala:137)
      5 frames
    5. Scala
      TraversableLike$WithFilter.foreach
      1. scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
      2. scala.collection.immutable.List.foreach(List.scala:381)
      3. scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
      3 frames
    6. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.CoGroupedRDD.compute(CoGroupedRDD.scala:137)
      2. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      3. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      4. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      5. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      6. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      7. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      8. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      9. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      10. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      11. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      12. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      13. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      14. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      15. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      16. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      17. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      18. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      19. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      20. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      21. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      22. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      23. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      24. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      25. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
      26. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
      27. org.apache.spark.scheduler.Task.run(Task.scala:85)
      28. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
      28 frames
    7. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
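In practice, `MetadataFetchFailedException: Missing an output location for shuffle N` means a reduce-side task asked the MapOutputTracker for the location of shuffle map output that is no longer registered, which most often happens when the executor that produced it was lost (for example, killed for exceeding its memory limits). As a hedged starting point rather than a definitive fix, these are commonly tuned `spark-submit` settings for this symptom; all values and the jar name below are illustrative assumptions, not recommendations:

```shell
# Illustrative settings often tuned when shuffle map output goes missing
# because executors die under memory pressure (Spark 1.x/2.x era names).
# Values are example assumptions; your_job.jar is a placeholder.
spark-submit \
  --executor-memory 8g \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  --conf spark.sql.shuffle.partitions=400 \
  --conf spark.shuffle.io.maxRetries=10 \
  --conf spark.shuffle.io.retryWait=10s \
  your_job.jar
```

More executor memory and off-heap overhead reduce the chance of the container being killed mid-shuffle; more shuffle partitions shrink each task's shuffle block; higher fetch retry counts and waits give a briefly unreachable executor time to recover before the stage is failed.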