org.apache.spark.SparkException

Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 8, 162.44.115.223): java.io.IOException: FailedToCheckpointException(message:Failed to rename /tmp/tmp/tachyon/workers/1448540000001/7/31 to /tmp/tmp/tachyon/data/31)
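What the report tells us: a Spark task failed while closing its output stream, because Tachyon could not promote (rename) the completed block file from its workers staging directory into its data directory. Judging from the stack trace below (PairRDDFunctions.saveAsHadoopDataset through TextOutputFormat and Tachyon's FileOutStream), the failing job was writing text output to a tachyon:// path. A minimal sketch that exercises the same code path follows; the application name, Tachyon master URL, and output path are placeholders, not taken from the original report:

    import org.apache.spark.{SparkConf, SparkContext}

    object TachyonWriteRepro {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("tachyon-write-repro")
        val sc = new SparkContext(conf)

        // saveAsTextFile funnels through saveAsHadoopDataset and
        // TextOutputFormat$LineRecordWriter.close -- the exact frames
        // visible in the stack trace in this report.
        val data = sc.parallelize(1 to 100).map(i => s"line-$i")

        // Hypothetical Tachyon master host/port and output directory.
        data.saveAsTextFile("tachyon://tachyon-master:19998/spark-output")

        sc.stop()
      }
    }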


Solutions on the web (2266)

  • via JIRA by Sudipta, 11 months ago: the same FailedToCheckpointException, failing to rename the block file from the Tachyon workers directory to the Tachyon data directory
  • via JIRA by Sudipta, 1 year ago: the same exception and rename failure
  • Stack trace

    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 8, 162.44.115.223): java.io.IOException: FailedToCheckpointException(message:Failed to rename /tmp/tmp/tachyon/workers/1448540000001/7/31 to /tmp/tmp/tachyon/data/31)
        at tachyon.worker.WorkerClient.addCheckpoint(WorkerClient.java:130)
        at tachyon.client.TachyonFS.addCheckpoint(TachyonFS.java:228)
        at tachyon.client.FileOutStream.close(FileOutStream.java:105)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103)
        at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:108)
        at org.apache.spark.SparkHadoopWriter.close(SparkHadoopWriter.scala:103)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1117)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1215)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
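    The rename in the trace happens inside Tachyon's addCheckpoint, which moves a finished file from the workers staging directory (/tmp/tmp/tachyon/workers/...) to the data directory (/tmp/tmp/tachyon/data/...) in the under filesystem. Reading the paths above, one plausible cause is an under filesystem on node-local storage such as /tmp in a multi-node cluster, or a data directory that is missing or not writable; this is an interpretation, not something stated in the report. One quick way to isolate the problem is to write the same dataset to a shared filesystem and see whether the job succeeds (the HDFS URL below is a placeholder):

        // If this HDFS write succeeds while the tachyon:// write fails with
        // FailedToCheckpointException, the fault is on the Tachyon checkpoint
        // path (worker staging dir -> data dir), not in the Spark job itself.
        data.saveAsTextFile("hdfs://namenode:8020/user/spark/spark-output")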
