Solutions on the web

The same failure was reported three times: twice via JIRA by Sudipta (2 years and 1 year ago) and once via atlassian.net by an unknown author. All three reports carry the identical exception:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 8, 162.44.115.223): java.io.IOException: FailedToCheckpointException(message:Failed to rename /tmp/tmp/tachyon/workers/1448540000001/7/31 to /tmp/tmp/tachyon/data/31)
	at tachyon.worker.WorkerClient.addCheckpoint(WorkerClient.java:130)
	at tachyon.client.TachyonFS.addCheckpoint(TachyonFS.java:228)
	at tachyon.client.FileOutStream.close(FileOutStream.java:105)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:103)
	at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:108)
	at org.apache.spark.SparkHadoopWriter.close(SparkHadoopWriter.scala:103)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1117)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1215)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
	at org.apache.spark.scheduler.Task.run(Task.scala:88)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
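For context, here is a minimal sketch of the kind of job that exercises this code path. The frames PairRDDFunctions.saveAsHadoopDataset, SparkHadoopWriter, and TextOutputFormat in the trace are exactly what RDD.saveAsTextFile runs through on a Spark 1.x cluster, and the tachyon.client.FileOutStream.close frame shows the output was being written to a Tachyon path. The master host and output path below are illustrative placeholders, not taken from the report.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch, assuming a Spark 1.5-era cluster backed by Tachyon
// (the project now known as Alluxio). Host and paths are placeholders.
object TachyonWriteExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("tachyon-write"))

    // saveAsTextFile maps each element to (NullWritable, Text) and goes
    // through PairRDDFunctions.saveAsHadoopDataset with TextOutputFormat,
    // matching the frames in the stack trace above.
    val data = sc.parallelize(1 to 1000).map(i => s"record-$i")

    // Closing each partition's output stream invokes
    // tachyon.client.FileOutStream.close -> TachyonFS.addCheckpoint, which
    // is where the FailedToCheckpointException surfaces when the worker
    // cannot rename its staged file into the data folder.
    data.saveAsTextFile("tachyon://tachyon-master:19998/user/output")

    sc.stop()
  }
}
```

Note that both paths in the message (/tmp/tmp/tachyon/workers/... and /tmp/tmp/tachyon/data/31) live in Tachyon's under filesystem, which here appears to be a local /tmp directory. One plausible cause on a multi-node cluster, offered here as an assumption rather than a confirmed diagnosis, is that the under-store is a node-local filesystem that is not shared across workers, so the checkpoint rename fails; pointing tachyon.underfs.address at a shared store such as HDFS is the usual remedy in that situation.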