Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via JIRA by Calvin Jia, 1 year ago
File does not exist: /test/ABC/ABC
via JIRA by Calvin Jia, 2 years ago
File does not exist: /test/ABC/ABC
via Stack Overflow by stholy, 1 year ago
File does not exist: /app/hadoop/jobs/nw_single_pred_in/predict
via Stack Overflow by Eric W, 2 years ago
File does not exist: /user/eric.waite/temp/preparePreferenceMatrix/numUsers.bin
via Google Groups by Gagandeep Singh, 2 years ago
File does not exist: /mnt/var/lib/hadoop/tmp/mapred/staging/hadoop/.staging/job_201405050818_0001/job.split
java.io.FileNotFoundException: File does not exist: /test/ABC/ABC
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1843)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1834)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:578)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:154)
	at tachyon.hadoop.HdfsFileInputStream.getHdfsInputStream(HdfsFileInputStream.java:101)
	at tachyon.hadoop.HdfsFileInputStream.seek(HdfsFileInputStream.java:246)
	at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:37)
	at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:87)
	at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:51)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
	at org.apache.spark.scheduler.Task.run(Task.scala:64)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:695)
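The trace shows a Spark task failing the moment the Hadoop client tries to open an input path that is not present: `DFSClient.open` throws `FileNotFoundException` before any bytes are read. A minimal plain-Java sketch of the same failure mode and a defensive check follows; it uses only `java.io` (not the Hadoop `FileSystem` API), and the class name `SafeRead`, the helper `firstLineOrNull`, and the probed path are illustrative, not taken from the original report:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class SafeRead {
    // Returns the first line of the file, or null if the file is missing.
    // Mirrors the failure mode in the trace: opening a nonexistent path
    // throws FileNotFoundException at open time, not at read time.
    static String firstLineOrNull(String path) {
        File f = new File(path);
        if (!f.isFile()) {
            // Guard before opening, the way a Spark job might validate
            // its input paths before building the RDD.
            return null;
        }
        try (BufferedReader r = new BufferedReader(new FileReader(f))) {
            return r.readLine();
        } catch (FileNotFoundException e) {
            // Race: the file was removed between the check and the open.
            return null;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Probe an illustrative path; on a machine where it does not
        // exist, the helper returns null instead of throwing.
        String line = firstLineOrNull("/no/such/input/path");
        System.out.println(line == null ? "missing" : "found");
    }
}
```

The usual fix for the error above is the same idea one level up: confirm the path exists in the storage layer the job actually reads from (HDFS in this trace) before submitting the job, since the exception is raised only when the first task touches the split.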