java.io.IOException: Not a file: file:/run at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)


Solutions on the web

via GitHub by ww102111, 1 year ago
Not a file: file:/run at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)
via spark-user by Marco Mistroni, 6 months ago
No input paths specified in job at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:201) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
via spark-user by Raymond Xie, 6 months ago
No input paths specified in job at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:201) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
via search-hadoop.com by Unknown author, 1 year ago
No input paths specified in job at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:201) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
via incubator-spark-user by Raymond Xie, 6 months ago
No input paths specified in job at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:201) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
java.io.IOException: Not a file: file:/run at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at scala.Option.getOrElse(Option.scala:120)
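The trace points at `FileInputFormat.getSplits`, which throws "Not a file" when one of the listed input entries is a directory (here `file:/run`): the old `mapred` FileInputFormat does not recurse into subdirectories by default. A minimal pre-flight check can catch this before the job is submitted. The sketch below uses plain `java.io` against a local path only; the class and method names are illustrative, not part of Hadoop's API:

```java
import java.io.File;

public class InputPathCheck {
    // Returns true when every entry directly under `dir` is a regular
    // file, i.e. safe to hand to the old mapred FileInputFormat, which
    // fails with "IOException: Not a file" on directory entries.
    static boolean containsOnlyFiles(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) {
            return false; // path is not a directory, or is unreadable
        }
        for (File entry : entries) {
            if (!entry.isFile()) {
                return false; // a subdirectory would break getSplits
            }
        }
        return true;
    }

    public static void main(String[] args) {
        File input = new File(args.length > 0 ? args[0] : ".");
        System.out.println(input + " safe for FileInputFormat: "
                + containsOnlyFiles(input));
    }
}
```

If nested directories are expected, two common workarounds are to point the job at a glob that matches only files (e.g. `file:/run/*/part-*`), or, on Hadoop 2.x, to set `mapreduce.input.fileinputformat.input.dir.recursive=true` so FileInputFormat descends into subdirectories. Verify the property against the Hadoop version in use.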
