java.io.FileNotFoundException: File file:/hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz does not exist

Stack Overflow | ys0123 | 4 months ago
  1. KiteSdk 1.1.0 csv-import IOError

     Stack Overflow | 4 months ago | ys0123
     java.io.FileNotFoundException: File file:/hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz does not exist
  2. Ambari HDP 2.2: Port 8020 Connection refused

     Stack Overflow | 2 years ago | Tanny
     java.io.FileNotFoundException: File does not exist: hdfs://<fqdn for the namenode>:8020/hdp/apps/2.2.6.3-1/mapreduce/mapreduce.tar.gz
  3. How to submit applications to yarn-cluster so jars in packages are also copied?

     Stack Overflow | 2 years ago | Cody Canning
     java.io.FileNotFoundException: File does not exist: hdfs://172.31.13.205:9000/home/hadoop/.ivy2/jars/spark-csv_2.10.jar
  4. Error launching spark-submit

     GitHub | 3 months ago | dmcarba
     java.io.FileNotFoundException: File does not exist: hdfs://lambda-pluralsight:9000/spark/spark-assembly-1.6.1-hadoop2.6.0.jar

    Root Cause Analysis

    1. java.io.FileNotFoundException

      File file:/hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz does not exist

      at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus()
    2. Hadoop
      FileContext.resolvePath
      1. org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
      2. org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
      3. org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
      4. org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:125)
      5. org.apache.hadoop.fs.AbstractFileSystem.resolvePath(AbstractFileSystem.java:468)
      6. org.apache.hadoop.fs.FilterFs.resolvePath(FilterFs.java:158)
      7. org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2195)
      8. org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2191)
      9. org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
      10. org.apache.hadoop.fs.FileContext.resolve(FileContext.java:2191)
      11. org.apache.hadoop.fs.FileContext.resolvePath(FileContext.java:603)
      11 frames
    3. Hadoop
      Job$10.run
      1. org.apache.hadoop.mapreduce.JobSubmitter.addMRFrameworkToDistributedCache(JobSubmitter.java:457)
      2. org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142)
      3. org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
      4. org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
      4 frames
    4. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:422)
      2 frames
    5. Hadoop
      UserGroupInformation.doAs
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
      1 frame
    6. Hadoop
      Job.submit
      1. org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
      1 frame
    7. org.apache.crunch
      MRExecutor$1.run
      1. org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.submit(CrunchControlledJob.java:329)
      2. org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.startReadyJobs(CrunchJobControl.java:204)
      3. org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:238)
      4. org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:112)
      5. org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:55)
      6. org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:83)
      6 frames
    8. Java RT
      Thread.run
      1. java.lang.Thread.run(Thread.java:745)
      1 frame
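
    The topmost frames show `JobSubmitter.addMRFrameworkToDistributedCache` resolving the framework tarball through `RawLocalFileSystem`, i.e. against the local filesystem (note the `file:/` scheme in the message) rather than HDFS. A common cause in HDP clusters is that `mapreduce.application.framework.path` (or `fs.defaultFS`) lacks an explicit `hdfs://` scheme, so the path falls back to the local filesystem. A minimal `mapred-site.xml` sketch of the usual fix, with the path taken from the error message; this is an assumption about the cluster layout, not something confirmed in the thread:

    ```xml
    <!-- mapred-site.xml: point the MR framework tarball at HDFS explicitly.
         The path mirrors the HDP version shown in the error; adjust to your
         cluster. The #mr-framework fragment is the symlink name used when
         the archive is localized from the distributed cache. -->
    <property>
      <name>mapreduce.application.framework.path</name>
      <value>hdfs:///hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz#mr-framework</value>
    </property>
    ```

    Before changing config, it is worth verifying the tarball actually exists at that HDFS path, e.g. with `hdfs dfs -ls /hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz`; if it is missing, the archive must be uploaded (on Ambari-managed clusters this is normally done during stack installation).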