java.io.IOException: No FileSystem for scheme: hdfs

SpringSource Issue Tracker | Qiang Tang | 4 years ago
  1.

    Reproduce steps:
    1. Create cluster with attached DC separate YARN cluster;
    2. Run mapreduce job from client:
       hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 1000
       hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.1.2-tests.jar TestDFSIO -write -nrFiles 5 -fileSize 1000

    Expected result: Mapreduce job runs successfully.

    Actual result:
    [joe@10 ~]$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 1000
    Number of Maps = 2
    Samples per Map = 1000
    Wrote input for Map #0
    Wrote input for Map #1
    Starting Job
    13/04/10 06:42:13 INFO input.FileInputFormat: Total input paths to process : 2
    13/04/10 06:42:13 INFO mapreduce.JobSubmitter: number of splits:2
    13/04/10 06:42:13 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
    13/04/10 06:42:13 WARN conf.Configuration: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
    13/04/10 06:42:13 WARN conf.Configuration: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
    13/04/10 06:42:13 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
    13/04/10 06:42:13 WARN conf.Configuration: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
    13/04/10 06:42:13 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
    13/04/10 06:42:13 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
    13/04/10 06:42:13 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
    13/04/10 06:42:13 WARN conf.Configuration: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
    13/04/10 06:42:13 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
    13/04/10 06:42:13 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
    13/04/10 06:42:13 WARN conf.Configuration: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
    13/04/10 06:42:13 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
    13/04/10 06:42:13 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
    13/04/10 06:42:13 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
    13/04/10 06:42:14 INFO mapred.ResourceMgrDelegate: Submitted application application_1365573089037_0010 to ResourceManager at /10.111.88.85:8032
    13/04/10 06:42:14 INFO mapreduce.Job: The url to track the job: http://10.111.88.85:8088/proxy/application_1365573089037_0010/
    13/04/10 06:42:14 INFO mapreduce.Job: Running job: job_1365573089037_0010
    13/04/10 06:42:16 INFO mapreduce.Job: Job job_1365573089037_0010 running in uber mode : false
    13/04/10 06:42:16 INFO mapreduce.Job: map 0% reduce 0%
    13/04/10 06:42:16 INFO mapreduce.Job: Job job_1365573089037_0010 failed with state FAILED due to: Application application_1365573089037_0010 failed 1 times due to AM Container for appattempt_1365573089037_0010_000001 exited with exitCode: -1000 due to:
    RemoteTrace:
    java.io.IOException: No FileSystem for scheme: hdfs
      at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2206)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2213)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2252)
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2234)
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:300)
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
      at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:86)
      at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:49)
      at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:157)
      at org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:155)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
      at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:153)
      at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49)
      at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
      at java.util.concurrent.FutureTask.run(FutureTask.java:138)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
      at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
      at java.util.concurrent.FutureTask.run(FutureTask.java:138)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:662)
    LocalTrace:
    org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: No FileSystem for scheme: hdfs
      at org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.convertFromProtoFormat(LocalResourceStatusPBImpl.java:217)
      at org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.getException(LocalResourceStatusPBImpl.java:147)
      at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.update(ResourceLocalizationService.java:822)
      at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.processHeartbeat(ResourceLocalizationService.java:492)
      at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.heartbeat(ResourceLocalizationService.java:221)
      at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.service.LocalizationProtocolPBServiceImpl.heartbeat(LocalizationProtocolPBServiceImpl.java:46)
      at org.apache.hadoop.yarn.proto.LocalizationProtocol$LocalizationProtocolService$2.callBlockingMethod(LocalizationProtocol.java:57)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:396)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
    .Failing this attempt.. Failing the application.
    13/04/10 06:42:17 INFO mapreduce.Job: Counters: 0
    Job Finished in 4.092 seconds
    java.io.FileNotFoundException: File does not exist: hdfs://10.111.88.85:9000/user/joe/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out
      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:787)
      at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1704)
      at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
      at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
      at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:351)
      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
      at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:360)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
      at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
      at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

    Thanks,
    Hong

    SpringSource Issue Tracker | 4 years ago | Qiang Tang
    java.io.IOException: No FileSystem for scheme: hdfs
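    A commonly reported cause of this exception is a client classpath where the hdfs:// scheme has no registered FileSystem implementation (for example, when the hadoop-hdfs jar is missing). One frequently suggested workaround, sketched here under the assumption that hadoop-hdfs is actually on the classpath, is to map the scheme explicitly in core-site.xml so resolution does not depend on service discovery (verify the property against your Hadoop version):

    ```xml
    <!-- Illustrative core-site.xml fragment: explicitly bind the hdfs scheme
         to its FileSystem implementation class. -->
    <property>
      <name>fs.hdfs.impl</name>
      <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
    </property>
    ```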
  2.

    When I run Phoenix over my HBase cluster, I get the warning below.

    GitHub | 3 years ago | songfj
    java.io.IOException: No FileSystem for scheme: hdfs
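
    When the error comes from a fat (shaded/uber) jar, as is common with Phoenix and other Hadoop clients, another frequently reported cause is that the per-jar META-INF/services/org.apache.hadoop.fs.FileSystem files overwrite one another during shading, dropping the hdfs entry contributed by hadoop-hdfs. A hedged sketch of the usual remedy with the Maven Shade plugin, whose ServicesResourceTransformer concatenates service files instead of overwriting them:

    ```xml
    <!-- Illustrative pom.xml fragment: merge META-INF/services files while
         shading so the hdfs FileSystem entry survives in the fat jar. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </plugin>
    ```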

Root Cause Analysis

  1. java.io.IOException

    No FileSystem for scheme: hdfs

    at org.apache.hadoop.fs.FileSystem.getFileSystemClass()
  2. Hadoop
    Path.getFileSystem
    1. org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2206)
    2. org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2213)
    3. org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
    4. org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2252)
    5. org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2234)
    6. org.apache.hadoop.fs.FileSystem.get(FileSystem.java:300)
    7. org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
    7 frames
  3. hadoop-yarn-common
    FSDownload$1.run
    1. org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:86)
    2. org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:49)
    3. org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:157)
    4. org.apache.hadoop.yarn.util.FSDownload$1.run(FSDownload.java:155)
    4 frames
  4. Java RT
    Subject.doAs
    1. java.security.AccessController.doPrivileged(Native Method)
    2. javax.security.auth.Subject.doAs(Subject.java:396)
    2 frames
  5. Hadoop
    UserGroupInformation.doAs
    1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    1 frame
  6. hadoop-yarn-common
    FSDownload.call
    1. org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:153)
    2. org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49)
    2 frames
  7. Java RT
    Thread.run
    1. java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    2. java.util.concurrent.FutureTask.run(FutureTask.java:138)
    3. java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    4. java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    5. java.util.concurrent.FutureTask.run(FutureTask.java:138)
    6. java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    7. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    8. java.lang.Thread.run(Thread.java:662)
    8 frames
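
The frames above begin at org.apache.hadoop.fs.FileSystem.getFileSystemClass, where Hadoop maps a URI scheme to a FileSystem implementation discovered from META-INF/services entries on the classpath. The following is a simplified, self-contained sketch of that lookup (illustrative only, not Hadoop's actual code; the class and map names are invented): when nothing registers the hdfs scheme, because hadoop-hdfs or its service file is missing, the lookup fails with exactly this message.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of scheme-to-FileSystem resolution. In real Hadoop the map
// is populated via ServiceLoader from META-INF/services files; here only the
// local scheme is registered, mimicking a classpath without hadoop-hdfs.
public class SchemeLookup {
    static final Map<String, String> REGISTERED = new HashMap<>();
    static {
        REGISTERED.put("file", "org.apache.hadoop.fs.LocalFileSystem");
    }

    static String getFileSystemClass(String scheme) throws IOException {
        String clazz = REGISTERED.get(scheme);
        if (clazz == null) {
            // Same failure mode as the trace above: unknown scheme.
            throw new IOException("No FileSystem for scheme: " + scheme);
        }
        return clazz;
    }

    public static void main(String[] args) {
        try {
            getFileSystemClass("hdfs");
        } catch (IOException e) {
            System.out.println(e.getMessage()); // prints: No FileSystem for scheme: hdfs
        }
    }
}
```

The takeaway is that the exception is a registration problem, not a connectivity problem: the fix is to get an hdfs entry into the scheme map, whether via the classpath, merged service files, or an explicit fs.hdfs.impl setting.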