java.lang.IllegalArgumentException: Wrong FS: file:/tmp/topics/topic/12/5a3b1baf-f9a5-443b-b043-50ea80c7e15d_tmp.avro, expected: hdfs://localhost:9001

GitHub | kusum007 | 6 months ago
  1. Wrong FS error

     GitHub | 6 months ago | kusum007
     java.lang.IllegalArgumentException: Wrong FS: file:/tmp/topics/topic/12/5a3b1baf-f9a5-443b-b043-50ea80c7e15d_tmp.avro, expected: hdfs://localhost:9001

  2. Hadoop Yarn job: Wrong FS

     Stack Overflow | 1 year ago | delthom
     java.lang.IllegalArgumentException: Wrong FS: hdfs://var/log/hadoop-yarn, expected: hdfs://cdh-master:8020

  3. Config not being read on driver and/or executor

     GitHub | 2 years ago | srowen
     java.lang.IllegalArgumentException: Wrong FS: file://xxxxx.cloudera.com:8020/tmp/Oryx/data, expected: hdfs://sssss.cloudera.com:8020

  4. Hive will not write to aws s3

     Stack Overflow | 1 year ago | NW0428
     java.lang.IllegalArgumentException: Wrong FS: s3a://bucket/folder/.hive-staging_hive_2015-07-06_09-22-10_351_9216807769834089982-3/-ext-10002, expected: hdfs://quickstart.cloudera:8020

  5. In HA HDFS, uploading a file to a stream (on the order of a few MBs) and reading the stream back gives the following exception:

     {code}
     java.lang.IllegalArgumentException: Wrong FS: hdfs://prodnameservice1:8020/<PATH>, expected: hdfs://prodnameservice1
         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) ~[hadoop-common-2.5.0-cdh5.3.2.jar:na]
         at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:192) ~[hadoop-hdfs.jar:na]
         at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:104) ~[hadoop-hdfs.jar:na]
         at org.apache.hadoop.hdfs.DistributedFileSystem$32.doCall(DistributedFileSystem.java:1569) ~[hadoop-hdfs.jar:na]
         at org.apache.hadoop.hdfs.DistributedFileSystem$32.doCall(DistributedFileSystem.java:1565) ~[hadoop-hdfs.jar:na]
         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.5.0-cdh5.3.2.jar:na]
         at org.apache.hadoop.hdfs.DistributedFileSystem.isFileClosed(DistributedFileSystem.java:1565) ~[hadoop-hdfs.jar:na]
         at co.cask.cdap.common.io.Locations$9.size(Locations.java:365) ~[co.cask.cdap.cdap-common-3.2.1.jar:na]
         at co.cask.cdap.common.io.Locations$11.size(Locations.java:406) ~[co.cask.cdap.cdap-common-3.2.1.jar:na]
         at co.cask.cdap.common.io.DFSSeekableInputStream.size(DFSSeekableInputStream.java:51) ~[co.cask.cdap.cdap-common-3.2.1.jar:na]
         at co.cask.cdap.data.stream.StreamDataFileReader.createEventTemplate(StreamDataFileReader.java:344) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.StreamDataFileReader.readHeader(StreamDataFileReader.java:305) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.StreamDataFileReader.init(StreamDataFileReader.java:280) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.StreamDataFileReader.doOpen(StreamDataFileReader.java:252) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.StreamDataFileReader.initialize(StreamDataFileReader.java:139) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.LiveStreamFileReader$StreamPositionTransformFileReader.initialize(LiveStreamFileReader.java:169) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.LiveStreamFileReader.renewReader(LiveStreamFileReader.java:81) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.file.LiveFileReader.initialize(LiveFileReader.java:42) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.MultiLiveStreamFileReader$StreamEventSource.initialize(MultiLiveStreamFileReader.java:175) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.MultiLiveStreamFileReader.initialize(MultiLiveStreamFileReader.java:72) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.service.StreamFetchHandler.createReader(StreamFetchHandler.java:286) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at co.cask.cdap.data.stream.service.StreamFetchHandler.fetch(StreamFetchHandler.java:124) ~[co.cask.cdap.cdap-data-fabric-3.2.1.jar:na]
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_67]
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_67]
     {code}

     This happens because in HA mode the URI returned by org.apache.hadoop.fs.FileContext is not compatible with the one expected by org.apache.hadoop.hdfs.DistributedFileSystem: FileContext is not HA-aware and always appends the port, whereas DistributedFileSystem uses the logical nameservice name. Related Hadoop JIRA: https://issues.apache.org/jira/browse/HADOOP-9617. Until that JIRA is fixed, CDAP needs a workaround that strips out the port when running in HA mode.

     Cask Community Issue Tracker | 12 months ago | Sreevatsan Raman
     java.lang.IllegalArgumentException: Wrong FS: hdfs://prodnameservice1:8020/<PATH>, expected: hdfs://prodnameservice1
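The CDAP workaround described above amounts to dropping the port from the HA URI so that it matches the logical nameservice name DistributedFileSystem expects. A minimal sketch of that idea using only java.net.URI (the class name HaUriFixer is hypothetical, not CDAP's actual code):

```java
import java.net.URI;
import java.net.URISyntaxException;

/**
 * Hypothetical helper: drops the port from an HDFS URI so that
 * hdfs://prodnameservice1:8020/path compares equal (scheme + authority)
 * to the HA logical name hdfs://prodnameservice1.
 */
public class HaUriFixer {
    public static URI stripPort(URI uri) {
        if (uri.getPort() == -1) {
            return uri; // no port present, nothing to strip
        }
        try {
            // Rebuild the URI with port -1 (i.e. no port), keeping
            // scheme, host, path, query and fragment unchanged.
            return new URI(uri.getScheme(), null, uri.getHost(), -1,
                           uri.getPath(), uri.getQuery(), uri.getFragment());
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

In a real deployment this rewrite would only be applied when HDFS HA is configured, i.e. when the authority is a logical nameservice rather than a host:port pair.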


    Root Cause Analysis

    java.lang.IllegalArgumentException: Wrong FS: file:/tmp/topics/topic/12/5a3b1baf-f9a5-443b-b043-50ea80c7e15d_tmp.avro, expected: hdfs://localhost:9001
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
        at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1148)
        at io.confluent.connect.hdfs.wal.WALTest.testWALMultiClient(WALTest.java:54)
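In every variant above the failing frame is FileSystem.checkPath, which rejects any path whose scheme or authority differs from the filesystem's own URI (here, a local file:/ path handed to a filesystem bound to hdfs://localhost:9001). A rough, self-contained approximation of that comparison (the real Hadoop method also canonicalizes default ports and handles more cases; CheckPathSketch is an illustrative name, not Hadoop code):

```java
import java.net.URI;

/**
 * Simplified stand-in for the check done by
 * org.apache.hadoop.fs.FileSystem.checkPath: a path is acceptable only
 * if its scheme and authority match the filesystem's URI, or are absent
 * (meaning "resolve against the default filesystem").
 */
public class CheckPathSketch {
    public static boolean matches(URI fsUri, URI path) {
        String scheme = path.getScheme();
        if (scheme == null) {
            return true; // no scheme: relative to the default filesystem
        }
        if (!scheme.equalsIgnoreCase(fsUri.getScheme())) {
            return false; // e.g. file: vs hdfs: -> "Wrong FS"
        }
        String auth = path.getAuthority();
        // Authority mismatch also fails, e.g. prodnameservice1:8020
        // vs the logical nameservice prodnameservice1 in the HA case.
        return auth == null || auth.equalsIgnoreCase(fsUri.getAuthority());
    }
}
```

The usual application-level fix is to obtain the filesystem from the path itself, e.g. `path.getFileSystem(conf)` or `FileSystem.get(uri, conf)`, instead of `FileSystem.get(conf)`, so the path and filesystem URIs cannot disagree; in the Confluent HDFS connector test above, the local temp path would need to be written through the local filesystem rather than the HDFS one.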