org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input slice for: har://hdfs-namenode/user/tsz/t20.har/t20

Apache's JIRA Issue Tracker | Tsz Wo Nicholas Sze | 7 years ago
  1. Reopened: (PIG-1194) ERROR 2055: Received Error while processing the map plan

     java-hadoop-pig-devel, via osdir.com | 1 year ago
     org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Unable to create input slice for: har://hdfs-namenode/user/tsz/t20.har/t20

  2. "Wrong FS" error using MongoStorage with Pig

     Google Groups | 6 years ago | Andrea Leistra
     java.lang.IllegalArgumentException: Wrong FS: mongodb://usredhdp00-priv/home/CONCUR/andreal/mongodata/test.pig.output, expected: hdfs://usredhdp00-priv

  3. Difficulty in use Hbase fully distributed mode

     Google Groups | 5 years ago | shanmuganathan
     java.lang.IllegalArgumentException: Wrong FS: hdfs://192.168.107.142:54310/hbase, expected: hdfs://shanmuganathanr.zohocorpin.com:54310

  4. Crunch with Elastic MapReduce

     incubator-crunch-user | 4 years ago | Shawn Smith
     java.lang.IllegalArgumentException: This file system object (hdfs://10.114.37.65:9000) does not support access to the request path 's3://test-bucket/test/Input.avro' You possibly called FileSystem.get(conf) when you should have called FileSystem.get(uri, conf) to obtain a file system supporting your path.
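The Crunch error message above spells out the common thread in these reports: FileSystem.get(conf) always returns the cluster's default filesystem, regardless of which path you later hand it, while FileSystem.get(uri, conf) selects the filesystem by the URI's scheme. The sketch below models that difference with a hypothetical scheme-to-filesystem registry (the names REGISTRY, DEFAULT_FS, getDefault, and getForUri are illustrative stand-ins, not Hadoop API):

```java
import java.net.URI;
import java.util.Map;

public class FsLookupSketch {
    // Hypothetical registry standing in for Hadoop's per-scheme
    // filesystem implementations (fs.<scheme>.impl).
    static final Map<String, String> REGISTRY = Map.of(
            "hdfs", "DistributedFileSystem",
            "s3", "S3FileSystem",
            "har", "HarFileSystem");

    // Stand-in for the configured default filesystem (fs.default.name).
    static final String DEFAULT_FS = "hdfs";

    // Models FileSystem.get(conf): returns the default filesystem,
    // ignoring whatever path you later pass to it.
    static String getDefault() {
        return REGISTRY.get(DEFAULT_FS);
    }

    // Models FileSystem.get(uri, conf): dispatches on the URI's scheme.
    static String getForUri(URI uri) {
        String scheme = uri.getScheme() == null ? DEFAULT_FS : uri.getScheme();
        return REGISTRY.get(scheme);
    }

    public static void main(String[] args) {
        URI s3Path = URI.create("s3://test-bucket/test/Input.avro");
        // The default lookup yields an HDFS filesystem, which cannot
        // serve the s3:// path -- hence the Crunch error above.
        System.out.println(getDefault());      // DistributedFileSystem
        System.out.println(getForUri(s3Path)); // S3FileSystem
    }
}
```

Under this model, every "Wrong FS" report in the list is the same bug: the code obtained the default filesystem and then handed it a path belonging to a different one.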


    Root Cause Analysis

    java.lang.IllegalArgumentException: Wrong FS: har://hdfs-namenode/user/tsz/t20.har/t20, expected: hdfs://namenode
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
        at org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:99)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:155)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:453)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
        at org.apache.pig.backend.hadoop.datastorage.HDataStorage.isContainer(HDataStorage.java:203)
        at org.apache.pig.backend.hadoop.datastorage.HDataStorage.asElement(HDataStorage.java:131)
        at org.apache.pig.impl.io.FileLocalizer.fileExists(FileLocalizer.java:553)
        at org.apache.pig.backend.executionengine.PigSlicer.validate(PigSlicer.java:123)
        at org.apache.pig.impl.io.ValidatingInputFileSpec.validate(ValidatingInputFileSpec.java:59)
        at org.apache.pig.impl.io.ValidatingInputFileSpec.<init>(ValidatingInputFileSpec.java:44)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:240)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:810)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:781)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
        at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378)
        at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247)
        at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279)
        at java.lang.Thread.run(Thread.java:619)
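The trace throws from FileSystem.checkPath: Pig's slicer asked an HDFS filesystem (hdfs://namenode) whether a har:// path exists, and checkPath rejected the path because its scheme differs from the filesystem's own. A minimal sketch of that check, assuming a simplified model where a filesystem accepts a path only when scheme and authority match its own URI (the class and method names here are illustrative, not Hadoop's):

```java
import java.net.URI;

public class CheckPathSketch {
    // Simplified model of FileSystem.checkPath: reject any absolute path
    // whose scheme or authority differs from the filesystem's own URI.
    static boolean accepts(URI fsUri, URI path) {
        String scheme = path.getScheme();
        if (scheme == null) {
            return true; // relative paths resolve against this filesystem
        }
        if (!scheme.equalsIgnoreCase(fsUri.getScheme())) {
            return false; // e.g. har:// handed to an hdfs:// filesystem
        }
        String authority = path.getAuthority();
        return authority == null || authority.equalsIgnoreCase(fsUri.getAuthority());
    }

    public static void main(String[] args) {
        URI hdfs = URI.create("hdfs://namenode");
        // The path from the trace: scheme mismatch -> "Wrong FS"
        System.out.println(accepts(hdfs, URI.create("har://hdfs-namenode/user/tsz/t20.har/t20"))); // false
        // A plain HDFS path on the same namenode is accepted
        System.out.println(accepts(hdfs, URI.create("hdfs://namenode/user/tsz/t20"))); // true
    }
}
```

Under this model, the fix is to resolve the filesystem from the path itself (the pattern the Crunch message above describes as FileSystem.get(uri, conf)) rather than reusing the default HDFS instance for a har:// path.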