java.lang.IllegalArgumentException: Can not create a Path from an empty string

Similar reports of the same exception (a sketch of the S3 failure mode follows the list):

  1. Branch 1.3 by hxquangnhat · Pull Request #6635 · apache/spark · GitHub
     github.com | 1 year ago
  2. French translation by kevinlacire · Pull Request #5440 · apache/spark · GitHub
     github.com | 4 months ago
  3. Loading nested csv files from S3 with Spark
     Stack Overflow | 4 months ago | Nathan Case
  4. Can't load multiple files in nested directories from S3
     GitHub | 4 months ago | Nath5
  5. GitHub comment 1099#103358217
     GitHub | 2 years ago | allixender
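
The S3 reports (items 3 and 4) hit the exception while loading nested files. A minimal sketch of one way this can happen, assuming the Spark 1.x RDD API; the bucket layout and the object name NestedS3Load are made up for illustration, and the double comma stands in for whatever produced an empty path element in those reports:

  import org.apache.spark.{SparkConf, SparkContext}

  object NestedS3Load {
    def main(args: Array[String]): Unit = {
      // Local master only so the sketch runs without a cluster.
      val conf = new SparkConf().setAppName("nested-s3-load").setMaster("local[2]")
      val sc = new SparkContext(conf)

      // FileInputFormat.setInputPaths (see the trace below) splits this
      // string on commas and builds a Path from each element. The empty
      // element between the adjacent commas becomes new Path(""), which
      // fails as soon as an action forces HadoopRDD.getPartitions.
      val paths = "s3n://my-bucket/2016/01/a.csv,,s3n://my-bucket/2016/02/b.csv"
      val lines = sc.textFile(paths)
      println(lines.count()) // the count() action triggers the exception

      sc.stop()
    }
  }

The Spark frames for a textFile read differ from the Hive frames in the trace below, but the Hadoop frames (setInputPaths, stringToPath, Path.checkPathArg) are identical.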

Root Cause Analysis

  java.lang.IllegalArgumentException: Can not create a Path from an empty string
      at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
      at org.apache.hadoop.fs.Path.<init>(Path.java:135)
      at org.apache.hadoop.util.StringUtils.stringToPath(StringUtils.java:241)
      at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:400)
      at org.apache.spark.sql.hive.HadoopTableReader$.initializeLocalJobConfFunc(TableReader.scala:251)
      at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$11.apply(TableReader.scala:229)
      at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$11.apply(TableReader.scala:229)
      at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:172)
      at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:172)
      at scala.Option.map(Option.scala:145)
      at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:172)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:196)