java.lang.IllegalArgumentException: Can not create a Path from an empty string

github.com | 3 months ago

Similar reports:

  1. Branch 1.3 by hxquangnhat · Pull Request #6635 · apache/spark · GitHub
     github.com | 11 months ago
     java.lang.IllegalArgumentException: Can not create a Path from an empty string
  2. French translation by kevinlacire · Pull Request #5440 · apache/spark · GitHub
     github.com | 3 months ago
     java.lang.IllegalArgumentException: Can not create a Path from an empty string
  3. GitHub comment 59#235267656
     GitHub | 4 months ago | DeeeFOX
     java.lang.IllegalArgumentException: Can not create a Path from an empty string
  4. [PIG-755] Difficult to debug parameter substitution problems based on the error messages when running in local mode - ASF JIRA
     apache.org | 11 months ago
     java.lang.IllegalArgumentException: Can not create a Path from an empty string
  5. Running Spark job in Oozie using Yarn-cluster
     Stack Overflow | 1 year ago | Pangjiu
     java.lang.IllegalArgumentException: Can not create a Path from an empty string


Root Cause Analysis

  1. java.lang.IllegalArgumentException
     Can not create a Path from an empty string
     at org.apache.hadoop.fs.Path.checkPathArg()
  2. Hadoop
    StringUtils.stringToPath
    1. org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
    2. org.apache.hadoop.fs.Path.<init>(Path.java:135)
    3. org.apache.hadoop.util.StringUtils.stringToPath(StringUtils.java:241)
    3 frames
  3. Hadoop
    FileInputFormat.setInputPaths
    1. org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:400)
    1 frame
  4. Spark Project Hive
    HadoopTableReader$$anonfun$11.apply
    1. org.apache.spark.sql.hive.HadoopTableReader$.initializeLocalJobConfFunc(TableReader.scala:251)
    2. org.apache.spark.sql.hive.HadoopTableReader$$anonfun$11.apply(TableReader.scala:229)
    3. org.apache.spark.sql.hive.HadoopTableReader$$anonfun$11.apply(TableReader.scala:229)
    3 frames
  5. Spark
    HadoopRDD$$anonfun$getJobConf$6.apply
    1. org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:172)
    2. org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:172)
    2 frames
  6. Scala
    Option.map
    1. scala.Option.map(Option.scala:145)
    1 frame
  7. Spark
    HadoopRDD.getPartitions
    1. org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:172)
    2. org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:196)
    2 frames
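
Reading the trace from the top down: frames 1-2 are the failure itself, and they reproduce in isolation with nothing but the Hadoop client library on the classpath. A minimal sketch in Scala:

    import org.apache.hadoop.fs.Path

    // Path's constructor runs checkPathArg, which rejects an empty
    // string with exactly the message at the top of this trace.
    val p = new Path("")
    // => java.lang.IllegalArgumentException:
    //    Can not create a Path from an empty string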
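
Frame 3 is where the empty string enters Hadoop: setInputPaths converts each of its path strings to a Path via StringUtils.stringToPath, which is visible in frames 1-2 above. A hypothetical guard (setInputPathsSafely is not a Hadoop API) that drops blank entries before they reach it:

    import org.apache.hadoop.mapred.{FileInputFormat, JobConf}

    // Hypothetical helper: filter out blank path strings before
    // setInputPaths hands them to StringUtils.stringToPath.
    def setInputPathsSafely(conf: JobConf, commaSeparated: String): Unit = {
      val paths = commaSeparated.split(",").map(_.trim).filter(_.nonEmpty)
      require(paths.nonEmpty, s"no usable input paths in '$commaSeparated'")
      FileInputFormat.setInputPaths(conf, paths.mkString(","))
    }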
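
Frames 4-7 are plumbing rather than the bug: HadoopRDD.getJobConf holds the Hive-supplied setup callback in an Option and applies it with Option.map while partitions are being computed, which is why initializeLocalJobConfFunc runs under getPartitions. Roughly this shape (a sketch of the pattern, not Spark's actual source):

    import org.apache.hadoop.mapred.JobConf

    // The callback that eventually calls setInputPaths is optional...
    val initLocalJobConfFuncOpt: Option[JobConf => Unit] =
      Some(conf => () /* e.g. initializeLocalJobConfFunc */)

    // ...and is applied with the Option.map seen in frame 6.
    val jobConf = new JobConf()
    initLocalJobConfFuncOpt.map(f => f(jobConf))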
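
The string that initializeLocalJobConfFunc forwards to setInputPaths is the table or partition location Spark read from the Hive metastore, so an empty string at this point usually means a blank LOCATION in the metastore. A hedged way to inspect and repair it from a HiveContext (hive and db.events are placeholders):

    // Assumes a Spark 1.x HiveContext named `hive`; `db.events` is a
    // placeholder table name. DESCRIBE FORMATTED shows the Location
    // the metastore will hand to initializeLocalJobConfFunc.
    hive.sql("DESCRIBE FORMATTED db.events").collect().foreach(println)

    // If Location is blank, point the table at a real directory
    // (the path below is illustrative only):
    hive.sql("ALTER TABLE db.events SET LOCATION 'hdfs:///user/hive/warehouse/db.db/events'")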