java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/files/project/spark-warehouse

Stack Overflow | rroschin | 4 months ago
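This error typically appears with Spark 2.0.x on Windows: the first DataFrame operation (createDataFrame() or read().csv(...)) forces construction of the session catalog, Spark qualifies the default spark-warehouse directory, and Hadoop's Path produces the URI file:C:/..., which java.net.URI rejects because the path component does not start with a slash (a relative path inside an absolute URI). Below is a minimal sketch of the kind of code that hits this; the object name, app name, and column names are illustrative, and it assumes a plain local session with the default warehouse directory.

    import org.apache.spark.sql.SparkSession

    object WarehouseUriRepro {
      def main(args: Array[String]): Unit = {
        // Plain local session; spark.sql.warehouse.dir is left at its default,
        // so Spark derives ".../spark-warehouse" from the working directory.
        val spark = SparkSession.builder()
          .appName("warehouse-uri-repro")
          .master("local[*]")
          .getOrCreate()

        // The first operation that needs the analyzer triggers SessionCatalog
        // construction; on Windows this is where the URISyntaxException above
        // is thrown.
        val df = spark.createDataFrame(Seq((1, "a"), (2, "b"))).toDF("id", "value")
        df.show()

        spark.stop()
      }
    }

Similar reports of the same exception are listed below; a workaround sketch follows the root-cause frames at the end of the page.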
  1. Apache Spark MLlib with DataFrame API gives java.net.URISyntaxException when createDataFrame() or read().csv(...)
     Stack Overflow | 4 months ago | rroschin
     java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/files/project/spark-warehouse
  2. Not able to load file from HDFS in spark Dataframe
     Stack Overflow | 4 months ago | Aiden
     java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:C:/test/sampleApp/spark-warehouse
  3. Hive derby/mysql installation
     Stack Overflow | 2 years ago | Raghuveer
     java.lang.RuntimeException: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir}/${system:user.name}
  4. [SPARK-6300][Spark Core] sc.addFile(path) does not support the relative path. by DoingDone9 · Pull Request #4993 · apache/spark · GitHub
     github.com | 1 year ago
     java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:../test.txt
  5. [SPARK-XYZ]: Update the Maven Profile to use Hadoop 2.4.0 by berngp · Pull Request #1 · Guavus/spark · GitHub
     github.com | 3 months ago
     java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:../test.txt

Root Cause Analysis

  1. java.lang.IllegalArgumentException
     java.net.URISyntaxException: Relative path in absolute URI: file:C:/files/project/spark-warehouse
     at org.apache.hadoop.fs.Path.initialize()
  2. Hadoop
    Path.<init>
    1. org.apache.hadoop.fs.Path.initialize(Path.java:206)
    2. org.apache.hadoop.fs.Path.<init>(Path.java:172)
    2 frames
  3. org.apache.spark
    SessionState.analyzer
    1. org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:114)
    2. org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
    3. org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
    4. org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
    5. org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
    6. org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
    7. org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
    8. org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
    8 frames
  4. Spark Project SQL
    SparkSession.createDataFrame
    1. org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
    2. org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
    3. org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:373)
    3 frames
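
The frames above show the failure starting in org.apache.hadoop.fs.Path.initialize() while SessionCatalog.makeQualifiedPath qualifies the warehouse location during SessionState/analyzer construction, before SparkSession.createDataFrame can run. A commonly suggested workaround for this class of error is to set spark.sql.warehouse.dir to an explicit, fully qualified file:/// URI when building the SparkSession. A minimal sketch, with an illustrative path taken from the error message above:

    import org.apache.spark.sql.SparkSession

    object WarehouseUriWorkaround {
      def main(args: Array[String]): Unit = {
        // Supplying the warehouse location as a fully qualified file:/// URI
        // gives org.apache.hadoop.fs.Path an absolute path component, so the
        // URI parses without the "Relative path in absolute URI" failure.
        // The directory is illustrative; keep the file:/// prefix and point
        // it at any writable local directory.
        val spark = SparkSession.builder()
          .appName("warehouse-uri-workaround")
          .master("local[*]")
          .config("spark.sql.warehouse.dir", "file:///C:/files/project/spark-warehouse")
          .getOrCreate()

        // createDataFrame() and read.csv(...) should now pass analysis.
        spark.createDataFrame(Seq((1, "a"), (2, "b"))).toDF("id", "value").show()

        spark.stop()
      }
    }

The same setting can also be passed on the command line, e.g. spark-submit --conf spark.sql.warehouse.dir=file:///C:/tmp/spark-warehouse; the essential point is that the value parses as a valid absolute URI rather than a bare Windows path.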