Apache Phoenix: Save DataFrame to HBase table

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 4, hbase-url): org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "INSERT" at line 1, column 1.

Stack Overflow | D. Müller | 8 months ago

Root Cause Analysis

    1. org.apache.spark.SparkException

      Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 4, hbase-url): org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "INSERT" at line 1, column 1.

      at org.apache.phoenix.exception.PhoenixParserException.newException()
    2. Phoenix Core
      PhoenixConnection.prepareStatement
      1. org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
      2. org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
      3. org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1097)
      4. org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1178)
      5. org.apache.phoenix.jdbc.PhoenixPreparedStatement.<init>(PhoenixPreparedStatement.java:95)
      6. org.apache.phoenix.jdbc.PhoenixConnection.prepareStatement(PhoenixConnection.java:622)
      6 frames
    3. org.apache.spark
      JdbcUtils$$anonfun$saveTable$1.apply
      1. org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.insertStatement(JdbcUtils.scala:103)
      2. org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:172)
      3. org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:277)
      4. org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:276)
      4 frames
    4. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$35.apply(RDD.scala:927)
      2. org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$35.apply(RDD.scala:927)
      3. org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1881)
      4. org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1881)
      5. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      6. org.apache.spark.scheduler.Task.run(Task.scala:89)
      7. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      7 frames
    5. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
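
Root cause, in short: Spark's generic JDBC writer (JdbcUtils.insertStatement, item 3 above) builds an "INSERT INTO ..." statement, but Phoenix's SQL grammar has no INSERT statement, only UPSERT, so the Phoenix parser rejects it at line 1, column 1. The usual fix is to write the DataFrame through the phoenix-spark connector, which emits UPSERTs, instead of the generic JDBC writer. A minimal Scala sketch, assuming the phoenix-spark artifact is on the classpath; the table name and ZooKeeper URL below are placeholders:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.{SQLContext, SaveMode}

    object SaveToPhoenixExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("phoenix-upsert-example"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // Hypothetical sample data; column names must match the Phoenix table's columns.
        val df = sc.parallelize(Seq((1L, "foo"), (2L, "bar"))).toDF("ID", "COL1")

        // The phoenix-spark connector generates UPSERT statements, which the
        // Phoenix parser accepts, unlike the INSERT built by JdbcUtils.insertStatement.
        df.write
          .format("org.apache.phoenix.spark")
          .mode(SaveMode.Overwrite)               // required by the connector; rows are upserted
          .option("table", "OUTPUT_TABLE")        // placeholder Phoenix table name
          .option("zkUrl", "zookeeper-host:2181") // placeholder ZooKeeper quorum
          .save()

        sc.stop()
      }
    }

Note that SaveMode.Overwrite is the mode the connector expects; despite the name it upserts rows rather than truncating the table. Equivalently, after import org.apache.phoenix.spark._, the saveToPhoenix method the connector adds to DataFrame writes the same way.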