org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Java heap space

github.com | 7 months ago
Your exception is missing from the Samebug knowledge base.
Here are the best solutions we found on the Internet.
  1.

    eco-release-metadata/RELEASENOTES.1.2.0.md at master · aw-was-here/eco-release-metadata · GitHub

    github.com | 7 months ago
    org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Java heap space
  2.

    Re: Unable to run hive queries inside spark

    spark-user | 2 years ago | kundan kumar
    org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:file:/user/hive/warehouse/src is not a directory or unable to create one)
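    For the MetaException above, a commonly reported cause is that `hive-site.xml` is not visible to Spark, so the metastore falls back to a local `file:/user/hive/warehouse` path that the Spark process cannot create. A minimal sketch of the usual checks, assuming HDFS and default locations (the `/etc/hive/conf` path is an assumption for illustration, not a verified detail of this report):

    ```shell
    # Assumption: the warehouse resolved to a local file: path because
    # hive-site.xml was not on Spark's classpath. Copying it into Spark's
    # conf directory makes Spark and Hive agree on the warehouse location.
    cp /etc/hive/conf/hive-site.xml "$SPARK_HOME/conf/"

    # If the warehouse should live on HDFS, pre-create it with group-writable
    # permissions so DDLTask can create table directories under it.
    hdfs dfs -mkdir -p /user/hive/warehouse
    hdfs dfs -chmod g+w /user/hive/warehouse
    ```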

    Root Cause Analysis

    1. org.apache.spark.sql.execution.QueryExecutionException

      FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Java heap space

      at org.apache.spark.sql.hive.HiveContext.runHive()
    2. Spark Project Hive
      NativeCommand.sideEffectResult
      1. org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:309)
      2. org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:276)
      3. org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
      4. org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
      4 frames
    3. Spark Project SQL
      Command$class.execute
      1. org.apache.spark.sql.execution.Command$class.execute(commands.scala:46)
      1 frame
    4. Spark Project Hive
      NativeCommand.execute
      1. org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:30)
      1 frame
    5. Spark Project SQL
      SchemaRDD.<init>
      1. org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:425)
      2. org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:425)
      3. org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
      4. org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:108)
      4 frames
    6. Spark Project Hive
      HiveContext.sql
      1. org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:94)
      1 frame
    7. org.apache.spark
      SparkExecuteStatementOperation$$anon$1$$anon$2.run
      1. org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$runInternal(Shim13.scala:84)
      2. org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(Shim13.scala:224)
      2 frames
    8. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:415)
      2 frames
    9. Hadoop
      UserGroupInformation.doAs
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
      1 frame
    10. Hive Shims
      HadoopShimsSecure.doAs
      1. org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:493)
      1 frame
    11. org.apache.spark
      SparkExecuteStatementOperation$$anon$1.run
      1. org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(Shim13.scala:234)
      1 frame
    12. Java RT
      Thread.run
      1. java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
      2. java.util.concurrent.FutureTask.run(FutureTask.java:262)
      3. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      4. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      5. java.lang.Thread.run(Thread.java:745)
      5 frames
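
    The frames above show the statement traveling from the Spark SQL Thrift server (`SparkExecuteStatementOperation`, Shim13.scala) into the driver-embedded Hive client (`HiveContext.runHive`), so the `Java heap space` failure occurs in the JVM hosting HiveContext, not in an executor. A sketch of the memory knobs that apply to that process, with illustrative rather than tuned values (application and jar names are placeholders):

    ```shell
    # Assumption: heap exhaustion happens in the JVM running HiveContext.
    # For a standalone application, raise the driver heap:
    spark-submit --driver-memory 4g --class com.example.App app.jar

    # For the Thrift server (the path shown in this trace), raise the
    # daemon heap when starting it:
    export SPARK_DAEMON_MEMORY=4g
    "$SPARK_HOME/sbin/start-thriftserver.sh" --driver-memory 4g
    ```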