
Recommended solutions based on your search

Solutions on the web

via github.com by Unknown author, 1 year ago
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Java heap space
via spark-user by kundan kumar, 1 year ago
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:file:/user/hive/warehouse/src is not a directory or unable to create one)
via apache.org by Unknown author, 2 years ago
FAILED: SemanticException [Error 10072]: Database does not exist: default
via segmentfault.com by Unknown author, 2 years ago
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
via iswwwup.com by Unknown author, 2 years ago
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: java.net.UnknownHostException: 3f8c07a0e645)
via Stack Overflow by Kousik Kumar Gopalan, 2 years ago
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: java.net.UnknownHostException: 3f8c07a0e645)
org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Java heap space
	at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:309)
	at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:276)
	at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult$lzycompute(NativeCommand.scala:35)
	at org.apache.spark.sql.hive.execution.NativeCommand.sideEffectResult(NativeCommand.scala:35)
	at org.apache.spark.sql.execution.Command$class.execute(commands.scala:46)
	at org.apache.spark.sql.hive.execution.NativeCommand.execute(NativeCommand.scala:30)
	at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:425)
	at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:425)
	at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
	at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:108)
	at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:94)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$runInternal(Shim13.scala:84)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(Shim13.scala:224)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:493)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(Shim13.scala:234)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
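The trace shows a native Hive DDL statement entering Spark through HiveContext.sql, being wrapped in a NativeCommand, and handed to Hive's DDLTask via runSqlHive, all inside the Thrift-server driver JVM, which is where the heap is exhausted. Below is a minimal sketch of that call path, assuming Spark 1.x built with Hive support; the application name and the CREATE TABLE statement are illustrative, not taken from the original report.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Sketch of the call path seen in the trace (assumes Spark 1.x with Hive support):
// HiveContext.sql() routes native DDL such as CREATE TABLE to Hive's DDLTask,
// which runs in the driver JVM, the same JVM that reports "Java heap space" above.
object HiveDdlSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hive-ddl-sketch"))
    val hiveContext = new HiveContext(sc)

    // A native Hive DDL statement; table name and columns are illustrative.
    hiveContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")

    sc.stop()
  }
}

Because the failing Hive client code runs in the driver, a common mitigation for this particular message is to give the driver more heap, for example by launching with spark-submit --driver-memory 4g; the size that is actually needed depends on the metastore and the tables involved.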