Solutions on the web

via GitHub by kyortsos, 1 year ago:
org.apache.hadoop.mapreduce.Job cannot be cast to org.apache.hadoop.mapred.JobContext

via GitHub by JoshRosen, 2 years ago:
org.apache.hadoop.mapreduce.Job cannot be cast to org.apache.hadoop.mapred.JobContext
java.lang.ClassCastException: org.apache.hadoop.mapreduce.Job cannot be cast to org.apache.hadoop.mapred.JobContext
    at org.apache.hadoop.mapred.OutputCommitter.setupJob(OutputCommitter.java:146)
    at org.apache.spark.sql.execution.datasources.BaseWriterContainer.driverSideSetup(WriterContainer.scala:108)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:147)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
    at com.databricks.spark.redshift.RedshiftWriter.unloadData(RedshiftWriter.scala:323)
    at com.databricks.spark.redshift.RedshiftWriter.saveToRedshift(RedshiftWriter.scala:388)
    at com.databricks.spark.redshift.DefaultSource.createRelation(DefaultSource.scala:106)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
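Reading the trace bottom to top: a DataFrameWriter.save on the com.databricks.spark.redshift data source reaches RedshiftWriter.saveToRedshift, which stages the data to the temp directory through an inner DataFrameWriter.save; that inner write fails when org.apache.hadoop.mapred.OutputCommitter.setupJob (the old mapred API) tries to cast Spark's new-API org.apache.hadoop.mapreduce.Job to the old-API org.apache.hadoop.mapred.JobContext. Below is a minimal sketch of a write that follows this code path, assuming Spark 1.6-era APIs to match the trace; jdbcUrl, tempDir, and the table name are placeholder values, not taken from the reports above.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

object RedshiftWriteSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("redshift-write-sketch"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Hypothetical connection details; substitute real values.
    val jdbcUrl = "jdbc:redshift://host:5439/db?user=USER&password=PASS"
    val tempDir = "s3n://some-bucket/tmp/"

    val df = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("id", "value")

    // This save() enters com.databricks.spark.redshift.DefaultSource.createRelation,
    // which stages the DataFrame to tempDir via an inner DataFrameWriter.save -- the
    // frame in the trace where OutputCommitter.setupJob throws the ClassCastException
    // when an old-API (org.apache.hadoop.mapred) output committer is handed the
    // new-API (org.apache.hadoop.mapreduce) job context.
    df.write
      .format("com.databricks.spark.redshift")
      .option("url", jdbcUrl)
      .option("dbtable", "my_table")
      .option("tempdir", tempDir)
      .mode(SaveMode.Append)
      .save()
  }
}

The cast in setupJob only fails when the committer in play was written against the old mapred API, so one thing worth checking is whether the job configures a custom output committer (for example through the Hadoop property mapred.output.committer.class). The page itself does not confirm a root cause, so treat this as a hint to investigate rather than a diagnosis.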