java.lang.ClassCastException: org.apache.hadoop.mapreduce.Job cannot be cast to org.apache.hadoop.mapred.JobContext

GitHub | kyortsos | 7 months ago
Matching reports:

  1. Issues setting DirectOutputCommitter for RedshiftWriter
     GitHub | 7 months ago | kyortsos
     java.lang.ClassCastException: org.apache.hadoop.mapreduce.Job cannot be cast to org.apache.hadoop.mapred.JobContext

  2. GitHub comment 75#138655740
     GitHub | 1 year ago | JoshRosen
     java.lang.ClassCastException: org.apache.hadoop.mapreduce.Job cannot be cast to org.apache.hadoop.mapred.JobContext
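The message itself points at the root cause: Hadoop's old (org.apache.hadoop.mapred) and new (org.apache.hadoop.mapreduce) APIs are unrelated class hierarchies, so an OutputCommitter written against the old API cannot down-cast the new-API Job object that Spark hands it. A minimal, self-contained sketch of that mechanism, using stand-in classes rather than the real Hadoop types:

```java
// Stand-in classes, NOT the real Hadoop types: they only mimic the fact that
// mapred.JobContext and mapreduce.Job share no common ancestor besides Object.
class OldApiJobContext { }   // stands in for org.apache.hadoop.mapred.JobContext
class NewApiJob { }          // stands in for org.apache.hadoop.mapreduce.Job

public class CastDemo {
    public static void main(String[] args) {
        // Spark's write path supplies a new-API Job, typed loosely here:
        Object jobContext = new NewApiJob();
        try {
            // An old-API committer's setupJob effectively performs this cast,
            // which throws ClassCastException at runtime:
            OldApiJobContext ctx = (OldApiJobContext) jobContext;
            System.out.println("cast succeeded (unexpected): " + ctx);
        } catch (ClassCastException e) {
            System.out.println("caught ClassCastException: " + e.getMessage());
        }
    }
}
```

The two stand-in classes compile fine side by side; the failure only surfaces at runtime, which is why a misconfigured committer class passes compilation and fails mid-job, exactly as in the trace below.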

Root Cause Analysis

java.lang.ClassCastException: org.apache.hadoop.mapreduce.Job cannot be cast to org.apache.hadoop.mapred.JobContext
	at org.apache.hadoop.mapred.OutputCommitter.setupJob(OutputCommitter.java:146)
	at org.apache.spark.sql.execution.datasources.BaseWriterContainer.driverSideSetup(WriterContainer.scala:108)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:147)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
	at com.databricks.spark.redshift.RedshiftWriter.unloadData(RedshiftWriter.scala:323)
	at com.databricks.spark.redshift.RedshiftWriter.saveToRedshift(RedshiftWriter.scala:388)
	at com.databricks.spark.redshift.DefaultSource.createRelation(DefaultSource.scala:106)
	at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
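The trace shows an old-API org.apache.hadoop.mapred.OutputCommitter.setupJob receiving Spark SQL's new-API mapreduce.Job. One hedged remedy, consistent with the trace but not confirmed by this report: have the committer extend the new API and register it through Spark SQL's own committer setting rather than the old mapred key. The class name DirectOutputCommitter and its wiring below are assumptions for illustration; spark.sql.sources.outputCommitterClass is the configuration key that Spark 1.x's BaseWriterContainer reads when choosing a committer for HadoopFsRelation writes.

```java
// Sketch, not a drop-in fix. Assumes DirectOutputCommitter extends the
// new-API org.apache.hadoop.mapreduce.OutputCommitter (not mapred.*), and
// hadoopConf is the job's org.apache.hadoop.conf.Configuration
// (e.g. obtained via sparkContext.hadoopConfiguration()).
hadoopConf.set("spark.sql.sources.outputCommitterClass",
               DirectOutputCommitter.class.getName());
```

If the committer instead extends org.apache.hadoop.mapred.OutputCommitter, Spark's write path wraps it behind the new API and the bridge method's down-cast of the mapreduce.Job produces exactly the ClassCastException above.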