java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

GitHub | manisha803 | 5 months ago
  1. Getting an Avro error while trying to load data from an EMR Spark DataFrame into a Redshift table
     GitHub | 5 months ago | manisha803
     java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

  2. Error while using Spark compiled with Hadoop 1.0.4
     GitHub | 10 months ago | cdriesch
     java.lang.IncompatibleClassChangeError: Found class org.apache.hadoop.mapreduce.TaskAttemptContext, but interface was expected

  3. How does Cloudera CDH4 work with Avro?
     Stack Overflow | 3 years ago | caesar0301
     java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

  4. Teradata Connector for Hadoop now available | Teradata Downloads
     teradata.com | 1 month ago
     java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

  5. Unable to read/write Avro RDD on a YARN cluster
     Stack Overflow | 2 years ago | deepujain
     java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
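
    All of the reports above share one root cause: org.apache.hadoop.mapreduce.TaskAttemptContext was a class in Hadoop 1.x and became an interface in Hadoop 2.x, so code compiled against one Hadoop generation fails at link time when run against the other. The "Found class ..., but interface was expected" variant in the second report is the same mismatch in the opposite direction. A first diagnostic step is to confirm which generation is actually on the classpath at runtime; the snippet below is a minimal sketch (the object name is made up for illustration), not code from any of the reports.

        // Minimal diagnostic sketch: check which Hadoop generation of
        // TaskAttemptContext is actually being loaded at runtime.
        object TaskAttemptContextCheck {
          def main(args: Array[String]): Unit = {
            val c = Class.forName("org.apache.hadoop.mapreduce.TaskAttemptContext")
            // Hadoop 1.x ships TaskAttemptContext as a class, Hadoop 2.x as an interface.
            println(s"isInterface = ${c.isInterface}")
            // Shows which jar the definition was loaded from.
            println(s"loaded from = ${c.getProtectionDomain.getCodeSource.getLocation}")
          }
        }

    Running the same reflection inside a task, e.g. sc.parallelize(1 to 1).map { _ => ... }.collect(), checks the executor classpath, which on a YARN cluster can differ from the driver's.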

    Root Cause Analysis

    java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
        at org.apache.avro.mapreduce.AvroKeyOutputFormat.getRecordWriter(AvroKeyOutputFormat.java:85)
        at com.databricks.spark.avro.AvroOutputWriter.<init>(AvroOutputWriter.scala:82)
        at com.databricks.spark.avro.AvroOutputWriterFactory.newInstance(AvroOutputWriterFactory.scala:31)
        at org.apache.spark.sql.sources.DefaultWriterContainer.initWriters(commands.scala:470)
        at org.apache.spark.sql.sources.BaseWriterContainer.executorSideSetup(commands.scala:360)
        at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.org$apache$spark$sql$sources$InsertIntoHadoopFsRelation$$writeRows$1(commands.scala:172)
        at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation$$anonfun$insert$1.apply(commands.scala:160)
        at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation$$anonfun$insert$1.apply(commands.scala:160)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
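
    The topmost application frame is org.apache.avro.mapreduce.AvroKeyOutputFormat.getRecordWriter, so the Avro MapReduce output format is the piece compiled against the wrong Hadoop generation. On a Hadoop 2 / YARN cluster the usual remedy is to make the Hadoop 2 build of avro-mapred win on the classpath and to build Spark and spark-avro against the same Hadoop generation. The build.sbt fragment below is a sketch; the version numbers are assumptions and should be aligned with what the cluster actually runs.

        // build.sbt (sketch) -- align the Avro MapReduce bits with a Hadoop 2 / YARN runtime.
        // Version numbers here are illustrative assumptions; match them to your cluster.
        libraryDependencies ++= Seq(
          "org.apache.spark" %% "spark-sql"   % "1.4.1" % "provided",
          "com.databricks"   %% "spark-avro"  % "2.0.1",
          // For Avro 1.7.x the default avro-mapred jar is the Hadoop 1 build; the
          // "hadoop2" classifier is the one that matches a YARN cluster.
          "org.apache.avro"   % "avro-mapred" % "1.7.7" classifier "hadoop2"
        )

    If Spark itself was compiled against Hadoop 1 (as in the second report above), no dependency tweak in the application will help; the Spark distribution has to be one built for Hadoop 2.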