org.apache.avro.SchemaParseException: Can't redefine: io.confluent.connect.avro.Union

GitHub | sanchitgrover | 4 months ago
  1. GitHub comment 394#237511980

     GitHub | 4 months ago | sanchitgrover
     org.apache.avro.SchemaParseException: Can't redefine: io.confluent.connect.avro.Union

  2. GitHub comment 69#128264026

     GitHub | 1 year ago | kellrott
     org.apache.avro.SchemaParseException: Can't redefine: SubworkflowFeatureRequirement

  3. spark-avro fails to save DF with nested records having the same name

     GitHub | 1 year ago | sixers
     org.apache.avro.SchemaParseException: Can't redefine: data

  4. Avro error with imports in 0.10.1

     GitHub | 6 months ago | mariussoutier
     org.apache.avro.SchemaParseException: Can't redefine: com.example.ImportedType

  5. Pig SchemaParseException: Can't redefine:

     Stack Overflow | 2 years ago | BigDataMiner
     org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1002: Unable to store alias posdata
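Every report above fails for the same underlying reason: within a single Avro schema, a full name (namespace plus name) may carry a full definition of a record, enum, or fixed type only once, and a second full definition of the same name is what the parser rejects as "Can't redefine". A minimal, stdlib-only Python sketch of that bookkeeping (illustrative names, not the Avro library's API):

```python
# Hypothetical sketch of Avro's named-type bookkeeping (the
# Schema$Names.put frame in the trace). Not the Avro library itself.

def collect_names(schema, names=None):
    """Walk a JSON-style Avro schema, recording every named-type
    definition and raising on a duplicate full name -- the condition
    Avro reports as "Can't redefine: <fullname>"."""
    if names is None:
        names = set()
    if isinstance(schema, list):                   # union
        for branch in schema:
            collect_names(branch, names)
    elif isinstance(schema, dict):
        if schema.get("type") in ("record", "enum", "fixed"):
            ns = schema.get("namespace", "")
            fullname = f'{ns}.{schema["name"]}' if ns else schema["name"]
            if fullname in names:
                raise ValueError(f"Can't redefine: {fullname}")
            names.add(fullname)
        for field in schema.get("fields", []):     # record fields
            collect_names(field["type"], names)
        if "items" in schema:                      # array element type
            collect_names(schema["items"], names)
        if "values" in schema:                     # map value type
            collect_names(schema["values"], names)
    return names

# Two full definitions of the same record name in one schema: rejected.
bad = {"type": "record", "name": "Outer", "fields": [
    {"name": "a", "type": {"type": "record", "name": "Inner",
                           "fields": [{"name": "x", "type": "int"}]}},
    {"name": "b", "type": {"type": "record", "name": "Inner",
                           "fields": [{"name": "x", "type": "int"}]}},
]}

# The valid pattern: define the type once, then refer to it by name.
good = {"type": "record", "name": "Outer", "fields": [
    {"name": "a", "type": {"type": "record", "name": "Inner",
                           "fields": [{"name": "x", "type": "int"}]}},
    {"name": "b", "type": "Inner"},
]}
```

The fix in each report has the same shape: keep one full definition per name and reference it afterwards, or give the colliding definitions distinct namespaces so their full names differ.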


    Root Cause Analysis

    1. org.apache.avro.SchemaParseException

      Can't redefine: io.confluent.connect.avro.Union

      at org.apache.avro.Schema$Names.put()
    2. Apache Avro
      DataFileWriter.create
      1. org.apache.avro.Schema$Names.put(Schema.java:1060)
      2. org.apache.avro.Schema$NamedSchema.writeNameRef(Schema.java:509)
      3. org.apache.avro.Schema$RecordSchema.toJson(Schema.java:629)
      4. org.apache.avro.Schema$RecordSchema.fieldsToJson(Schema.java:651)
      5. org.apache.avro.Schema$RecordSchema.toJson(Schema.java:638)
      6. org.apache.avro.Schema.toString(Schema.java:297)
      7. org.apache.avro.Schema.toString(Schema.java:287)
      8. org.apache.avro.file.DataFileWriter.create(DataFileWriter.java:138)
      8 frames
    3. io.confluent.connect
      HdfsSinkTask.put
      1. io.confluent.connect.hdfs.avro.AvroRecordWriterProvider.getRecordWriter(AvroRecordWriterProvider.java:57)
      2. io.confluent.connect.hdfs.TopicPartitionWriter.getWriter(TopicPartitionWriter.java:437)
      3. io.confluent.connect.hdfs.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:508)
      4. io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:289)
      5. io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:234)
      6. io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:103)
      6 frames
    4. org.apache.kafka
      WorkerTask.run
      1. org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:384)
      2. org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:228)
      3. org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:171)
      4. org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
      5. org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
      6. org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
      6 frames
    5. Java RT
      ThreadPoolExecutor$Worker.run
      1. java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      2. java.util.concurrent.FutureTask.run(FutureTask.java:266)
      3. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      4. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      4 frames
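The Apache Avro frames above show the exception firing while DataFileWriter.create re-serializes the schema into the file header (Schema.toString, toJson, writeNameRef): a named type that has already been emitted is written as a bare name reference, and registering the name appears to fail when the same full name arrives via a second, distinct definition. A hedged Python sketch of that emit-once-then-reference behavior (illustrative identifiers, not the Avro API):

```python
import json

# Sketch of re-emitting a parsed schema as JSON: the first occurrence
# of a named type is written in full, later occurrences as a bare
# name reference (the writeNameRef frame in the trace).

def to_json(schema, seen=None):
    if seen is None:
        seen = set()
    if isinstance(schema, str):                # primitive or name ref
        return schema
    if isinstance(schema, list):               # union
        return [to_json(b, seen) for b in schema]
    if schema.get("type") == "record":
        name = schema["name"]
        if name in seen:
            return name                        # emit name ref only
        seen.add(name)
        return {"type": "record", "name": name,
                "fields": [{"name": f["name"],
                            "type": to_json(f["type"], seen)}
                           for f in schema["fields"]]}
    return schema

inner = {"type": "record", "name": "Inner",
         "fields": [{"name": "x", "type": "int"}]}
outer = {"type": "record", "name": "Outer", "fields": [
    {"name": "a", "type": inner},
    {"name": "b", "type": inner},              # same definition, reused
]}
print(json.dumps(to_json(outer)))
# field "b" is emitted as the reference "Inner", not a second definition
```

Read this way, the trace suggests the schema handed to the HDFS sink held two distinct definitions named io.confluent.connect.avro.Union (the wrapper record the Confluent Avro converter uses for union values), which collide at header-write time.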