Recommended solutions based on your search

Solutions on the web

via GitHub by blbradley, 1 year ago
org.apache.avro.AvroRuntimeException: Unknown datum type io.confluent.kafka.serializers.NonRecordContainer: io.confluent.kafka.serializers.NonRecordContainer@31d89133
via GitHub by PrithivirajDamodaran, 1 year ago
java.lang.ClassCastException: io.confluent.kafka.serializers.NonRecordContainer cannot be cast to java.lang.CharSequence
via Stack Overflow by user1591487, 2 years ago
java.lang.RuntimeException: Datum 1980-01-01 00:00:00.000 is not in union ["null","long"]
via Stack Overflow by Murali Rao, 2 years ago
java.lang.RuntimeException: Unsupported type in record:class java.lang.String
via databasefaq.com by Unknown author, 2 years ago
java.lang.RuntimeException: Unsupported type in record:class java.lang.String
via Google Groups by David, 10 months ago
java.lang.ClassCastException: io.confluent.kafka.serializers.NonRecordContainer cannot be cast to org.apache.avro.generic.IndexedRecord
org.apache.avro.AvroRuntimeException: Unknown datum type io.confluent.kafka.serializers.NonRecordContainer: io.confluent.kafka.serializers.NonRecordContainer@31d89133
	at org.apache.avro.generic.GenericData.getSchemaName(GenericData.java:636)
	at org.apache.avro.generic.GenericData.resolveUnion(GenericData.java:601)
	at org.apache.avro.generic.GenericDatumWriter.resolveUnion(GenericDatumWriter.java:151)
	at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:71)
	at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:58)
	at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:290)
	at io.confluent.connect.hdfs.avro.AvroRecordWriterProvider$1.write(AvroRecordWriterProvider.java:64)
	at io.confluent.connect.hdfs.avro.AvroRecordWriterProvider$1.write(AvroRecordWriterProvider.java:59)
	at io.confluent.connect.hdfs.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:487)
	at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:264)
	at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:234)
	at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:91)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:381)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:227)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
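
For context, the top frames show GenericData.getSchemaName failing to map the datum's Java class to any branch of the target union schema, which is what produces the "Unknown datum type" message. Below is a minimal, hedged sketch of that mechanism using only Apache Avro; the WrapperDatum class is a hypothetical stand-in for io.confluent.kafka.serializers.NonRecordContainer, not the Confluent class itself, and the ["null","string"] union is an illustrative schema, not the one from the failing connector.

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class UnknownDatumTypeSketch {

    // Hypothetical stand-in for a wrapper object (like NonRecordContainer) whose class
    // Avro's GenericData does not recognize as any supported schema type.
    static final class WrapperDatum {
        final Object value;
        WrapperDatum(Object value) { this.value = value; }
    }

    public static void main(String[] args) throws Exception {
        // A union schema; resolveUnion must match the datum's class to one of its branches.
        Schema union = new Schema.Parser().parse("[\"null\", \"string\"]");

        GenericDatumWriter<Object> writer = new GenericDatumWriter<>(union);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);

        // WrapperDatum matches no union branch, so GenericData.getSchemaName throws
        // org.apache.avro.AvroRuntimeException: Unknown datum type ...WrapperDatum: ...
        writer.write(new WrapperDatum("hello"), encoder);
        encoder.flush();
    }
}

In the stack trace above, the same thing happens inside the HDFS sink's AvroRecordWriterProvider: the datum handed to GenericDatumWriter.write is a NonRecordContainer rather than a value Avro can resolve against the file's union schema.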