java.lang.IllegalArgumentException: Avro schema must be a record.

Google Groups | Grigoriy Roghkov | 4 months ago
  1. Kafka Connect (HDFS connector)

    Google Groups | 4 months ago | Grigoriy Roghkov
    java.lang.IllegalArgumentException: Avro schema must be a record.
  2. Avro: convert UNION schema to RECORD schema

    Stack Overflow | 5 months ago | Vitaliy Kotlyarenko
    java.lang.IllegalArgumentException: Avro schema must be a record.
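
    Since parquet-avro only accepts a RECORD at the top level, the usual answer to the question above is to wrap the union in a single-field record before writing. A minimal sketch with the plain Avro SchemaBuilder API (the "UnionWrapper" record name and "value" field name are illustrative, not from the original post):

    {code}
    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;

    public class UnionToRecord {
      // Wrap a non-record schema (e.g. a top-level union) in a one-field
      // record so converters that require records can accept it.
      public static Schema wrap(Schema union) {
        return SchemaBuilder.record("UnionWrapper").fields()
            .name("value").type(union).noDefault()
            .endRecord();
      }
    }
    {code}

    Each datum then has to be wrapped the same way, as a GenericRecord with a single "value" field, before it is handed to the writer.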
  3. [DISTRO-659] Does Cloudera Search support other file systems other than HDFS? - Cloudera Open Source

    cloudera.org | 1 year ago
    org.kitesdk.morphline.api.MorphlineRuntimeException: java.lang.IllegalArgumentException: Host must not be null: lustre:/user/solr/indir/sample-statuses-20120906-141433.avro
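
    The "Host must not be null" message indicates that the failing code expects an authority (host) component in the output URI, which a lustre:/... path does not carry. Plain java.net.URI shows the difference (an illustration of URI parsing only, not the Kite morphline code itself):

    {code}
    import java.net.URI;

    public class UriHostDemo {
      public static void main(String[] args) throws Exception {
        // "lustre:/user/..." has a scheme but no "//host" authority,
        // so the parsed host is null:
        URI lustre = new URI("lustre:/user/solr/indir/sample-statuses-20120906-141433.avro");
        System.out.println(lustre.getHost()); // null

        // An HDFS URI with an authority yields a non-null host:
        URI hdfs = new URI("hdfs://namenode:8020/user/solr/indir/sample-statuses-20120906-141433.avro");
        System.out.println(hdfs.getHost()); // namenode
      }
    }
    {code}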
  4. There is a bug in either schema serialization or schema deserialization for records. The following test case fails:

    {code}
    @Test
    public void testCodec() throws UnsupportedTypeException, IOException {
      ReflectionSchemaGenerator reflectionSchemaGenerator = new ReflectionSchemaGenerator();
      Schema schema = reflectionSchemaGenerator.generate(SomeObject.class);
      Schema.parseJson(schema.toString());
    }

    private static class SomeObject {
      Record x;
      OuterRecord y;
    }

    private static class Record {
      int x;
    }

    private static class OuterRecord {
      Record rec;
    }
    {code}

    with an exception:

    {code}
    java.lang.IllegalArgumentException: Undefined schema co.cask.cdap.etl.batch.mock.DummyTest$Record
    	at co.cask.cdap.api.data.schema.Schema.resolveSchema(Schema.java:821)
    	at co.cask.cdap.api.data.schema.Schema.resolveSchema(Schema.java:809)
    	at co.cask.cdap.api.data.schema.Schema.populateRecordFields(Schema.java:772)
    	at co.cask.cdap.api.data.schema.Schema.<init>(Schema.java:394)
    	at co.cask.cdap.api.data.schema.Schema.recordOf(Schema.java:332)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.readRecord(SchemaTypeAdapter.java:218)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.read(SchemaTypeAdapter.java:109)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.readUnion(SchemaTypeAdapter.java:133)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.read(SchemaTypeAdapter.java:88)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.readInnerSchema(SchemaTypeAdapter.java:234)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.readRecord(SchemaTypeAdapter.java:214)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.read(SchemaTypeAdapter.java:109)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.read(SchemaTypeAdapter.java:62)
    	at co.cask.cdap.internal.io.SchemaTypeAdapter.read(SchemaTypeAdapter.java:48)
    	at com.google.gson.TypeAdapter.fromJson(TypeAdapter.java:256)
    	at com.google.gson.TypeAdapter.fromJson(TypeAdapter.java:269)
    	at co.cask.cdap.api.data.schema.Schema.parseJson(Schema.java:134)
    	at co.cask.cdap.etl.batch.mock.DummyTest.testCodec(DummyTest.java:34)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    	at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
    	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78)
    	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
    	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
    {code}

    Cask Community Issue Tracker | 10 months ago | Albert Shau
    java.lang.IllegalArgumentException: Undefined schema co.cask.cdap.etl.batch.mock.DummyTest$Record
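
    For comparison, plain Avro round-trips the same shape: a named record is defined once in the JSON output and later occurrences refer to it by name, which the parser resolves on read. The trace above suggests the CDAP SchemaTypeAdapter hits such a by-name reference before the definition has been registered. A sketch of the working Avro analogue (plain Avro SchemaBuilder, not the CDAP Schema API from the report):

    {code}
    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;

    public class NamedTypeRoundTrip {
      public static void main(String[] args) {
        // "Record" is defined once...
        Schema record = SchemaBuilder.record("Record").fields()
            .requiredInt("x").endRecord();
        // ...and referenced a second time from inside "OuterRecord".
        Schema outer = SchemaBuilder.record("OuterRecord").fields()
            .name("rec").type(record).noDefault().endRecord();
        Schema top = SchemaBuilder.record("SomeObject").fields()
            .name("x").type(record).noDefault()
            .name("y").type(outer).noDefault()
            .endRecord();

        // Round trip through JSON: the second "Record" occurrence is
        // serialized as a by-name reference and resolved while parsing.
        Schema parsed = new Schema.Parser().parse(top.toString());
        System.out.println(parsed.equals(top)); // true
      }
    }
    {code}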
  5. If you set a value for the row key of the TableSource and then delete it, the key stays in the pipeline config JSON as an empty string and breaks the pipeline when it is run later:

    {code}
    2016-05-11 21:28:34,851 - ERROR [LocalJobRunner Map Task Executor #1:c.c.c.i.a.r.b.MapperWrapper@108] - Failed to initialize mapper with job=phase-1, namespaceId=default, applicationId=CombineMbrAndAddr1, program=phase-1, runid=fb0e692a-17f9-11e6-88c5-0000007b3b5b
    java.lang.IllegalArgumentException: Row field must be present in the schema.
    	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92) ~[com.google.guava.guava-13.0.1.jar:na]
    	at co.cask.hydrator.common.RowRecordTransformer.<init>(RowRecordTransformer.java:39) ~[na:na]
    	at co.cask.hydrator.plugin.batch.source.TableSource.initialize(TableSource.java:78) ~[na:na]
    	at co.cask.hydrator.plugin.batch.source.TableSource.initialize(TableSource.java:42) ~[na:na]
    	at co.cask.cdap.etl.batch.TransformExecutorFactory.getInitializedTransformation(TransformExecutorFactory.java:89) ~[cdap-etl-batch-3.4.0.jar:na]
    	at co.cask.cdap.etl.batch.TransformExecutorFactory.getTransformation(TransformExecutorFactory.java:51) ~[cdap-etl-batch-3.4.0.jar:na]
    	at co.cask.cdap.etl.batch.mapreduce.MapReduceTransformExecutorFactory.getTransformation(MapReduceTransformExecutorFactory.java:105) ~[cdap-etl-batch-3.4.0.jar:na]
    	at co.cask.cdap.etl.batch.TransformExecutorFactory.create(TransformExecutorFactory.java:68) ~[cdap-etl-batch-3.4.0.jar:na]
    	at co.cask.cdap.etl.batch.mapreduce.TransformRunner.<init>(TransformRunner.java:99) ~[cdap-etl-batch-3.4.0.jar:na]
    	at co.cask.cdap.etl.batch.mapreduce.ETLMapReduce$ETLMapper.initialize(ETLMapReduce.java:293) ~[cdap-etl-batch-3.4.0.jar:na]
    	at co.cask.cdap.etl.batch.mapreduce.ETLMapReduce$ETLMapper.initialize(ETLMapReduce.java:279) ~[cdap-etl-batch-3.4.0.jar:na]
    	at co.cask.cdap.internal.app.runtime.batch.MapperWrapper.run(MapperWrapper.java:106) ~[co.cask.cdap.cdap-app-fabric-3.4.0.jar:na]
    	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) [org.apache.hadoop.hadoop-mapreduce-client-core-2.3.0.jar:na]
    	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) [org.apache.hadoop.hadoop-mapreduce-client-core-2.3.0.jar:na]
    	at org.apache.hadoop.mapred.LocalJobRunnerWithFix$Job$MapTaskRunnable.run(LocalJobRunnerWithFix.java:243) [co.cask.cdap.cdap-app-fabric-3.4.0.jar:na]
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_79]
    	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_79]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_79]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_79]
    	at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
    {code}

    The offending source config, with the leftover empty "schema.row.field":

    {code}
    "plugin": {
      "name": "Table",
      "type": "batchsource",
      "label": "Load Mbr Table",
      "artifact": {
        "name": "core-plugins",
        "version": "1.4.0-SNAPSHOT",
        "scope": "SYSTEM"
      },
      "properties": {
        "name": "mbr",
        "schema": "...",
        "schema.row.field": ""
      }
    }
    {code}

    Cask Community Issue Tracker | 7 months ago | Russ Savage
    java.lang.IllegalArgumentException: Row field must be present in the schema.
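
    The top frames point at a Preconditions.checkArgument in the RowRecordTransformer constructor. A hedged re-creation of the kind of guard that produces this message (the signature and body below are guesses from the message and stack frame, not the Hydrator source; an Avro Schema stands in for the CDAP one):

    {code}
    import com.google.common.base.Preconditions;
    import org.apache.avro.Schema;

    public class RowFieldGuard {
      // The configured row field must name a field that actually exists
      // in the schema; an empty string left in the config fails this.
      static void checkRowField(Schema schema, String rowField) {
        Preconditions.checkArgument(
            rowField != null && schema.getField(rowField) != null,
            "Row field must be present in the schema.");
      }
    }
    {code}

    Removing the leftover empty "schema.row.field" entry from the pipeline config (or pointing it at a real column) is the apparent workaround until the key is deleted properly.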


    Root Cause Analysis

    1. java.lang.IllegalArgumentException

      Avro schema must be a record.

      at org.apache.parquet.avro.AvroSchemaConverter.convert()
    2. org.apache.parquet
      AvroParquetWriter.<init>
      1. org.apache.parquet.avro.AvroSchemaConverter.convert(AvroSchemaConverter.java:89)
      2. org.apache.parquet.avro.AvroParquetWriter.writeSupport(AvroParquetWriter.java:103)
      3. org.apache.parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:47)
      3 frames
    3. io.confluent.connect
      HdfsSinkTask.put
      1. io.confluent.connect.hdfs.parquet.ParquetRecordWriterProvider.getRecordWriter(ParquetRecordWriterProvider.java:51)
      2. io.confluent.connect.hdfs.TopicPartitionWriter.getWriter(TopicPartitionWriter.java:415)
      3. io.confluent.connect.hdfs.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:486)
      4. io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:264)
      5. io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:234)
      6. io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:90)
      6 frames
    4. org.apache.kafka
      ShutdownableThread.run
      1. org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:280)
      2. org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:176)
      3. org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
      4. org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
      5. org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
      5 frames
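
    Taken together, the trace says the HDFS connector's Parquet writer hands the converted Avro schema to parquet-avro, whose converter only accepts RECORD schemas. A paraphrase of that guard plus a defensive wrap (the requireRecord/wrapIfNeeded helpers and the "Envelope" name are illustrative sketches, not connector code):

    {code}
    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;

    public class RecordSchemaCheck {
      // Paraphrase of the check in AvroSchemaConverter.convert(): anything
      // other than a RECORD (e.g. a primitive or union produced from a
      // schemaless or non-struct Kafka value) is rejected outright.
      static void requireRecord(Schema avroSchema) {
        if (avroSchema.getType() != Schema.Type.RECORD) {
          throw new IllegalArgumentException("Avro schema must be a record.");
        }
      }

      // One way out: ensure the Connect value schema is a struct, or wrap
      // the non-record schema in a single-field record before writing.
      static Schema wrapIfNeeded(Schema avroSchema) {
        if (avroSchema.getType() == Schema.Type.RECORD) {
          return avroSchema;
        }
        return SchemaBuilder.record("Envelope").fields()
            .name("value").type(avroSchema).noDefault()
            .endRecord();
      }
    }
    {code}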