
Solutions on the web

via GitHub by samelamin, 1 year ago
Illegal character in: ordercontainer.paymentinfo

via rittmanmead.com by Unknown author, 1 year ago

via GitHub by denis-yuen, 10 months ago
Illegal character in: draft-3.dev1

via Stack Overflow by Amritendu Panda, 11 months ago
Illegal initial character: {"type":"record","name":"WS_MESSAGES","fields":[{"name":"message_id","type":"string"},{"name":"action","type":"string"},{"name":"status","type":"string"},{"name":"request","type":"string"},{"name":"response","type":"string"}]}

via Oracle Community by DamianoA, 1 year ago
Illegal character in: SYS$IndexStatsLease

via GitHub by denis-yuen, 2 years ago
Illegal character in: draft-3.dev1
org.apache.avro.SchemaParseException: Illegal character in: ordercontainer.paymentinfo
	at org.apache.avro.Schema.validateName(Schema.java:1083)
	at org.apache.avro.Schema.access$200(Schema.java:79)
	at org.apache.avro.Schema$Field.<init>(Schema.java:372)
	at org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2124)
	at org.apache.avro.SchemaBuilder$FieldBuilder.completeField(SchemaBuilder.java:2120)
	at org.apache.avro.SchemaBuilder$FieldBuilder.access$5200(SchemaBuilder.java:2034)
	at org.apache.avro.SchemaBuilder$FieldDefault.noDefault(SchemaBuilder.java:2146)
	at com.databricks.spark.avro.SchemaConverters$$anonfun$convertStructToAvro$1.apply(SchemaConverters.scala:110)
	at com.databricks.spark.avro.SchemaConverters$$anonfun$convertStructToAvro$1.apply(SchemaConverters.scala:105)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
	at com.databricks.spark.avro.SchemaConverters$.convertStructToAvro(SchemaConverters.scala:105)
	at com.databricks.spark.avro.AvroRelation.prepareJobForWrite(AvroRelation.scala:83)
	at org.apache.spark.sql.execution.datasources.BaseWriterContainer.driverSideSetup(WriterContainer.scala:103)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:147)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
	at com.databricks.spark.redshift.RedshiftWriter.unloadData(RedshiftWriter.scala:278)
	at com.databricks.spark.redshift.RedshiftWriter.saveToRedshift(RedshiftWriter.scala:346)
	at com.databricks.spark.redshift.DefaultSource.createRelation(DefaultSource.scala:106)
	at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
	at undefined.Importer$.main(Importer.scala:46)
	at undefined.Importer.main(Importer.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
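
For context: Avro rejects schema names that do not start with a letter or underscore and contain only letters, digits, and underscores, so the dotted column name "ordercontainer.paymentinfo" fails inside Schema.validateName when spark-avro converts the DataFrame schema during the Redshift write. A minimal Scala sketch of one possible workaround is shown below; it renames offending columns before the write. The sanitizeForAvro helper and its regex are illustrative assumptions, not part of Spark, spark-avro, or spark-redshift.

import org.apache.spark.sql.DataFrame

// Hypothetical helper: replace every character that Avro disallows in a name
// (anything other than letters, digits, and underscores) with an underscore.
// Note that Avro additionally requires the first character to be a letter or
// underscore; adjust the mapping if your column names can start with a digit.
def sanitizeForAvro(df: DataFrame): DataFrame =
  df.columns.foldLeft(df) { (current, name) =>
    val safe = name.replaceAll("[^A-Za-z0-9_]", "_")
    if (safe == name) current else current.withColumnRenamed(name, safe)
  }

// Example usage against a hypothetical DataFrame; format and options are
// placeholders for whatever Redshift/Avro writer configuration you already use:
// sanitizeForAvro(ordersDf).write
//   .format("com.databricks.spark.redshift")
//   .option("dbtable", "orders")
//   .save()

Renaming up front keeps the original write path unchanged; the Avro schema builder then only ever sees names it considers legal, such as ordercontainer_paymentinfo.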