Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace, including the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via apache.org by Unknown author, 2 years ago
Job aborted due to stage failure: Task 0 in stage 7212.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7212.0 (TID 138449, 208.92.52.88): java.lang.IllegalArgumentException: Column [col2] was not found in schema!
via Apache's JIRA Issue Tracker by Dominic Ricard, 1 year ago
Job aborted due to stage failure: Task 0 in stage 7212.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7212.0 (TID 138449, 208.92.52.88): java.lang.IllegalArgumentException: Column [col2] was not found in schema!
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7212.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7212.0 (TID 138449, 208.92.52.88): java.lang.IllegalArgumentException: Column [col2] was not found in schema!
    at org.apache.parquet.Preconditions.checkArgument(Preconditions.java:55)
    at org.apache.parquet.filter2.predicate.SchemaCompatibilityValidator.getColumnDescriptor(SchemaCompatibilityValidator.java:190)
    at org.apache.parquet.filter2.predicate.SchemaCompatibilityValidator.validateColumn(SchemaCompatibilityValidator.java:178)
    at org.apache.parquet.filter2.predicate.SchemaCompatibilityValidator.validateColumnFilterPredicate(SchemaCompatibilityValidator.java:160)
    at org.apache.parquet.filter2.predicate.SchemaCompatibilityValidator.visit(SchemaCompatibilityValidator.java:94)
    at org.apache.parquet.filter2.predicate.SchemaCompatibilityValidator.visit(SchemaCompatibilityValidator.java:59)
    at org.apache.parquet.filter2.predicate.Operators$Eq.accept(Operators.java:180)
    at org.apache.parquet.filter2.predicate.SchemaCompatibilityValidator.validate(SchemaCompatibilityValidator.java:64)
    at org.apache.parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:59)
    at org.apache.parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:40)
    at org.apache.parquet.filter2.compat.FilterCompat$FilterPredicateCompat.accept(FilterCompat.java:126)
    at org.apache.parquet.filter2.compat.RowGroupFilter.filterRowGroups(RowGroupFilter.java:46)
    at org.apache.parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:160)
    at org.apache.parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:140)
    at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:155)
    at org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:120)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
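The trace above fails inside Parquet's filter pushdown path: SchemaCompatibilityValidator rejects an Eq predicate on col2 because the footer of the Parquet file being read contains no col2 column, even though the merged table schema does. Below is a minimal, hypothetical Scala sketch of the kind of setup that has been reported to trigger this on Spark 1.5-era builds like the one in the trace; the paths, object name, and the newer SparkSession API are illustrative assumptions, not taken from the reports above, and recent Spark versions may simply read the missing column as null instead of failing.

// Hypothetical repro sketch (assumptions: local paths, SparkSession API, Spark 1.5-era behavior).
// The column exists in the merged/table schema but not in some files' own Parquet schema, so a
// pushed-down Eq(col2, ...) predicate fails Parquet's SchemaCompatibilityValidator on those files.
import org.apache.spark.sql.SparkSession

object Col2NotFoundSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("col2-not-found-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Older files written before col2 existed, newer files written after it was added.
    Seq(1, 2).toDF("col1").write.parquet("/tmp/tbl/part=old")             // no col2 in this file's footer
    Seq((3, "x")).toDF("col1", "col2").write.parquet("/tmp/tbl/part=new") // col2 present here

    // With schema merging the DataFrame has col2, but filtering on it can push an
    // Eq(col2, "x") predicate into files that never contained col2; on older Spark/Parquet
    // this surfaced as: IllegalArgumentException: Column [col2] was not found in schema!
    val df = spark.read.option("mergeSchema", "true").parquet("/tmp/tbl")
    df.filter($"col2" === "x").show()

    // One often-suggested workaround (verify against your Spark version): skip the
    // row-group filtering path by disabling Parquet filter pushdown.
    // spark.conf.set("spark.sql.parquet.filterPushdown", "false")

    spark.stop()
  }
}

If the sketch matches your situation, the usual remedies discussed for this class of error are disabling spark.sql.parquet.filterPushdown or rewriting the older files with the current schema; both are suggestions to validate against your Spark and Parquet versions rather than guaranteed fixes.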