org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 4 times, most recent failure: Lost task 0.3 in stage 16.0 (TID 25, 127.0.0.1): java.io.IOException: Exception during preparation of SELECT "id", "series", "rollup_minutes", "period_stamp", "event_type", "value" FROM "linkcurrent"."time_series_counters_2015_09" WHERE "id" = ? AND "series" = ? AND "rollup_minutes" = ? ALLOW FILTERING: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.

DataStax JIRA | Alex Liu | 2 years ago

SPARKC-260 reported this issue when pushing down VARINT column filters.

{code}
CREATE TABLE linkcurrent.time_series_counters_2015_09 (
    id int,
    series varint,
    rollup_minutes varint,
    period_stamp timestamp,
    event_type varint,
    value counter,
    PRIMARY KEY ((id, series, rollup_minutes), period_stamp, event_type)
)

>>> df = sqlContext.read.format("org.apache.spark.sql.cassandra").load(table="time_series_counters_2015_09", keyspace="linkcurrent")
>>> test = df.filter("id = 1 AND series = 0 AND rollup_minutes = 60")
>>> test.take(1)
WARN 2015-10-06 21:19:23 org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 16.0 (TID 22, 127.0.0.1): java.io.IOException: Exception during preparation of SELECT "id", "series", "rollup_minutes", "period_stamp", "event_type", "value" FROM "linkcurrent"."time_series_counters_2015_09" WHERE "id" = ? AND "series" = ? AND "rollup_minutes" = ? ALLOW FILTERING: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:188)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:202)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
	at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:207)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
	at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:42)
	at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:40)
	at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$$anonfun$convertPF$15.applyOrElse(TypeConverter.scala:354)
	at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
	at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:352)
	at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
	at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.convert(TypeConverter.scala:352)
	at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$26.applyOrElse(TypeConverter.scala:702)
	at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
	at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:695)
	at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
	at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:695)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:181)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:180)
	at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:180)
	... 18 more
ERROR 2015-10-06 21:19:23 org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 16.0 failed 4 times; aborting job
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/sebastianestevez/Documents/dse-4.8.0/resources/spark/python/pyspark/sql/dataframe.py", line 307, in take
    return self.limit(num).collect()
  File "/Users/sebastianestevez/Documents/dse-4.8.0/resources/spark/python/pyspark/sql/dataframe.py", line 281, in collect
    port = self._sc._jvm.PythonRDD.collectAndServe(self._jdf.javaToPython().rdd())
  File "/Users/sebastianestevez/Documents/dse-4.8.0/resources/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/Users/sebastianestevez/Documents/dse-4.8.0/resources/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 4 times, most recent failure: Lost task 0.3 in stage 16.0 (TID 25, 127.0.0.1): java.io.IOException: Exception during preparation of SELECT "id", "series", "rollup_minutes", "period_stamp", "event_type", "value" FROM "linkcurrent"."time_series_counters_2015_09" WHERE "id" = ? AND "series" = ? AND "rollup_minutes" = ? ALLOW FILTERING: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:188)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:202)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
	at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:207)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
	at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:42)
	at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:40)
	at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$$anonfun$convertPF$15.applyOrElse(TypeConverter.scala:354)
	at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
	at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:352)
	at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
	at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.convert(TypeConverter.scala:352)
	at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$26.applyOrElse(TypeConverter.scala:702)
	at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
	at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:695)
	at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
	at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:695)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:181)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:180)
	at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
	at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:180)
	... 18 more
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
{code}

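A practical workaround while this connector bug is unresolved is to stop the connector from pushing the varint comparisons down to Cassandra, so Spark evaluates the filter itself and never has to convert the Decimal literal to java.math.BigInteger. A minimal PySpark sketch, assuming the connector version in use honors the documented `pushdown` read option (the table and filter are the ones from the report):

{code}
>>> # Disable predicate pushdown for this read; the comparisons on the
>>> # varint columns then run inside Spark instead of being bound into
>>> # the generated CQL statement. The cost: without pushdown this read
>>> # scans the whole table rather than a single partition.
>>> df = sqlContext.read.format("org.apache.spark.sql.cassandra") \
...     .options(table="time_series_counters_2015_09",
...              keyspace="linkcurrent",
...              pushdown="false") \
...     .load()
>>> test = df.filter("id = 1 AND series = 0 AND rollup_minutes = 60")
>>> test.take(1)
{code}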

    Root Cause Analysis

    1. com.datastax.spark.connector.types.TypeConversionException

      Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.

      at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply()
    2. spark-cassandra-connector
      CassandraTableScanRDD$$anonfun$9.apply
      1. com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:42)
      2. com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:40)
      3. com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$$anonfun$convertPF$15.applyOrElse(TypeConverter.scala:354)
      4. com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
      5. com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:352)
      6. com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
      7. com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.convert(TypeConverter.scala:352)
      8. com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$26.applyOrElse(TypeConverter.scala:702)
      9. com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
      10. com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:695)
      11. com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
      12. com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:695)
      13. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:181)
      14. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:180)
      14 frames
    3. Scala
      TraversableLike$WithFilter.map
      1. scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
      2. scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      3. scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      4. scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
      4 frames
    4. spark-cassandra-connector
      CassandraTableScanRDD$$anonfun$13.apply
      1. com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:180)
      2. com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:202)
      3. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
      4. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
      4 frames
    5. Scala
      Iterator$$anon$13.hasNext
      1. scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      1 frame
    6. spark-cassandra-connector
      CountingIterator.hasNext
      1. com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
      1 frame
    7. Scala
      Iterator$$anon$11.hasNext
      1. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      2. scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
      3. scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
      4. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      4 frames
    8. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:207)
      2. org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
      3. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
      4. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      5. org.apache.spark.scheduler.Task.run(Task.scala:70)
      6. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      6 frames
    9. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
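The Decimal in the root cause does not come from the table data: the connector maps Cassandra's varint type to Spark SQL's DecimalType, so when Catalyst pushes the filter down, the literal 0 in "series = 0" reaches JavaBigIntegerConverter as an org.apache.spark.sql.types.Decimal, a type its convertPF had no case for at the time (the gap SPARKC-260 tracks). A quick way to see the mapping is to inspect the DataFrame schema; the output below is illustrative, assuming the usual varint-to-decimal(38,0) mapping of that connector line (exact precision may differ by version):

{code}
>>> # Inspect how the connector exposes the Cassandra columns to Spark SQL.
>>> # The varint columns surface as DecimalType, which is why the pushed-down
>>> # filter literal arrives as a Catalyst Decimal. (Output is illustrative.)
>>> df.printSchema()
root
 |-- id: integer (nullable = true)
 |-- series: decimal(38,0) (nullable = true)
 |-- rollup_minutes: decimal(38,0) (nullable = true)
 |-- period_stamp: timestamp (nullable = true)
 |-- event_type: decimal(38,0) (nullable = true)
 |-- value: long (nullable = true)
{code}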