org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 4 times, most recent failure: Lost task 0.3 in stage 16.0 (TID 25, 127.0.0.1): java.io.IOException: Exception during preparation of SELECT "id", "series", "rollup_minutes", "period_stamp", "event_type", "value" FROM "linkcurrent"."time_series_counters_2015_09" WHERE "id" = ? AND "series" = ? AND "rollup_minutes" = ? ALLOW FILTERING: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.

DataStax JIRA | Alex Liu | 1 year ago
  1.

    SPARKC-260 reported this issue when pushing down VARINT column filters.
    {code}
    CREATE TABLE linkcurrent.time_series_counters_2015_09 (
        id int,
        series varint,
        rollup_minutes varint,
        period_stamp timestamp,
        event_type varint,
        value counter,
        PRIMARY KEY ((id, series, rollup_minutes), period_stamp, event_type)
    )

    >>> df = sqlContext.read.format("org.apache.spark.sql.cassandra").load(table="time_series_counters_2015_09", keyspace="linkcurrent")
    >>> test = df.filter("id = 1 AND series = 0 AND rollup_minutes = 60")
    >>> test.take(1)

    WARN 2015-10-06 21:19:23 org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 16.0 (TID 22, 127.0.0.1): java.io.IOException: Exception during preparation of SELECT "id", "series", "rollup_minutes", "period_stamp", "event_type", "value" FROM "linkcurrent"."time_series_counters_2015_09" WHERE "id" = ? AND "series" = ? AND "rollup_minutes" = ? ALLOW FILTERING: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:188)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:202)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:207)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:42)
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:40)
        at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$$anonfun$convertPF$15.applyOrElse(TypeConverter.scala:354)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
        at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:352)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
        at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.convert(TypeConverter.scala:352)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$26.applyOrElse(TypeConverter.scala:702)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:695)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:695)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:181)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:180)
        at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:180)
        ... 18 more
    ERROR 2015-10-06 21:19:23 org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 16.0 failed 4 times; aborting job
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/sebastianestevez/Documents/dse-4.8.0/resources/spark/python/pyspark/sql/dataframe.py", line 307, in take
        return self.limit(num).collect()
      File "/Users/sebastianestevez/Documents/dse-4.8.0/resources/spark/python/pyspark/sql/dataframe.py", line 281, in collect
        port = self._sc._jvm.PythonRDD.collectAndServe(self._jdf.javaToPython().rdd())
      File "/Users/sebastianestevez/Documents/dse-4.8.0/resources/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
      File "/Users/sebastianestevez/Documents/dse-4.8.0/resources/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
    : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 4 times, most recent failure: Lost task 0.3 in stage 16.0 (TID 25, 127.0.0.1): java.io.IOException: Exception during preparation of SELECT "id", "series", "rollup_minutes", "period_stamp", "event_type", "value" FROM "linkcurrent"."time_series_counters_2015_09" WHERE "id" = ? AND "series" = ? AND "rollup_minutes" = ? ALLOW FILTERING: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:188)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:202)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:207)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:42)
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:40)
        at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$$anonfun$convertPF$15.applyOrElse(TypeConverter.scala:354)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
        at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:352)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
        at com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.convert(TypeConverter.scala:352)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$26.applyOrElse(TypeConverter.scala:702)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:695)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:695)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:181)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:180)
        at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:180)
        ... 18 more
    Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    {code}

    DataStax JIRA | 1 year ago | Alex Liu
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 16.0 failed 4 times, most recent failure: Lost task 0.3 in stage 16.0 (TID 25, 127.0.0.1): java.io.IOException: Exception during preparation of SELECT "id", "series", "rollup_minutes", "period_stamp", "event_type", "value" FROM "linkcurrent"."time_series_counters_2015_09" WHERE "id" = ? AND "series" = ? AND "rollup_minutes" = ? ALLOW FILTERING: Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.
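
    The trigger here is that Spark SQL hands the pushed-down filter values for the varint columns to the connector as Catalyst Decimal objects, and the connector's JavaBigIntegerConverter has no case for that type. Until a connector release containing the SPARKC-260 fix is available, one possible workaround is the connector's pluggable converter registry. A minimal sketch, assuming a 1.4.x-era connector (the converter object name is ours, not part of the connector):
    {code:java}
    import java.math.BigInteger

    import scala.reflect.runtime.universe._

    import com.datastax.spark.connector.types.TypeConverter
    import org.apache.spark.sql.types.Decimal

    // Hypothetical workaround converter: teaches the connector to turn
    // Catalyst's Decimal into the java.math.BigInteger expected for varint.
    object DecimalToBigIntegerConverter extends TypeConverter[BigInteger] {
      def targetTypeTag = typeTag[BigInteger]
      def convertPF = {
        // toJavaBigDecimal yields a java.math.BigDecimal; toBigIntegerExact
        // throws ArithmeticException if the value has a fractional part.
        case d: Decimal => d.toJavaBigDecimal.toBigIntegerExact
      }
    }

    TypeConverter.registerConverter(DecimalToBigIntegerConverter)
    {code}
    Registration is per JVM, so in a real job it would have to run on the executors as well as the driver, not only in the shell.
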
  2.

    My Cassandra table is:
    {code:java}
    CREATE TABLE keyspace.wish_counter (
        wish_date date,
        wish_published_time timeuuid,
        wish_counter_value counter,
        PRIMARY KEY (wish_date, wish_published_time)
    ) WITH CLUSTERING ORDER BY (wish_published_time ASC)
    {code}
    I'm loading data from Cassandra into a class 'WishCountTable':
    {code:java}
    class WishCountTable extends Serializable {
      var wish_date: DateTime = new DateTime(0)
      var wish_published_time: UUID = new UUID(0L, 0L)
      var wish_counter_value: Long = 0L
    }
    {code}
    Everything is fine, but whenever I try to save data into Cassandra, I get an error.
    {code:java}
    saveRDD.saveToCassandra(keyspace, "wish_counter")
    {code}
    h4. ERROR:
    {code:java}
    16/03/07 19:18:08 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 5)
    com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 2016-02-21T06:00:00.000+06:00 of type class org.joda.time.DateTime to com.datastax.driver.core.LocalDate.
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:45)
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$$anonfun$convertPF$20.applyOrElse(TypeConverter.scala:447)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:437)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$.convert(TypeConverter.scala:437)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$28.applyOrElse(TypeConverter.scala:756)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:749)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:749)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1$$anonfun$convertPF$1$$anonfun$applyOrElse$1.apply$mcVI$sp(MappedToGettableDataConverter.scala:170)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1$$anonfun$convertPF$1.applyOrElse(MappedToGettableDataConverter.scala:169)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1.convert(MappedToGettableDataConverter.scala:18)
        at com.datastax.spark.connector.writer.DefaultRowWriter.readColumnValues(DefaultRowWriter.scala:21)
        at com.datastax.spark.connector.writer.BoundStatementBuilder.bind(BoundStatementBuilder.scala:35)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:106)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:155)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:139)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
        at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
        at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
        at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:139)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    16/03/07 19:18:08 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
    Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 5, localhost): com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 2016-02-21T06:00:00.000+06:00 of type class org.joda.time.DateTime to com.datastax.driver.core.LocalDate.
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:45)
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$$anonfun$convertPF$20.applyOrElse(TypeConverter.scala:447)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:437)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$.convert(TypeConverter.scala:437)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$28.applyOrElse(TypeConverter.scala:756)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:749)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:749)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1$$anonfun$convertPF$1$$anonfun$applyOrElse$1.apply$mcVI$sp(MappedToGettableDataConverter.scala:170)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1$$anonfun$convertPF$1.applyOrElse(MappedToGettableDataConverter.scala:169)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1.convert(MappedToGettableDataConverter.scala:18)
        at com.datastax.spark.connector.writer.DefaultRowWriter.readColumnValues(DefaultRowWriter.scala:21)
        at com.datastax.spark.connector.writer.BoundStatementBuilder.bind(BoundStatementBuilder.scala:35)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:106)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:155)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:139)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
        at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
        at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
        at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:139)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1912)
        at com.datastax.spark.connector.RDDFunctions.saveToCassandra(RDDFunctions.scala:37)
        at org.qm.UpdateWishTable$.main(UpdateWishTable.scala:93)
        at org.qm.UpdateWishTable.main(UpdateWishTable.scala)
    Caused by: com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 2016-02-21T06:00:00.000+06:00 of type class org.joda.time.DateTime to com.datastax.driver.core.LocalDate.
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:45)
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$$anonfun$convertPF$20.applyOrElse(TypeConverter.scala:447)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:437)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$.convert(TypeConverter.scala:437)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$28.applyOrElse(TypeConverter.scala:756)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:749)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:749)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1$$anonfun$convertPF$1$$anonfun$applyOrElse$1.apply$mcVI$sp(MappedToGettableDataConverter.scala:170)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1$$anonfun$convertPF$1.applyOrElse(MappedToGettableDataConverter.scala:169)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1.convert(MappedToGettableDataConverter.scala:18)
        at com.datastax.spark.connector.writer.DefaultRowWriter.readColumnValues(DefaultRowWriter.scala:21)
        at com.datastax.spark.connector.writer.BoundStatementBuilder.bind(BoundStatementBuilder.scala:35)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:106)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:155)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:139)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
        at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
        at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
        at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:139)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    16/03/07 19:18:08 ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 6)
    com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 2016-02-28T06:00:00.000+06:00 of type class org.joda.time.DateTime to com.datastax.driver.core.LocalDate.
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:45)
        at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$$anonfun$convertPF$20.applyOrElse(TypeConverter.scala:447)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:437)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
        at com.datastax.spark.connector.types.TypeConverter$LocalDateConverter$.convert(TypeConverter.scala:437)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$28.applyOrElse(TypeConverter.scala:756)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:749)
        at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
        at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:749)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1$$anonfun$convertPF$1$$anonfun$applyOrElse$1.apply$mcVI$sp(MappedToGettableDataConverter.scala:170)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1$$anonfun$convertPF$1.applyOrElse(MappedToGettableDataConverter.scala:169)
        at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
        at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1.convert(MappedToGettableDataConverter.scala:18)
        at com.datastax.spark.connector.writer.DefaultRowWriter.readColumnValues(DefaultRowWriter.scala:21)
        at com.datastax.spark.connector.writer.BoundStatementBuilder.bind(BoundStatementBuilder.scala:35)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:106)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:155)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:139)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
        at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
        at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
        at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:139)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    {code}

    DataStax JIRA | 9 months ago | Safat Siddiqui
    com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 2016-02-21T06:00:00.000+06:00 of type class org.joda.time.DateTime to com.datastax.driver.core.LocalDate.
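
    The connector of that era shipped no converter from Joda's DateTime to the Java driver's LocalDate, which is the type backing CQL date columns, so the simplest workaround is to convert the field yourself before the save. A minimal sketch under that assumption (WishCountRow and toRow are illustrative names, not part of the reporter's code):
    {code:java}
    import java.util.UUID

    import com.datastax.driver.core.LocalDate
    import org.joda.time.DateTime

    // Mirror of the table row, using the driver's LocalDate for the
    // `date` column instead of a Joda DateTime.
    case class WishCountRow(wish_date: LocalDate,
                            wish_published_time: UUID,
                            wish_counter_value: Long)

    // Convert the Joda DateTime to a calendar date the connector can bind.
    def toRow(w: WishCountTable): WishCountRow =
      WishCountRow(
        LocalDate.fromYearMonthDay(
          w.wish_date.getYear,
          w.wish_date.getMonthOfYear,
          w.wish_date.getDayOfMonth),
        w.wish_published_time,
        w.wish_counter_value)

    // saveRDD.map(toRow).saveToCassandra(keyspace, "wish_counter")
    {code}
    Declaring wish_date as com.datastax.driver.core.LocalDate in the mapped class from the start would avoid the conversion step entirely.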

    DataStax JIRA | 9 months ago | Safat Siddiqui
    com.datastax.spark.connector.types.TypeConversionException: Cannot convert object 2016-02-28T06:00:00.000+06:00 of type class org.joda.time.DateTime to com.datastax.driver.core.LocalDate.
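    This write-path failure mirrors the read-path one: saveToCassandra is handed an org.joda.time.DateTime for a column the driver models as com.datastax.driver.core.LocalDate, and the connector's LocalDateConverter evidently has no case for joda's DateTime in this connector version. A minimal sketch of the usual fix is to convert the value yourself before the write, so the connector only ever sees a type it can bind; the keyspace, table, and column names below are placeholders, not taken from the report:

    {code}
    import com.datastax.driver.core.LocalDate
    import com.datastax.spark.connector._
    import org.apache.spark.{SparkConf, SparkContext}
    import org.joda.time.DateTime

    object SaveDates {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("save-dates"))

        // Sample rows carrying joda DateTimes, as the failing job presumably did.
        val raw = sc.parallelize(Seq(
          (1, new DateTime(2016, 2, 21, 6, 0)),
          (2, new DateTime(2016, 2, 28, 6, 0))))

        // Convert each DateTime to the driver's LocalDate before writing, so no
        // joda type ever reaches the connector's type converters.
        val writable = raw.map { case (id, dt) =>
          (id, LocalDate.fromYearMonthDay(dt.getYear, dt.getMonthOfYear, dt.getDayOfMonth))
        }

        writable.saveToCassandra("my_ks", "wish_table", SomeColumns("id", "wish_date"))
      }
    }
    {code}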
  6. 0

    Store Enum into Cassandra as Integer

    Stack Overflow | 12 months ago | davideanastasia
    com.datastax.spark.connector.types.TypeConversionException: Cannot convert object FRI of type class WeekDay$FRI$ to java.lang.Integer.
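    This is the same family of failure on the write side: the connector has no built-in converter from a Scala case-object enum (the WeekDay$FRI$ above) to java.lang.Integer. One fix, sketched below with a hypothetical WeekDay, is to model the enum with scala.Enumeration and write its Int id; alternatively the connector lets you register your own converter via com.datastax.spark.connector.types.TypeConverter.registerConverter if you would rather keep the case-object type in the RDD.

    {code}
    import com.datastax.spark.connector._
    import org.apache.spark.{SparkConf, SparkContext}

    // A scala.Enumeration version of the hypothetical WeekDay type: .id is a
    // plain Int, which the connector can bind to a Cassandra `int` column.
    object WeekDay extends Enumeration {
      val MON, TUE, WED, THU, FRI, SAT, SUN = Value
    }

    object SaveEnums {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("save-enums"))

        // Map each enum value to its Int id before the write, instead of handing
        // the connector a case object it has no converter for.
        val rows = sc.parallelize(Seq((1, WeekDay.FRI), (2, WeekDay.MON)))
          .map { case (id, day) => (id, day.id) }

        // Keyspace, table, and column names are placeholders.
        rows.saveToCassandra("my_ks", "events", SomeColumns("id", "week_day"))
      }
    }
    {code}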

    Root Cause Analysis

    1. com.datastax.spark.connector.types.TypeConversionException

      Cannot convert object 0.0 of type class org.apache.spark.sql.types.Decimal to java.math.BigInteger.

      at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply()
    2. spark-cassandra-connector
      CassandraTableScanRDD$$anonfun$9.apply
      1. com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:42)
      2. com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:40)
      3. com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$$anonfun$convertPF$15.applyOrElse(TypeConverter.scala:354)
      4. com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
      5. com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:352)
      6. com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
      7. com.datastax.spark.connector.types.TypeConverter$JavaBigIntegerConverter$.convert(TypeConverter.scala:352)
      8. com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$26.applyOrElse(TypeConverter.scala:702)
      9. com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
      10. com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:695)
      11. com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:53)
      12. com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:695)
      13. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:181)
      14. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$9.apply(CassandraTableScanRDD.scala:180)
      14 frames
    3. Scala
      TraversableLike$WithFilter.map
      1. scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
      2. scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      3. scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      4. scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
      4 frames
    4. spark-cassandra-connector
      CassandraTableScanRDD$$anonfun$13.apply
      1. com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:180)
      2. com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:202)
      3. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
      4. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
      4 frames
    5. Scala
      Iterator$$anon$13.hasNext
      1. scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      1 frame
    6. spark-cassandra-connector
      CountingIterator.hasNext
      1. com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
      1 frame
    7. Scala
      Iterator$$anon$11.hasNext
      1. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      2. scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
      3. scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
      4. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
      4 frames
    8. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:207)
      2. org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
      3. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
      4. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      5. org.apache.spark.scheduler.Task.run(Task.scala:70)
      6. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      6 frames
    9. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
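
    As for the root cause traced above: Spark SQL hands the pushed-down varint filter values to the connector as org.apache.spark.sql.types.Decimal, and JavaBigIntegerConverter (frame 3 in section 2) has no case for that type, so createStatement fails before the query is even sent. SPARKC-260 tracks the proper fix in the connector. Until an upgraded connector is available, one workaround is to keep the varint predicates out of the pushdown entirely by filtering after dropping to the RDD. A sketch against the schema from the report, at the cost of a full table scan instead of a pushed-down query:

    {code}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.{SparkConf, SparkContext}

    object VarintFilterWorkaround {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("varint-filter"))
        val sqlContext = new SQLContext(sc)

        val df = sqlContext.read.format("org.apache.spark.sql.cassandra")
          .options(Map("keyspace" -> "linkcurrent", "table" -> "time_series_counters_2015_09"))
          .load()

        // Dropping to the RDD before filtering keeps Catalyst from pushing the
        // varint predicates down to CQL, so no Decimal ever reaches the
        // connector's BigInteger converter. Varint columns surface here as
        // java.math.BigDecimal values.
        val hits = df.rdd.filter { r =>
          r.getAs[Int]("id") == 1 &&
          r.getAs[java.math.BigDecimal]("series").longValueExact == 0L &&
          r.getAs[java.math.BigDecimal]("rollup_minutes").longValueExact == 60L
        }

        hits.take(1).foreach(println)
      }
    }
    {code}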