Searched Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace, including the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via DataStax JIRA by JV, 1 year ago
Partition key predicate must include all partition key columns. Missing columns: environment,timeslicesecond
java.lang.UnsupportedOperationException: Partition key predicate must include all partition key columns. Missing columns: environment,timeslicesecond
    at com.datastax.spark.connector.rdd.partitioner.CassandraRDDPartitioner.containsPartitionKey(CassandraRDDPartitioner.scala:112)
    at com.datastax.spark.connector.rdd.partitioner.CassandraRDDPartitioner.partitions(CassandraRDDPartitioner.scala:130)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:145)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:193)
    at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207)
    at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
    at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904)
    at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385)
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1315)
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1378)
    at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:178)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:402)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:363)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:371)
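
The exception originates in the Spark Cassandra Connector's partitioner: when a WHERE predicate (written directly on the RDD, or pushed down from a DataFrame filter as in the trace above) restricts some, but not all, columns of a composite partition key, CassandraRDDPartitioner cannot translate the predicate into token ranges and throws the UnsupportedOperationException shown. Below is a minimal sketch of both the failing and the working pattern, assuming a hypothetical table ks.metrics whose partition key is ((environment, timeslicesecond, metric)); only the column names environment and timeslicesecond come from the error message, while the keyspace, table name, third column, and sample values are illustrative.

    import org.apache.spark.{SparkConf, SparkContext}
    import com.datastax.spark.connector._

    object PartitionKeyPredicateDemo {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("partition-key-predicate-demo")
          .set("spark.cassandra.connection.host", "127.0.0.1") // assumption: local Cassandra node

        val sc = new SparkContext(conf)

        // FAILS: the predicate names only one of the three partition key columns,
        // so the connector cannot resolve it to token ranges and throws:
        //   java.lang.UnsupportedOperationException: Partition key predicate must
        //   include all partition key columns. Missing columns: environment,timeslicesecond
        // val bad = sc.cassandraTable("ks", "metrics").where("metric = ?", "cpu")
        // bad.collect()

        // WORKS: every partition key column is restricted by an equality predicate,
        // so the scan can target a single partition.
        val good = sc.cassandraTable("ks", "metrics")
          .where("environment = ? AND timeslicesecond = ? AND metric = ?",
                 "prod", 1457000000L, "cpu")
        good.collect().foreach(println)

        sc.stop()
      }
    }

If the missing partition key columns are not known at query time, the usual workarounds are to drop the partition-key predicate altogether and filter on the Spark side (accepting a full table scan), or to enumerate the known key combinations as separate fully-keyed queries; which trade-off is acceptable depends on table size.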