Solutions on the web

via DataStax JIRA by Yana Kadiyska, 1 year ago
assertion failed: No plan for CassandraRelation TableDef(yana_test,test1,ArrayBuffer(ColumnDef(customer_id,PartitionKeyColumn,IntType,false)),ArrayBuffer(ColumnDef(epoch,ClusteringColumn(0),BigIntType,false), ColumnDef(uri,ClusteringColumn(1),VarCharType,false)),ArrayBuffer(ColumnDef(browser,RegularColumn,VarCharType,false))), None, None
via GitHub by bvenners, 2 years ago
assertion failed: unreachable condition - maybe time went backwards?!
via GitHub by kelvl, 2 years ago
assertion failed: unreachable condition - maybe time went backwards?!
java.lang.AssertionError: assertion failed: No plan for CassandraRelation TableDef(yana_test,test1,ArrayBuffer(ColumnDef(customer_id,PartitionKeyColumn,IntType,false)),ArrayBuffer(ColumnDef(epoch,ClusteringColumn(0),BigIntType,false), ColumnDef(uri,ClusteringColumn(1),VarCharType,false)),ArrayBuffer(ColumnDef(browser,RegularColumn,VarCharType,false))), None, None
	at scala.Predef$.assert(Predef.scala:179)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
	at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:300)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
	at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$$anonfun$19.apply(SparkStrategies.scala:316)
	at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$$anonfun$19.apply(SparkStrategies.scala:316)
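
The assertion comes from Spark Catalyst's QueryPlanner: it fires when none of the registered strategies can turn the CassandraRelation logical node into a physical plan. Below is a minimal sketch of the kind of query that has been reported to produce this error, assuming Spark 1.x with spark-cassandra-connector 1.x and its CassandraSQLContext on the classpath; the keyspace (yana_test) and table (test1) are taken from the TableDef in the trace, while the master, connection host, and overall setup are illustrative assumptions, not the original reporter's code.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.cassandra.CassandraSQLContext

object CassandraRelationPlanRepro {
  def main(args: Array[String]): Unit = {
    // Assumed setup: Spark 1.x driver with spark-cassandra-connector 1.x available.
    val conf = new SparkConf()
      .setAppName("cassandra-relation-plan-repro")
      .setMaster("local[2]")                                // assumption: local run
      .set("spark.cassandra.connection.host", "127.0.0.1")  // assumption: local Cassandra node
    val sc = new SparkContext(conf)

    // CassandraSQLContext registers the planner strategies that know how to turn a
    // CassandraRelation node into a physical scan of the Cassandra table.
    val csc = new CassandraSQLContext(sc)
    csc.setKeyspace("yana_test")

    // Schema taken from the TableDef in the stack trace:
    //   customer_id int (partition key), epoch bigint + uri varchar (clustering), browser varchar
    val result = csc.sql("SELECT customer_id, epoch, uri, browser FROM test1")
    result.collect().foreach(println)

    sc.stop()
  }
}

One plausible way to hit the assertion is for the query to be planned by a context that lacks the Cassandra-aware strategies (for example a plain SQLContext or HiveContext holding the same logical plan): the CassandraRelation node then reaches QueryPlanner.plan with no strategy able to handle it, which is exactly the failure point shown in the trace above.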