java.io.IOException: Exception during preparation of SELECT "uuid", "person", "age" FROM "test"."users" WHERE token("uuid") > ? AND token("uuid") <= ? AND name = Jane ALLOW FILTERING: line 1:232 no viable alternative at input 'ALLOW' (...<= ? AND name = [Jane] ALLOW...)

Stack Overflow | Mnemosyne | 5 months ago
  1.

    Querying cassandra error no viable alternative at input 'ALLOW'

    Stack Overflow | 5 months ago | Mnemosyne
    java.io.IOException: Exception during preparation of SELECT "uuid", "person", "age" FROM "test"."users" WHERE token("uuid") > ? AND token("uuid") <= ? AND name = Jane ALLOW FILTERING: line 1:232 no viable alternative at input 'ALLOW' (...<= ? AND name = [Jane] ALLOW...)
  2.

    With connector 1.3.1 (released yesterday), Spark 1.3.1, and Java 8, I create an RDD with:

    JavaRDD<IndividualBean> javaBeans = javaFunctions(context).cassandraTable(keyspaceName, tableName, factory).select(columnNames);

    (keyspaceName, tableName, and columnNames are non-null.) The generated request is:

    {noformat}
    SELECT FROM "geneticio"."temp_table_049882206ccc11e597070023247237a6" WHERE token("internal_index") > ? AND token("internal_index") <= ? ALLOW FILTERING
    {noformat}

    Here the column names are missing between the SELECT and FROM clauses, and I get the following exception:

    {noformat}
    ERROR and message: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 5, Julien-Spectre): java.io.IOException: Exception during preparation of SELECT FROM "geneticio"."temp_table_427eba306cb511e58bfae82aea3ab28a" WHERE token("internal_index") > ? AND token("internal_index") <= ? ALLOW FILTERING: line 1:8 no viable alternative at input 'FROM'
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:188)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:202)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:396)
        at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
        at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:248)
        at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:172)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:79)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
        at org.apache.spark.rdd.PartitionwiseSampledRDD.compute(PartitionwiseSampledRDD.scala:68)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
    {noformat}

    (The keyspace, table, and columns exist, and the table has proper entries in it.) The table is as such:

    {noformat}
    CREATE TABLE geneticio.temp_table_678524306cd911e587e30023247237a6 (
        internal_index bigint,
        internal_score double,
        id uuid,
        x0 double,
        y0 double,
        x1 double,
        y1 double,
        PRIMARY KEY (internal_index)
    )
    {noformat}

    Any idea?

    DataStax JIRA | 1 year ago | julien sebrien
    java.io.IOException: Exception during preparation of SELECT FROM "geneticio"."temp_table_427eba306cb511e58bfae82aea3ab28a" WHERE token("internal_index") > ? AND token("internal_index") <= ? ALLOW FILTERING: line 1:8 no viable alternative at input 'FROM'
  3.

    GitHub comment 179#55020117

    GitHub | 2 years ago | kuhnen
    java.io.IOException: Exception during preparation of SELECT "key", "column1", "value" FROM "market"."eventClick" WHERE token("key") > -760284890001294835 AND token("key") <= -722310914540094399 AND column1 >= ? and column1 < ? ALLOW FILTERING: null
  4.

    Where arguments not formatted correctly.

    GitHub | 2 years ago | mikedanese
    java.io.IOException: Exception during preparation of SELECT "tracking_id", "time", "log" FROM "analytics"."logs" WHERE token("tracking_id") > 5042420539217431504 AND token("tracking_id") <= 5067465046277222981 AND time > ? and time < ? ALLOW FILTERING: null
  5.

    GitHub comment 179#53092922

    GitHub | 2 years ago | mikedanese
    java.io.IOException: Exception during preparation of SELECT "tracking_id", "time", "log" FROM "analytics"."logs" WHERE token("tracking_id") > -1151842384040176516 AND token("tracking_id") <= -1104745493821555775 AND time > ? and time < ? ALLOW FILTERING: assertion failed: List(package lang, package lang)
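
Several of the reports above share one symptom: the statement the connector prepared was not valid CQL by the time it reached the server. In the DataStax JIRA report, the statement reads `SELECT FROM ...` with no projection at all, which points to the column list being empty when the statement was built. A minimal fail-fast sketch for that case (plain Java, no Spark dependency; `SelectBuilder` and `buildSelect` are hypothetical names, not connector API):

```java
import java.util.List;

// Hypothetical sketch: build the projection part of a CQL SELECT and fail fast
// on an empty column list instead of emitting "SELECT FROM ..." (a parse error).
public class SelectBuilder {
    static String buildSelect(String keyspace, String table, List<String> columns) {
        if (columns == null || columns.isEmpty()) {
            throw new IllegalArgumentException("column list must not be empty");
        }
        // Double-quote each identifier, as the generated statements above do.
        StringBuilder sb = new StringBuilder("SELECT ");
        for (int i = 0; i < columns.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append('"').append(columns.get(i)).append('"');
        }
        sb.append(" FROM \"").append(keyspace).append("\".\"").append(table).append('"');
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildSelect("geneticio", "users", List.of("internal_index", "id")));
    }
}
```

Checking `columnNames` before calling `select(columnNames)` turns the late server-side `no viable alternative at input 'FROM'` parse error into an immediate, descriptive failure on the driver side.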


    Root Cause Analysis

    1. java.io.IOException

      Exception during preparation of SELECT "uuid", "person", "age" FROM "test"."users" WHERE token("uuid") > ? AND token("uuid") <= ? AND name = Jane ALLOW FILTERING: line 1:232 no viable alternative at input 'ALLOW' (...<= ? AND name = [Jane] ALLOW...)

      at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement()
    2. spark-cassandra-connector
      CassandraTableScanRDD$$anonfun$18.apply
      1. com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:288)
      2. com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:302)
      3. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$18.apply(CassandraTableScanRDD.scala:328)
      4. com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$18.apply(CassandraTableScanRDD.scala:328)
      4 frames
    3. Scala
      Iterator$$anon$12.hasNext
      1. scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
      2. scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
      2 frames
    4. spark-cassandra-connector
      CountingIterator.hasNext
      1. com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
      1 frame
    5. Scala
      Iterator$class.foreach
      1. scala.collection.Iterator$class.foreach(Iterator.scala:893)
      1 frame
    6. spark-cassandra-connector
      CountingIterator.foreach
      1. com.datastax.spark.connector.util.CountingIterator.foreach(CountingIterator.scala:4)
      1 frame
    7. Scala
      Growable$class.$plus$plus$eq
      1. scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
      1 frame
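
The root-cause exception above boils down to invalid CQL: `name = Jane` puts a bare word where CQL requires a single-quoted string literal (`name = 'Jane'`), so the grammar fails at the following `ALLOW` token. With the Spark Cassandra Connector the safer route is to bind the value, e.g. `.where("name = ?", "Jane")`, so the driver handles quoting and typing. For illustration only, this is how CQL string-literal quoting works (`quoteCql` is a hypothetical helper, not a connector or driver API):

```java
// Illustrates why `name = Jane` breaks CQL parsing: string literals must be
// single-quoted, and an embedded single quote is escaped by doubling it ('').
public class CqlLiterals {
    // Hypothetical helper: render a Java string as a CQL string literal.
    static String quoteCql(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        String broken = "SELECT * FROM test.users WHERE name = " + "Jane";           // invalid CQL
        String fixed  = "SELECT * FROM test.users WHERE name = " + quoteCql("Jane"); // valid CQL
        System.out.println(broken);
        System.out.println(fixed);
        System.out.println(quoteCql("O'Brien")); // embedded quote is doubled
    }
}
```

Binding is still preferable to hand-quoting, since it sidesteps injection and non-string types entirely, but the helper shows exactly what the unquoted `Jane` was missing.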