java.lang.RuntimeException: [1.139] failure: ``union'' expected but identifier VIEW found

SELECT (90000 + people.index) as id, people.name.text as name, people.bio.text as bio, people.hero.src as img FROM actresses_temp LATERAL VIEW explode(results.people) p AS people
 ^

github.com | 9 months ago
Here are the best solutions we found on the Internet.
  1.

    Merge pull request #14 from fluxcapacitor/nlp · fluxcapacitor/pipeline@b6c4fd6 · GitHub

    github.com | 9 months ago
    java.lang.RuntimeException: [1.139] failure: ``union'' expected but identifier VIEW found

    SELECT (90000 + people.index) as id, people.name.text as name, people.bio.text as bio, people.hero.src as img FROM actresses_temp LATERAL VIEW explode(results.people) p AS people
     ^
  2.

    Spark SQL: Cross Join with sub-queries

    Stack Overflow | 1 year ago | Timo
    java.lang.RuntimeException: [1.67] failure: ``union'' expected but identifier CROSS found SELECT * FROM ( SELECT obj as origProperty1 FROM a LIMIT 10) tab1 CROSS JOIN ( SELECT obj AS origProperty2 FROM b LIMIT 10) tab2 ^
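    A CROSS JOIN asks for the Cartesian product of the two sub-queries' row sets, which is what the parser above rejects. The semantics can be sketched in plain Python (the row contents here are invented for illustration; only the column names come from the quoted query):

    ```python
    from itertools import product

    # Hypothetical stand-ins for the two sub-queries' results
    tab1 = [{"origProperty1": o} for o in ("a", "b")]
    tab2 = [{"origProperty2": o} for o in ("x", "y")]

    # CROSS JOIN pairs every row of tab1 with every row of tab2
    cross = [{**r1, **r2} for r1, r2 in product(tab1, tab2)]
    assert len(cross) == len(tab1) * len(tab2)  # 2 * 2 = 4 rows
    ```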
  3.

    generating data fails

    GitHub | 2 years ago | hansbogert
    java.lang.RuntimeException: [6.12] failure: ``union'' expected but `by' found DISTRIBUTE BY ^
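    DISTRIBUTE BY is a HiveQL clause that only controls which partition each row is sent to: rows with the same key land in the same partition. A minimal plain-Python sketch of that idea (the partitioning function and row data are invented, not Spark's actual hash):

    ```python
    # DISTRIBUTE BY key: co-locate rows that share a key value.
    def distribute_by(rows, key, num_partitions):
        partitions = [[] for _ in range(num_partitions)]
        for row in rows:
            # same key -> same hash -> same partition slot
            partitions[hash(row[key]) % num_partitions].append(row)
        return partitions

    rows = [{"k": i % 3, "v": i} for i in range(9)]
    parts = distribute_by(rows, "k", 4)
    assert sum(len(p) for p in parts) == len(rows)  # no row is lost
    ```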
  4.

    spark sql query with percentage issue

    Stack Overflow | 1 year ago | Carson Pun
    java.lang.RuntimeException: [1.14] failure: ``distinct'' expected but `%' found SELECT score(%) FROM df ^
  5.

    CassandraPrunedScanSpec: Exception encountered when attempting to run a suite with class name: com.datastax.spark.connector.sql.CassandraPrunedScanSpec *** ABORTED *** (157 milliseconds)
      java.lang.RuntimeException: [1.2] failure: ``with'' expected but identifier CREATE found
      CREATE TEMPORARY TABLE tmpTable USING org.apache.spark.sql.cassandra OPTIONS ( c_table "test1", keyspace "sql_ds_test", push_down "false", spark_cassandra_input_page_row_size "10", spark_cassandra_output_consistency_level "ONE", spark_cassandra_connection_timeout_ms "1000", spark_cassandra_connection_host "127.0.0.1" )
      ^

    CassandraDataSourceSpec: Exception encountered when attempting to run a suite with class name: com.datastax.spark.connector.sql.CassandraDataSourceSpec *** ABORTED *** (14 milliseconds)
      java.lang.RuntimeException: [1.2] failure: ``with'' expected but identifier CREATE found
      CREATE TEMPORARY TABLE tmpTable USING org.apache.spark.sql.cassandra OPTIONS ( c_table "test1", keyspace "sql_ds_test", push_down "true", spark_cassandra_input_page_row_size "10", spark_cassandra_output_consistency_level "ONE", spark_cassandra_connection_timeout_ms "1000", spark_cassandra_connection_host "127.0.0.1" )
      ^
      at scala.sys.package$.error(package.scala:27)
      at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36)
      at org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67)
      at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:145)
      at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:145)
      at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:95)
      at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:94)
      at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
      at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
      at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)

    DataStax JIRA | 2 years ago | Piotr Kołaczkowski
    java.lang.RuntimeException: [1.2] failure: ``with'' expected but identifier CREATE found CREATE TEMPORARY TABLE tmpTable USING org.apache.spark.sql.cassandra OPTIONS ( c_table "test1", keyspace "sql_ds_test", push_down "true", spark_cassandra_input_page_row_size "10", spark_cassandra_output_consistency_level "ONE", spark_cassandra_connection_timeout_ms "1000", spark_cassandra_connection_host "127.0.0.1" ) ^
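The reports above are all the same class of failure: the Spark 1.x SQL parser rejects syntax it does not understand and names the first token it could not match (VIEW, CROSS, by, %, CREATE). For HiveQL constructs such as LATERAL VIEW, the usual remedy was to run the query through a HiveContext rather than a plain SQLContext. What the headline query's LATERAL VIEW explode(results.people) computes — one output row per element of a nested array — can be sketched in plain Python (the records are invented for illustration; only the field names come from the query):

```python
# Hypothetical nested record, shaped like the query's results.people
results = {"people": [
    {"index": 1, "name": {"text": "Alice"}, "bio": {"text": "bio1"}, "hero": {"src": "a.jpg"}},
    {"index": 2, "name": {"text": "Bea"},   "bio": {"text": "bio2"}, "hero": {"src": "b.jpg"}},
]}

# LATERAL VIEW explode(results.people) p AS people:
# each array element becomes its own row, then the SELECT projects fields.
rows = [
    {"id": 90000 + p["index"],
     "name": p["name"]["text"],
     "bio": p["bio"]["text"],
     "img": p["hero"]["src"]}
    for p in results["people"]
]
assert len(rows) == len(results["people"])  # one row per exploded element
```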

Root Cause Analysis

  1. java.lang.RuntimeException

    [1.139] failure: ``union'' expected but identifier VIEW found

    SELECT (90000 + people.index) as id, people.name.text as name, people.bio.text as bio, people.hero.src as img FROM actresses_temp LATERAL VIEW explode(results.people) p AS people
     ^

    at scala.sys.package$.error()
  2. Scala
    package$.error
    1. scala.sys.package$.error(package.scala:27)
    1 frame
  3. Spark Project Catalyst
    DefaultParserDialect.parse
    1. org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36)
    2. org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67)
    2 frames
  4. Spark Project SQL
    SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply
    1. org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:175)
    2. org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:175)
    3. org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:115)
    4. org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
    4 frames
  5. scala-parser-combinators
    Parsers$$anon$2$$anonfun$apply$14.apply
    1. scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    2. scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    3. scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    4. scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    5. scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    6. scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    7. scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    8. scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    9. scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    10. scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    11. scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    12. scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    13. scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    13 frames
  6. Scala
    DynamicVariable.withValue
    1. scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    1 frame
  7. scala-parser-combinators
    PackratParsers$$anon$1.apply
    1. scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    2. scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    2 frames
  8. Spark Project Catalyst
    AbstractSparkSQLParser.parse
    1. org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
    1 frame
  9. Spark Project SQL
    SQLContext$$anonfun$2.apply
    1. org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:172)
    2. org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:172)
    2 frames
  10. org.apache.spark
    DDLParser.parse
    1. org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:42)
    1 frame
  11. Spark Project SQL
    SQLContext.sql
    1. org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:195)
    2. org.apache.spark.sql.SQLContext.sql(SQLContext.scala:725)
    2 frames