Recommended solutions based on your search

Samebug tips

  1. via playframework.com by Unknown author

    Add guice to the dependencies.

    libraryDependencies += guice
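
    The bare `guice` key in the tip is the shorthand Play's sbt plugin provides for its Guice dependency-injection module (introduced when Play 2.6 made Guice an explicit dependency), so the line belongs in build.sbt. A minimal sketch, assuming a Play 2.6+ project with the PlayScala plugin enabled (the project name is hypothetical):

    // build.sbt -- minimal sketch; requires the Play sbt plugin
    // (addSbtPlugin in project/plugins.sbt), which defines `guice`.
    name := "my-play-app"          // hypothetical project name

    lazy val root = (project in file("."))
      .enablePlugins(PlayScala)    // brings the `guice` shorthand into scope

    libraryDependencies += guice   // Play's Guice runtime DI module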
    

Solutions on the web

via github.com by Unknown author, 1 year ago
[1.139] failure: ``union'' expected but identifier VIEW found

SELECT (90000 + people.index) as id, people.name.text as name, people.bio.text as bio, people.hero.src as img FROM actresses_temp LATERAL VIEW explode(results.people) p AS

via Stack Overflow by Carson Pun, 2 years ago
[1.14] failure: ``distinct'' expected but `%' found SELECT score(%) FROM df ^

via Stack Overflow by Timo, 2 years ago
[1.67] failure: ``union'' expected but identifier CROSS found SELECT * FROM ( SELECT obj as origProperty1 FROM a LIMIT 10) tab1 CROSS JOIN ( SELECT obj AS origProperty2 FROM b LIMIT 10) tab2 ^

via GitHub by hansbogert, 2 years ago
[6.12] failure: ``union'' expected but `by' found DISTRIBUTE BY ^

via Stack Overflow by Green, 1 year ago
[1.9] failure: ``union'' expected but `right' found select right(Phone_number,4) from mytable1 ^

via GitHub by highwater, 2 years ago
Couldn't find class name corresponding to any of List(Lcom/amazon/geo/maps/MapView) in class hierarchy
java.lang.RuntimeException: [1.139] failure: ``union'' expected but identifier VIEW found

SELECT (90000 + people.index) as id, people.name.text as name, people.bio.text as bio, people.hero.src as img FROM actresses_temp LATERAL VIEW explode(results.people) p AS people
                                                                                                                                          ^
	at scala.sys.package$.error(package.scala:27)
	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36)
	at org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67)
	at org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:175)
	at org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:175)
	at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:115)
	at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
	at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
	at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
	at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
	at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
	at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
	at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
	at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
	at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:172)
	at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:172)
	at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:42)
	at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:195)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:725)
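
The frames above show the query going through SQLContext's DefaultParserDialect, and in Spark 1.x that default parser does not understand HiveQL-only constructs such as LATERAL VIEW explode(...), which is why it fails looking for ``union''. The fix commonly reported for this error is to run the statement through a HiveContext, whose HiveQL parser accepts LATERAL VIEW. A minimal sketch, assuming Spark 1.x with the spark-hive module on the classpath and a registered temp table named actresses_temp (taken from the query in the trace):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object LateralViewExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("lateral-view"))

        // HiveContext parses HiveQL, which supports LATERAL VIEW;
        // the plain SQLContext parser rejects it with this RuntimeException.
        val hiveCtx = new HiveContext(sc)

        val result = hiveCtx.sql(
          """SELECT (90000 + people.index) AS id,
            |       people.name.text AS name,
            |       people.bio.text  AS bio,
            |       people.hero.src  AS img
            |FROM actresses_temp
            |LATERAL VIEW explode(results.people) p AS people""".stripMargin)

        result.show()
      }
    }

In Spark 2.x this distinction disappears: SparkSession's single SQL parser handles LATERAL VIEW (and CROSS JOIN, seen in the Timo result above) without a separate Hive context.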