Searched Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up your debugging when you paste the entire stack trace, including the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via DataStax JIRA by Stephen Qi, 1 year ago
cannot resolve 'ts IN (1466708400000,1466643600000)' due to data type mismatch: Arguments must be same type;
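This error appears when the column on the left of IN and the literals inside it have different SQL types. A minimal sketch of the mismatch and one way around it, assuming a hypothetical `events` table with a TimestampType column `ts` (names and values are illustrative, not taken from the original report):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object InMismatchSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("in-mismatch").getOrCreate()
    import spark.implicits._

    val events = Seq(("a", java.sql.Timestamp.valueOf("2016-06-23 12:00:00")))
      .toDF("id", "ts")
    events.createOrReplaceTempView("events")

    // Fails at analysis time: `ts` is TimestampType but the IN list holds
    // Long literals, so Spark reports "Arguments must be same type":
    // spark.sql("SELECT * FROM events WHERE ts IN (1466708400000, 1466643600000)")

    // One fix: put both sides in the same type, e.g. compare epoch millis.
    // cast("long") yields epoch seconds, hence the * 1000.
    val matched = events.filter(
      (col("ts").cast("long") * 1000).isin(1466708400000L, 1466643600000L))
    matched.show()
  }
}
```

Casting the literals to timestamps in the SQL text (`IN (CAST(... AS TIMESTAMP), ...)`) works equally well; the point is only that both sides must resolve to one type.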
via Apache's JIRA Issue Tracker by Vincent Warmerdam, 2 years ago
cannot resolve 'avg(date)' due to data type mismatch: function average requires numeric types, not DateType;
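Here the aggregate rejects the column outright: `avg` requires a numeric input and a DateType column is not numeric. A hedged sketch of the usual workaround, converting the date to epoch seconds before averaging (the table and column names are illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object AvgDateSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("avg-date").getOrCreate()
    import spark.implicits._

    val withDates = Seq("2015-01-01", "2015-01-03")
      .toDF("raw")
      .select(to_date(col("raw")).alias("date"))

    // df.agg(avg("date")) would fail: "function average requires numeric
    // types, not DateType". Convert to a number first, then average.
    val meanEpoch = withDates
      .select(unix_timestamp(col("date")).alias("epoch"))
      .agg(avg("epoch").alias("mean_epoch"))
    meanEpoch.show()
  }
}
```

If a date is needed back out, the mean epoch value can be converted again with `from_unixtime` and a cast to date.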
via Stack Overflow by user1778293, 1 year ago
cannot resolve '`features`' given input columns: [Ward, Longitude, X_Coordinate, Beat, Latitude, District, Y_Coordinate, Community_Area];
via Data Science by user1778293, 1 year ago
cannot resolve '`features`' given input columns: [Ward, Longitude, X_Coordinate, Beat, Latitude, District, Y_Coordinate, Community_Area];
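This variant usually comes from Spark ML rather than plain SQL: most estimators read a vector column named "features" by default, and if the DataFrame still holds only raw columns (Ward, Longitude, ...), the analyzer cannot resolve it. A minimal sketch of the usual fix with VectorAssembler; the input data and column subset are made up for illustration:

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object FeaturesColumnSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("features").getOrCreate()
    import spark.implicits._

    // Illustrative raw columns; the names echo the report, values are made up.
    val rawDf = Seq((12, -87.6, 41.8), (3, -87.7, 41.9))
      .toDF("Ward", "Longitude", "Latitude")

    // Without this step, fitting an estimator on rawDf fails with
    // "cannot resolve '`features`' given input columns: [...]".
    val assembler = new VectorAssembler()
      .setInputCols(Array("Ward", "Longitude", "Latitude"))
      .setOutputCol("features")

    val withFeatures = assembler.transform(rawDf)
    withFeatures.show()
  }
}
```

Alternatively, `setFeaturesCol` on the estimator can point at an existing vector column under a different name.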
via github.com by Unknown author, 1 year ago
org.apache.spark.sql.AnalysisException: cannot resolve 'ts IN (1466708400000,1466643600000)' due to data type mismatch: Arguments must be same type;
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:65)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:57)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:334)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:108)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:118)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:127)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
    at scala.collection.AbstractIterator.to(Iterator.scala:1194)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:127)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:57)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:50)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:121)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:120)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:50)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:44)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
    at org.apache.spark.sql.cassandra.CassandraSQLContext.cassandraSql(CassandraSQLContext.scala:70)