org.apache.spark.sql.AnalysisException: cannot resolve 'my_column[16001]' due to data type mismatch: argument 2 requires bigint type, however, '16001' is of int type.; line 1 pos 43

Stack Overflow | Sai Krishna | 4 months ago
  1. BIGINT and INT comparison failure in spark sql

    Stack Overflow | 4 months ago | Sai Krishna
    org.apache.spark.sql.AnalysisException: cannot resolve 'my_column[16001]' due to data type mismatch: argument 2 requires bigint type, however, '16001' is of int type.; line 1 pos 43
  2. ERROR server.TThreadPoolServer: Error occurred during processing of message

    Stack Overflow | 11 months ago | Techie
    org.apache.spark.sql.AnalysisException: no such table service; line 1 pos 14
  3. GitHub comment 624#173841576

    GitHub | 11 months ago | nsphung
    org.apache.spark.sql.AnalysisException: cannot resolve 'UDF(date)' due to data type mismatch: argument 1 requires string type, however, 'date' is of array<timestamp> type.;
  4. Spark ML indexer cannot resolve DataFrame column name with dots?

    Stack Overflow | 11 months ago | Joshua Taylor
    org.apache.spark.sql.AnalysisException: cannot resolve 'a.b' given input columns a.b;
  5. ERROR server.TThreadPoolServer: Error occurred during processing of message

    spark-user | 11 months ago | Dasun Hegoda
    org.apache.spark.sql.AnalysisException: no such table service; line 1 pos 14

Root Cause Analysis

  1. org.apache.spark.sql.AnalysisException

    cannot resolve 'my_column[16001]' due to data type mismatch: argument 2 requires bigint type, however, '16001' is of int type.; line 1 pos 43

    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis()
  2. Spark Project Catalyst
    TreeNode$$anonfun$4.apply
    1. org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    2. org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:65)
    3. org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:57)
    4. org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
    5. org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:335)
    6. org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
    7. org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:334)
    8. org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:332)
    9. org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:332)
    10. org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:281)
    10 frames
  3. Scala
    ArrayBuffer.$plus$plus$eq
    1. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    2. scala.collection.Iterator$class.foreach(Iterator.scala:727)
    3. scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    4. scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    5. scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    6. scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    6 frames