org.apache.spark.sql.AnalysisException: Try to map struct<_1:struct<i:int,j:boolean>> to Tuple2, but failed as the number of fields does not line up.;

GitHub | imarios | 4 months ago
  1. 0

    Bug: Runtime error when selecting a composite field (a field that is also a struct())

    GitHub | 4 months ago | imarios
    org.apache.spark.sql.AnalysisException: Try to map struct<_1:struct<i:int,j:boolean>> to Tuple2, but failed as the number of fields does not line up.;
  2. 0

    [SPARK-5817] [SQL] Fix bug of udtf with column names by chenghao-intel · Pull Request #4602 · apache/spark · GitHub

    github.com | 7 months ago
    org.apache.spark.sql.AnalysisException: invalid cast from array<struct<_c0:int>> to int;
  3. 0

    [SPARK-2264] CachedTableSuite SQL Tests are Failing - ASF JIRA

    apache.org | 2 years ago
    java.lang.RuntimeException: Table Not Found: testData at scala.sys.package$.error(package.scala:27)


    Root Cause Analysis

    1. org.apache.spark.sql.AnalysisException

      Try to map struct<_1:struct<i:int,j:boolean>> to Tuple2, but failed as the number of fields does not line up.;

      at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveDeserializer$$fail()
    2. Spark Project Catalyst
      RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply
      1. org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveDeserializer$$fail(Analyzer.scala:1921)
      2. org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveDeserializer$$validateTopLevelTupleFields(Analyzer.scala:1938)
      3. org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$$anonfun$apply$32$$anonfun$applyOrElse$12.applyOrElse(Analyzer.scala:1912)
      4. org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$$anonfun$apply$32$$anonfun$applyOrElse$12.applyOrElse(Analyzer.scala:1904)
      5. org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:279)
      6. org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:279)
      7. org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
      8. org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:278)
      9. org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionDown$1(QueryPlan.scala:157)
      10. org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:167)
      11. org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$4.apply(QueryPlan.scala:176)
      12. org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:179)
      13. org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:176)
      14. org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:145)
      15. org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$$anonfun$apply$32.applyOrElse(Analyzer.scala:1904)
      16. org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$$anonfun$apply$32.applyOrElse(Analyzer.scala:1900)
      17. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
      18. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
      19. org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
      20. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:60)
      21. org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$.apply(Analyzer.scala:1900)
      22. org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveDeserializer$.apply(Analyzer.scala:1899)
      23. org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
      24. org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
      24 frames
    3. Scala
      List.foldLeft
      1. scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
      2. scala.collection.immutable.List.foldLeft(List.scala:84)
      2 frames
    4. Spark Project Catalyst
      RuleExecutor$$anonfun$execute$1.apply
      1. org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
      2. org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
      2 frames
    5. Scala
      List.foreach
      1. scala.collection.immutable.List.foreach(List.scala:381)
      1 frame
    6. Spark Project Catalyst
      RuleExecutor.execute
      1. org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
      1 frame
    7. org.apache.spark
      ExpressionEncoder.resolveAndBind
      1. org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.resolveAndBind(ExpressionEncoder.scala:244)
      1 frame
    8. Spark Project SQL
      Dataset.as
      1. org.apache.spark.sql.Dataset.<init>(Dataset.scala:210)
      2. org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
      3. org.apache.spark.sql.Dataset$.apply(Dataset.scala:59)
      4. org.apache.spark.sql.Dataset.as(Dataset.scala:359)
      4 frames
    9. frameless
      TypedDataset.select
      1. frameless.TypedDataset.as(TypedDataset.scala:29)
      2. frameless.TypedDataset.select(TypedDataset.scala:284)
      2 frames