org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 12.0 failed 1 times, most recent failure: Lost task 0.0 in stage 12.0 (TID 12, localhost): java.lang.ClassCastException: java.lang.Double cannot be cast to org.apache.spark.ml.linalg.Vector

GitHub | kevinushey | 7 months ago
GitHub comment 225#249287425

Root Cause Analysis

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 12.0 failed 1 times, most recent failure: Lost task 0.0 in stage 12.0 (TID 12, localhost): java.lang.ClassCastException: java.lang.Double cannot be cast to org.apache.spark.ml.linalg.Vector
	at org.apache.spark.ml.feature.Normalizer$$anonfun$createTransformFunc$1.apply(Normalizer.scala:59)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)