scala.MatchError: List(personObject1, personObject2, personObject3) (of class scala.collection.immutable.$colon$colon)

yuluer.com | 4 months ago
Similar exceptions:

  1. Spark: convert rdd[row] to dataframe where one of the columns in the row is a list
     Stack Overflow | 6 months ago | John Engelhart
     scala.MatchError: List(personObject1, personObject2, personObject3) (of class scala.collection.immutable.$colon$colon)
  2. GitHub comment 76#134334843
     GitHub | 1 year ago | JoshRosen
     scala.MatchError: 3.14 (of class java.lang.Double)
  3. GitHub comment 76#150121590
     GitHub | 1 year ago | findchris
     scala.MatchError: 1.000000000000 (of class java.lang.String)
  4. GitHub comment 644#169340991
     GitHub | 11 months ago | costin
     scala.MatchError: Buffer() (of class scala.collection.convert.Wrappers$JListWrapper)


Root Cause Analysis

  1. scala.MatchError

    List(personObject1, personObject2, personObject3) (of class scala.collection.immutable.$colon$colon)

    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl()
  2. Spark Project Catalyst
    CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply
    1. org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)
    2. org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:294)
    3. org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    4. org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
    5. org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
    6. org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    7. org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
    7 frames
  3. Spark Project SQL
    SQLContext$$anonfun$7.apply
    1. org.apache.spark.sql.SQLContext$$anonfun$7.apply(SQLContext.scala:445)
    2. org.apache.spark.sql.SQLContext$$anonfun$7.apply(SQLContext.scala:445)
    2 frames
  4. Scala
    Iterator$$anon$11.next
    1. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    2. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    3. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    3 frames
  5. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:219)
    2. org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
    3. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    4. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    5. org.apache.spark.scheduler.Task.run(Task.scala:88)
    6. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    6 frames
  6. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
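The root cause analysis above shows Catalyst's `StringConverter` receiving a `List` (`scala.collection.immutable.$colon$colon` is the cons-cell class behind `List`). This typically happens when the DataFrame schema declares a column as `StringType` while the `Row` values for that column actually hold a Scala `List`; the string converter pattern-matches only string-like values, so anything else falls through and raises `scala.MatchError`. A minimal sketch of that failure mode, with no Spark dependency and a hypothetical `toCatalystString` helper standing in for the real converter:

```scala
// Sketch of the failure mode: a converter that pattern-matches only on
// String, mimicking Catalyst's StringConverter. Any other runtime class
// (here a List) has no matching case and raises scala.MatchError.
def toCatalystString(value: Any): String = value match {
  case s: String => s
  // no case for List => scala.MatchError at runtime, as in the trace above
}

val ok = toCatalystString("hello") // matches the String case

// Passing a List where a String is expected reproduces the MatchError.
val failed =
  try {
    toCatalystString(List("personObject1", "personObject2"))
    "no error"
  } catch {
    case e: scala.MatchError => "MatchError: " + e.getMessage
  }
```

In the original scenario the fix is on the schema side rather than the data side: declare the list-valued column as an array type (e.g. `ArrayType(StringType)`) in the `StructType` passed to `createDataFrame`, so Catalyst selects an array converter instead of the string converter shown at the top of the stack trace.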