java.lang.UnsupportedOperationException: Cannot evaluate expression: PythonUDF#<lambda>(input[2, StringType])

Stack Overflow | user5147250 | 5 months ago
  1. Spark-Submit python file on cluster

     Stack Overflow | 5 months ago | user5147250
     java.lang.UnsupportedOperationException: Cannot evaluate expression: PythonUDF#<lambda>(input[2, StringType])

  2. Spark-Submit python file on cluster

     Stack Overflow | 5 months ago | user5147250
     org.apache.spark.api.python.PythonException: Traceback (most recent call last):
       File "/ephemeral/usr/hdp/2.3.4.33-1/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
         command = pickleSer._read_with_length(infile)
       File "/ephemeral/usr/hdp/2.3.4.33-1/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 156, in _read_with_length
         length = read_int(stream)
       File "/ephemeral/usr/hdp/2.3.4.33-1/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 545, in read_int
         raise EOFError
     EOFError

  3. UnsupportedOperationException: Cannot evalute expression: .. when adding new column withColumn() and udf()

     Stack Overflow | 2 months ago | aks
     java.lang.UnsupportedOperationException: Cannot evaluate expression: parse_df_to_string(input[1, int, true], input[2, int, true], input[3, int, true], input[4, int, true], input[5, int, true])

  4. python+pyspark: error on inner join with multiple column comparison in pyspark

     Stack Overflow | 2 months ago | Satya
     java.lang.UnsupportedOperationException: Cannot evaluate expression: count(1)

    Root Cause Analysis

    1. java.lang.UnsupportedOperationException

      Cannot evaluate expression: PythonUDF#<lambda>(input[2, StringType])

      at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.genCode()
    2. Spark Project Catalyst
      Unevaluable$class.genCode
      1. org.apache.spark.sql.catalyst.expressions.Unevaluable$class.genCode(Expression.scala:191)
      1 frame
    3. Spark Project SQL
      PythonUDF.genCode
      1. org.apache.spark.sql.execution.PythonUDF.genCode(python.scala:44)
      1 frame
    4. Spark Project Catalyst
      GenerateMutableProjection$$anonfun$1.apply
      1. org.apache.spark.sql.catalyst.expressions.Expression.gen(Expression.scala:98)
      2. org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$1.apply(GenerateMutableProjection.scala:46)
      3. org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$1.apply(GenerateMutableProjection.scala:43)
      3 frames
    5. Scala
      AbstractTraversable.map
      1. scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
      2. scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
      3. scala.collection.immutable.List.foreach(List.scala:318)
      4. scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
      5. scala.collection.AbstractTraversable.map(Traversable.scala:105)
      5 frames
    6. Spark Project Catalyst
      CodeGenerator.generate
      1. org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.create(GenerateMutableProjection.scala:43)
      2. org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.create(GenerateMutableProjection.scala:33)
      3. org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:425)
      4. org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:422)
      4 frames
    7. Spark Project SQL
      BatchPythonEvaluation$$anonfun$doExecute$1.apply
      1. org.apache.spark.sql.execution.SparkPlan.newMutableProjection(SparkPlan.scala:255)
      2. org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:370)
      3. org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1.apply(python.scala:362)
      3 frames
    8. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
      2. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:710)
      3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
      5. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
      6. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
      8. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
      9. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      10. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
      11. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
      12. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      13. org.apache.spark.scheduler.Task.run(Task.scala:88)
      14. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      14 frames
    9. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames