org.apache.spark.api.python.PythonException

Traceback (most recent call last):
  File "/home/ubuntu/spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
    process()
  File "/home/ubuntu/spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 2407, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 2407, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 2407, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 346, in func
    return f(iterator)
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 1041, in <lambda>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 1041, in <genexpr>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "<stdin>", line 9, in <lambda>
TypeError: unorderable types: NoneType() < str()
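
The last frame before the TypeError is user code (File "<stdin>", line 9, in <lambda>), not Spark itself: under Python 3, None cannot be ordered against a str, so a lambda that compares a possibly-missing field with a string fails as soon as an action evaluates it. The surrounding mapPartitions/sum frames come from count(), which PySpark implements that way internally. A minimal sketch that reproduces the error; the data, field positions, and cut-off string are hypothetical, since the report does not show the actual lambda at <stdin> line 9:

    # Hypothetical reproduction sketch -- the data and the date comparison are
    # assumptions, not the original user code.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("none-vs-str-repro").getOrCreate()
    sc = spark.sparkContext

    rows = [("alice", "2017-01-03"), ("bob", None), ("carol", "2017-02-11")]
    rdd = sc.parallelize(rows)

    # Fine for the first and third records, undefined for ("bob", None):
    # Python 3 refuses to order NoneType against str.
    recent = rdd.filter(lambda r: r[1] < "2017-02-01")

    # Nothing fails until an action forces evaluation; count() runs
    # mapPartitions(lambda i: [sum(1 for _ in i)]).sum(), which is why those
    # frames appear in the traceback above.
    recent.count()  # raises the PythonException wrapping the TypeError
    # Note: Python 3.6+ phrases the same error as
    # "'<' not supported between instances of 'NoneType' and 'str'".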

Stack trace

org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/ubuntu/spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
    process()
  File "/home/ubuntu/spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 2407, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 2407, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 2407, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 346, in func
    return f(iterator)
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 1041, in <lambda>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/home/ubuntu/spark/python/pyspark/rdd.py", line 1041, in <genexpr>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "<stdin>", line 9, in <lambda>
TypeError: unorderable types: NoneType() < str()
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
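
If this matches the shape of the failing job, one common remedy is to make the key orderable before comparing, either by short-circuiting on None or by substituting a sentinel value. A hedged fix sketch, using the same hypothetical data and field positions as the reproduction above:

    # Hypothetical fix sketch -- guard against None so every record is orderable.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("none-vs-str-fix").getOrCreate()
    sc = spark.sparkContext

    rows = [("alice", "2017-01-03"), ("bob", None), ("carol", "2017-02-11")]
    rdd = sc.parallelize(rows)

    # Option 1: short-circuit on None before the comparison.
    recent = rdd.filter(lambda r: r[1] is not None and r[1] < "2017-02-01")
    print(recent.count())  # 1, no exception

    # Option 2: when sorting, map None keys to a sentinel so they have a
    # defined order instead of raising the TypeError.
    ordered = rdd.sortBy(lambda r: r[1] if r[1] is not None else "")
    print(ordered.collect())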
