
  • Using itertools.groupby in pyspark but fail
    via Stack Overflow by JiaMing Lin
  • Python Spark submit job on cluster issues
    via Stack Overflow by FLFLFLFL
  • Apache-Spark load files from HDFS
    via Stack Overflow by Ruofan Kong
  • Add date field to RDD in Spark
    via Unknown author
  • PySpark Job throwing IOError
    via Unknown author
    • org.apache.spark.api.python.PythonException: Traceback (most recent call last):
        File "/opt/zeppelin-0.6.0-bin-netinst/interpreter/spark/pyspark/", line 111, in main
          process()
        File "/opt/zeppelin-0.6.0-bin-netinst/interpreter/spark/pyspark/", line 106, in process
          serializer.dump_stream(func(split_index, iterator), outfile)
        File "/opt/zeppelin-0.6.0-bin-netinst/interpreter/spark/pyspark/", line 267, in dump_stream
          bytes = self.serializer.dumps(vs)
        File "/opt/zeppelin-0.6.0-bin-netinst/interpreter/spark/pyspark/", line 415, in dumps
          return pickle.dumps(obj, protocol)
      PicklingError: Can't pickle <type 'itertools._grouper'>: attribute lookup itertools._grouper failed
        at org.apache.spark.api.python.PythonRunner$$anon$
        at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at
        at org.apache.spark.executor.Executor$
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$
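    The trace above points at the usual cause: itertools.groupby yields lazy grouper objects, and PySpark must pickle whatever a map/mapPartitions function returns to move it between the Python worker and the JVM, but grouper objects are not picklable. A common workaround is to materialize each group into a list before returning it. A minimal sketch of the problem and the fix, using plain pickle rather than Spark (the sample data and key function are illustrative, not from the original question):

    ```python
    import pickle
    from itertools import groupby

    data = [("a", 1), ("a", 2), ("b", 3)]

    # Returning the raw grouper objects reproduces the error: this is
    # roughly what Spark's serializer attempts when a mapped function
    # yields (key, grouper) pairs.
    lazy_groups = [(k, g) for k, g in groupby(data, key=lambda kv: kv[0])]
    try:
        pickle.dumps(lazy_groups)
    except (pickle.PicklingError, TypeError) as exc:
        # PicklingError on Python 2, TypeError on Python 3
        print("cannot pickle:", exc)

    # Converting each group to a list makes the result an ordinary
    # picklable structure that Spark can serialize.
    materialized = [(k, list(g)) for k, g in groupby(data, key=lambda kv: kv[0])]
    blob = pickle.dumps(materialized)
    print(pickle.loads(blob))
    # → [('a', [('a', 1), ('a', 2)]), ('b', [('b', 3)])]
    ```

    Note that in a real Spark job, RDD.groupBy or groupByKey is often the better choice than itertools.groupby inside a mapped function, since Spark then handles the grouping and serialization itself.
    
    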