org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 172, in main
  File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 167, in process
  File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 106, in <lambda>
  File "<string>", line 1, in <lambda>
  File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 70, in <lambda>
  File "<stdin>", line 8, in eventSplit
IndexError: list index out of range

Stack Overflow | ML_Passion | 3 months ago
  1.

    IndexError: list index out of range Error while writing to parquet or csv file

    Stack Overflow | 3 months ago | ML_Passion
    (traceback identical to the one at the top of the page)
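The `eventSplit` function in the trace indexes into the result of a split, which raises `IndexError` on any row with fewer fields than expected. A minimal defensive sketch (only the function name comes from the trace; the separator, field count, and returned index are illustrative assumptions):

```python
# Illustrative guard: validate the field count before indexing, so a
# malformed row becomes None instead of crashing the whole Spark task.
def event_split(line, sep=",", expected_fields=9):
    parts = line.split(sep)
    if len(parts) < expected_fields:
        return None  # or route the bad row to a side output for inspection
    return parts[expected_fields - 1]

print(event_split("a,b,c"))              # short row -> None
print(event_split("a,b,c,d,e,f,g,h,i"))  # well-formed row -> "i"
```

In the original job the same length check can sit at the top of the UDF, or the rows can be filtered before the map so only well-formed lines reach the indexing code.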
  2.

    How to resolve AttributeError: Can't get attribute '_create_row_inbound_converter'

    Stack Overflow | 11 months ago | AlaShiban
    org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
        process()
      File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
        serializer.dump_stream(func(split_index, iterator), outfile)
      File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
        vs = list(itertools.islice(iterator, batch))
      File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/serializers.py", line 139, in load_stream
        yield self._read_with_length(stream)
      File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
        return self.loads(obj)
      File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/serializers.py", line 418, in loads
        return pickle.loads(obj, encoding=encoding)
    AttributeError: Can't get attribute '_create_row_inbound_converter' on <module 'pyspark.sql.types' from '/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py'>
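A `Can't get attribute` error during unpickling typically means the bytes were pickled against one pyspark version and loaded by another that lacks that attribute, i.e. a driver/worker version mismatch. The mechanism can be reproduced with plain `pickle`, no Spark required (the `Payload` class is a made-up stand-in):

```python
import pickle

# Stand-in class: pickle stores only a reference to "__main__.Payload",
# not the class body itself -- the same by-name lookup pyspark relies on.
class Payload:
    pass

blob = pickle.dumps(Payload())
del Payload  # simulate the unpickling side (an older worker) lacking the name

try:
    pickle.loads(blob)
    failure = None
except AttributeError as err:
    failure = str(err)  # e.g. Can't get attribute 'Payload' on <module '__main__' ...>

print(failure)
```

The fix in the Spark case is to make the pyspark and Python versions identical on the driver and every worker (one Spark install, and `PYSPARK_PYTHON` pointing at the same interpreter everywhere).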
  3.

    Functions from custom module not working in PySpark, but they work when inputted in interactive mode

    Stack Overflow | 9 months ago | RKD314
    org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
        command = pickleSer._read_with_length(infile)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
        return self.loads(obj)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
        return pickle.loads(obj)
      File "test2.py", line 16, in <module>
        str2numUDF=F.udf(lambda s: str2num(s), t.IntegerType())
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1460, in udf
        return UserDefinedFunction(f, returnType)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1422, in __init__
        self._judf = self._create_judf(name)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/sql/functions.py", line 1430, in _create_judf
        pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command, self)
      File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2317, in _prepare_for_python_RDD
        [x._jbroadcast for x in sc._pickled_broadcast_vars],
    AttributeError: 'NoneType' object has no attribute '_pickled_broadcast_vars'
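The last frames point at the real problem: `test2.py` creates the UDF at module top level (`str2numUDF = F.udf(...)`), so importing the module on a worker re-runs that line where no `SparkContext` exists (`sc` is `None`). A sketch of the usual restructuring, with the pyspark-side lines left as comments since they need a live context (module and file names are hypothetical; the body of `str2num` is illustrative, as the trace does not show it):

```python
# mymodule.py -- shippable to workers: plain Python only, no SparkContext.
def str2num(s):
    return int(s)  # illustrative body

# driver.py -- only driver code touches pyspark, after the context exists:
#   from pyspark.sql import functions as F, types as t
#   from mymodule import str2num
#   str2num_udf = F.udf(str2num, t.IntegerType())
#   df = df.withColumn("n", str2num_udf(df["s"]))

print(str2num("42"))  # -> 42
```

Keeping modules that are imported on executors free of any `F.udf(...)` or other context-dependent calls avoids this class of error entirely.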
  5.

    Why do I have an error of "IndexError: list index out of range" when I do TF-IDF using pyspark.ml.feature?

    Stack Overflow | 4 months ago | kiseliu
    org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "/Users/lyj/Programs/Apache/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
        process()
      File "/Users/lyj/Programs/Apache/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
        serializer.dump_stream(func(split_index, iterator), outfile)
      File "/Users/lyj/Programs/Apache/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
        vs = list(itertools.islice(iterator, batch))
      File "/mypath/classfication.py", line 20, in <lambda>
        getData = splitData.map(lambda line: [labelMap[line[2]], list(jieba.cut(line[6]+line[13]))])
    IndexError: list index out of range
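The lambda in `classfication.py` indexes `line[2]`, `line[6]` and `line[13]`, so any row with fewer than 14 fields raises `IndexError`. A plain-Python sketch of filtering short rows before mapping (the sample data is made up):

```python
rows = [
    ["f%d" % i for i in range(14)],  # well-formed row with 14 fields
    ["only", "three", "fields"],     # short row that would raise IndexError
]

# Keep only rows long enough for every index the map expression uses.
good = [line for line in rows if len(line) >= 14]
features = [(line[2], line[6] + line[13]) for line in good]
print(features)  # -> [('f2', 'f6f13')]
```

The RDD equivalent would be a `splitData.filter(lambda line: len(line) >= 14)` inserted before the `.map(...)`; counting the filtered-out rows also shows how dirty the input actually is.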
  6.

    Pyspark - recommendation engine - unsupported operand type(s) for +: 'int' and 'str'

    Stack Overflow | 3 months ago | JohnB
    org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "c:\spark\python\lib\pyspark.zip\pyspark\worker.py", line 172, in main
      File "c:\spark\python\lib\pyspark.zip\pyspark\worker.py", line 167, in process
      File "c:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 263, in dump_stream
        vs = list(itertools.islice(iterator, batch))
      File "c:\spark\python\lib\pyspark.zip\pyspark\rdd.py", line 1306, in takeUpToNumLeft
      File "c:/sparkcourse/test-recommendation.py", line 8, in get_counts_and_averages
        return ID_and_ratings_tuple[0], (nratings, float(sum(x for x in ID_and_ratings_tuple[1]))/nratings)
    TypeError: unsupported operand type(s) for +: 'int' and 'str'
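Here `sum()` mixes its integer start value with `str` ratings: the ratings were read from a text file and never cast to numbers. A sketch of `get_counts_and_averages` with the cast added (the function name and return shape come from the trace; the sample tuple is made up):

```python
def get_counts_and_averages(id_and_ratings_tuple):
    nratings = len(id_and_ratings_tuple[1])
    # Cast each rating: text input leaves them as str, and 0 + 'str'
    # raises the TypeError seen in the trace.
    total = sum(float(x) for x in id_and_ratings_tuple[1])
    return id_and_ratings_tuple[0], (nratings, total / nratings)

print(get_counts_and_averages((1, ["4.0", "5.0", "3.0"])))  # -> (1, (3, 4.0))
```

Casting once while parsing the input lines (so every downstream stage sees floats) is usually cleaner than casting inside each aggregation function.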


    Root Cause Analysis

    1. org.apache.spark.api.python.PythonException

      Traceback (most recent call last):
        File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 172, in main
        File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 167, in process
        File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 106, in <lambda>
        File "<string>", line 1, in <lambda>
        File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 70, in <lambda>
        File "<stdin>", line 8, in eventSplit
      IndexError: list index out of range

      at org.apache.spark.api.python.PythonRunner$$anon$1.read()
    2. Spark
      PythonRunner.compute
      1. org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
      2. org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
      3. org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    3. org.apache.spark
      BatchEvalPythonExec$$anonfun$doExecute$1.apply
      1. org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:124)
      2. org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:68)
    4. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
      2. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
      3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      5. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      6. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      8. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      9. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      10. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      11. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      12. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      13. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      14. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      15. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
      16. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
      17. org.apache.spark.scheduler.Task.run(Task.scala:85)
      18. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    5. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)