Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace together with the exception message.

Recommended solutions based on your search

Solutions on the web

via Stack Overflow by W.Taiqi, 5 months ago
\pyspark.zip\pyspark\worker.py", line 172, in process File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2423, in pipeline_func return func(split, prev_func(split, iterator)) File "D:\spark-2.2.0-bin
via GitHub by ssallys, 1 year ago
Traceback (most recent call last): File "/usr/local/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 172, in main process() File "/usr/local/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line
via GitHub by md6nguyen, 1 year ago
Traceback (most recent call last): File "/home/ubuntu/spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main process() File "/home/ubuntu/spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
via Stack Overflow by thestackexchangeguy, 9 months ago
Traceback (most recent call last): File "D:\Spark\python\lib\pyspark.zip\pyspark\worker.py", line 177, in main File "D:\Spark\python\lib\pyspark.zip\pyspark\worker.py", line 172, in process File "C:\Program Files\Anaconda3\lib\site-packages
via Stack Overflow by Rictus Degrenov, 1 year ago
Traceback (most recent call last): File "/home/rictus/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 143, in main importlib.invalidate_caches() AttributeError: 'module' object has no attribute 'invalidate_caches'
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 177, in main
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 172, in process
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2423, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2423, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2423, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 346, in func
    return f(iterator)
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 1041, in <lambda>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 1041, in <genexpr>
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "D:\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2053, in <lambda>
    return self.map(lambda x: (f(x), x))
  File "D:<filePath>", line 15, in <lambda>
    map(lambda x: x[1]).sortBy(lambda x:x.request_tm).map(lambda x: x.sku_id)
AttributeError: 'ResultIterable' object has no attribute 'request_tm'
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
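The exception originates in the user lambda at line 15 of the driver script. ResultIterable values are produced by grouping operations such as groupByKey, so after the grouping, x[1] is a whole group of rows rather than a single row, and sortBy(lambda x: x.request_tm) asks a ResultIterable for an attribute that only the individual rows carry. Below is a minimal sketch of the failure and one way around it; the field names request_tm and sku_id come from the trace, while the sample data, keys, and variable names are hypothetical.

from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.master("local[2]").appName("resultiterable-demo").getOrCreate()
sc = spark.sparkContext

# Hypothetical (key, Row) pairs shaped like the trace suggests.
rdd = sc.parallelize([
    ("u1", Row(request_tm=2, sku_id="b")),
    ("u1", Row(request_tm=1, sku_id="a")),
    ("u2", Row(request_tm=3, sku_id="c")),
])

grouped = rdd.groupByKey()  # each value is now a ResultIterable of Rows

# Reproduces the error: x[1] is a ResultIterable (the whole group), so the
# sortBy key function finds no request_tm attribute on it.
# grouped.map(lambda x: x[1]).sortBy(lambda x: x.request_tm).map(lambda x: x.sku_id).collect()

# One fix: sort the rows inside each group, then project out sku_id.
per_key_skus = grouped.mapValues(
    lambda rows: [r.sku_id for r in sorted(rows, key=lambda r: r.request_tm)]
)
print(per_key_skus.collect())  # e.g. [('u1', ['a', 'b']), ('u2', ['c'])]

spark.stop()

Sorting inside mapValues avoids a second full-RDD sortBy; for very large groups, a DataFrame with a window function (Window.partitionBy(...).orderBy("request_tm")) usually scales better, since each group no longer has to fit in a single Python list.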