Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste the entire stack trace together with the exception message.

Solutions on the web

via Stack Overflow by ML_Passion, 1 year ago
Traceback (most recent call last): File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 172, in main File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark
via Stack Overflow by Ale Xis, 10 months ago
Traceback (most recent call last): File "/mnt/yarn/usercache/hadoop/appcache/application_1496062368566_0011/container_1496062368566_0011_01_000002/pyspark.zip/pyspark/worker.py", line 161, in main func, profiler, deserializer, serializer
via Stack Overflow by ML_Pro, 10 months ago
Traceback (most recent call last): File "C:\Spark\python\lib\pyspark.zip\pyspark\worker.py", line 174, in main File "C:\Spark\python\lib\pyspark.zip\pyspark\worker.py", line 169, in process File "C:\Spark\python\lib\pyspark.zip\pyspark\worker.py
via Stack Overflow by smm, 1 year ago
Traceback (most recent call last): File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 172, in main process() File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 167, in process
via Stack Overflow by user5147250, 1 year ago
Traceback (most recent call last): File "/ephemeral/usr/hdp/2.3.4.33-1/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main command = pickleSer._read_with_length(infile) File "/ephemeral/usr/hdp/2.3.4.33-1/spark/python/lib
via Stack Overflow by Print-ABC, 10 months ago
Traceback (most recent call last): File "/home/main/spark-2.1.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main process() File "/home/main/spark-2.1.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 172, in main
  File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 167, in process
  File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 106, in <lambda>
  File "<string>", line 1, in <lambda>
  File "C:\Users\yrxt028\Downloads\spark-2.0.0\spark-2.0.0\python\lib\pyspark.zip\pyspark\worker.py", line 70, in <lambda>
  File "<stdin>", line 8, in eventSplit
IndexError: list index out of range
	at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
	at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
	at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
	at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:124)
	at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$doExecute$1.apply(BatchEvalPythonExec.scala:68)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
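
The root cause here sits in user code, not in Spark itself: the last Python frame before the IndexError is File "<stdin>", line 8, in eventSplit, so the UDF named eventSplit indexed a list position that did not exist for some input row. The Python worker forwards the exception to the JVM, which wraps it in a PythonException and re-raises it through the executor stack shown above. The original eventSplit is not shown on this page, so the following is only a minimal sketch of the usual failure mode and a defensive rewrite; the comma delimiter, the field position, and all names are assumptions for illustration.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("event-split-guard").getOrCreate()

    # Hypothetical reconstruction: a UDF like eventSplit typically fails with
    # "IndexError: list index out of range" when it indexes the result of
    # str.split() on a row that has fewer delimiters than expected.
    def event_split(line):
        if line is None:              # nulls reach UDFs as None
            return None
        parts = line.split(",")       # assumed delimiter
        if len(parts) > 3:            # guard against short or malformed rows
            return parts[3]           # assumed field position
        return None                   # becomes SQL NULL instead of killing the task

    event_split_udf = udf(event_split, StringType())

    # One well-formed row and one malformed row: the guarded UDF yields
    # "d" and NULL instead of raising on the second row.
    df = spark.createDataFrame([("a,b,c,d",), ("malformed",)], ["value"])
    df.select(event_split_udf("value").alias("event")).show()

Because the UDF body is plain Python, it can also be unit-tested locally (for example, event_split("malformed")) before it ships to the executors, which surfaces this class of error without running a Spark job at all.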