Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message.

Recommended solutions based on your search

Solutions on the web

via Stack Overflow by Backtrack, 1 year ago
(iterator)
  File "E:\Work\spark\installtion\spark\python\lib\pyspark.zip\pyspark\shuffle.py", line 236, in mergeValues
    for k, v in iterator:
  File "E:/Work/Python1/work/spark/streamexample.py", line 159, in <lambda>
    with_hash = stream.map
via Stack Overflow by 谢一男, 1 year ago
Traceback (most recent call last):
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
via Stack Overflow by svs teja, 10 months ago
    return f(iterator)
  File "/usr/local/lib/python3.4/dist-packages/pyspark/rdd.py", line 1842, in combineLocally
    merger.mergeValues(iterator)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 236, in mergeValues
    for k, v in iterator
via GitHub by infosec-alchemist, 4 months ago
Traceback (most recent call last):
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 171, in main
    process()
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 166, in process
via Stack Overflow by Algina, 1 year ago
Traceback (most recent call last):
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 175, in main
    process()
  File "/home/alg/programs/spark-2.0.2-bin-hadoop2.7/python/lib/pyspark.zip/pyspark
via Stack Overflow by TheStupidOne, 2 years ago
Traceback (most recent call last):
  File "/usr/local/spark/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/usr/local/spark/spark-1.6.0-bin-hadoop2.6
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "E:\Work\spark\installtion\spark\python\lib\pyspark.zip\pyspark\worker.py", line 172, in main
  File "E:\Work\spark\installtion\spark\python\lib\pyspark.zip\pyspark\worker.py", line 167, in process
  File "E:\Work\spark\installtion\spark\python\pyspark\rdd.py", line 2371, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "E:\Work\spark\installtion\spark\python\pyspark\rdd.py", line 2371, in pipeline_func
    return func(split, prev_func(split, iterator))
  File "E:\Work\spark\installtion\spark\python\pyspark\rdd.py", line 317, in func
    return f(iterator)
  File "E:\Work\spark\installtion\spark\python\pyspark\rdd.py", line 1792, in combineLocally
    merger.mergeValues(iterator)
  File "E:\Work\spark\installtion\spark\python\lib\pyspark.zip\pyspark\shuffle.py", line 236, in mergeValues
    for k, v in iterator:
  File "E:/Work/Python1/work/spark/streamexample.py", line 159, in <lambda>
    with_hash = stream.map(lambda po : createmd5Hash(po)).reduceByKey(lambda s1,s2:s1)
  File "E:/Work/Python1/work/spark/streamexample.py", line 31, in createmd5Hash
    data = json.loads(input_line)
  File "C:\Python34\lib\json\__init__.py", line 318, in loads
    return _default_decoder.decode(s)
  File "C:\Python34\lib\json\decoder.py", line 343, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python34\lib\json\decoder.py", line 361, in raw_decode
    raise ValueError(errmsg("Expecting value", s, err.value)) from None
ValueError: Expecting value: line 1 column 1 (char 0)
	at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
	at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
	at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:390)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
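The Python half of the trace pinpoints the failure: json.loads in createmd5Hash (streamexample.py, line 31) received a record that is not valid JSON, most often an empty line in the stream, and the resulting ValueError killed the worker. A minimal defensive sketch, reusing the stream and createmd5Hash names from the trace above (this pipeline is illustrative, not the asker's actual code):

import json

def safe_json_loads(line):
    # json.loads raises ValueError ("Expecting value: line 1 column 1
    # (char 0)") on empty or otherwise non-JSON input, which is the
    # exact failure in the trace above. Return None instead of raising.
    try:
        return json.loads(line)
    except ValueError:
        return None

# `stream` and `createmd5Hash` come from the trace above; dropping
# unparseable records before hashing keeps the worker from dying
# mid-shuffle.
with_hash = (stream
             .filter(lambda line: safe_json_loads(line) is not None)
             .map(lambda po: createmd5Hash(po))
             .reduceByKey(lambda s1, s2: s1))

Parsing each record twice is wasteful; folding the try/except into createmd5Hash itself (returning a sentinel for bad records and filtering those out afterwards) would avoid the second json.loads.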