org.apache.spark.SparkException: Python worker did not connect back in time

Stack Overflow | Jay.b.j | 1 month ago
  1. Exception in task 0.0 in stage 0.0 when running pyspark

     Stack Overflow | 1 month ago | Jay.b.j
     org.apache.spark.SparkException: Python worker did not connect back in time
  2. GitHub comment 1271#264059226

     GitHub | 4 months ago | dthboyd
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.io.IOException: Cannot run program "/Users/davidboyd/anaconda": error=13, Permission denied
     (a configuration sketch for this interpreter-path problem follows the list)
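
Both results fail during Python worker startup: in the first, the worker never connects back within the factory's timeout; in the second, Spark cannot even exec the configured interpreter because the path names the Anaconda install directory rather than a python binary. Below is a minimal sketch, assuming a local[*] run, of pointing PySpark at an executable interpreter before the SparkContext starts; the /bin/python suffix is an illustrative assumption, not taken from the report.

    import os
    from pyspark import SparkConf, SparkContext

    # PYSPARK_PYTHON must point at an executable interpreter, not an install
    # directory. The /bin/python suffix below is an assumption for illustration.
    os.environ["PYSPARK_PYTHON"] = "/Users/davidboyd/anaconda/bin/python"
    os.environ["PYSPARK_DRIVER_PYTHON"] = "/Users/davidboyd/anaconda/bin/python"

    conf = SparkConf().setMaster("local[*]").setAppName("pyspark-worker-check")
    sc = SparkContext(conf=conf)

    # Any action that ships a Python closure forces worker startup; if the
    # interpreter path is wrong, the IOException or the connect-back timeout
    # surfaces here rather than at SparkContext creation.
    print(sc.parallelize(range(10)).map(lambda x: x * x).collect())
    sc.stop()

Firewalls or security tools that block loopback connections are another commonly reported trigger for the same connect-back timeout, even when the interpreter path is correct.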

    Root Cause Analysis

    org.apache.spark.SparkException: Python worker did not connect back in time
        at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:138)
        at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:67)
        at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:116)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:128)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
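
The trace bottoms out in PythonWorkerFactory.createSimpleWorker, which launches the worker process and waits for it to connect back over a local socket; the exception fires when that call-back does not arrive before the timeout. The following is a minimal sketch of that launch-and-connect-back pattern, an illustration only and not Spark's implementation; the timeout value, worker body, and function names are assumptions.

    import socket
    import subprocess
    import sys

    # Illustrative timeout; Spark's actual value is not taken from this page.
    CONNECT_BACK_TIMEOUT_SECS = 10

    # Hypothetical worker body: a child interpreter that connects back to the
    # port passed as its first argument.
    WORKER_CODE = (
        "import socket, sys; "
        "socket.create_connection(('127.0.0.1', int(sys.argv[1])))"
    )

    def launch_worker_and_wait():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("127.0.0.1", 0))   # ephemeral port on the loopback interface
        server.listen(1)
        server.settimeout(CONNECT_BACK_TIMEOUT_SECS)
        port = server.getsockname()[1]

        worker = subprocess.Popen([sys.executable, "-c", WORKER_CODE, str(port)])
        try:
            # Blocks for at most the timeout; raises socket.timeout if the
            # worker never calls back.
            conn, _ = server.accept()
            return conn
        except socket.timeout:
            worker.kill()
            raise RuntimeError("Python worker did not connect back in time")
        finally:
            server.close()

    if __name__ == "__main__":
        conn = launch_worker_and_wait()
        print("worker connected")
        conn.close()

The bounded accept() is the design point of the sketch: a worker that cannot start at all (wrong interpreter path, blocked loopback port) shows up on the parent side as a timeout rather than a hang.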