org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 172, in main
    process()
  File "/usr/local/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/local/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "/usr/local/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "/usr/local/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 2371, in pipeline_func
  File "build/bdist.linux-x86_64/egg/tensorspark/core/spark_session.py", line 177, in _spark_run_fn
  File "build/bdist.linux-x86_64/egg/tensorspark/core/session_worker.py", line 34, in run
    self._run_fn(splitIndex, partition, self._param_bc.value)
  File "build/bdist.linux-x86_64/egg/tensorspark/core/session_worker.py", line 68, in _run_fn
    sutil.restore_session_hdfs(sess, user, session_path, session_meta_path, tmp_local_dir, host, port)
  File "build/bdist.linux-x86_64/egg/tensorspark/core/session_util.py", line 81, in restore_session_hdfs
    saver = tf.train.import_meta_graph(local_meta_path)
  File "/home/etri/anaconda3/envs/tensorflow2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1458, in import_meta_graph
    return _import_meta_graph_def(read_meta_graph_file(meta_graph_or_file))
  File "/home/etri/anaconda3/envs/tensorflow2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1310, in read_meta_graph_file
    raise IOError("File %s does not exist." % filename)
IOError: File /tmp/session_mnist_try_1476098552130.meta does not exist.
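The final IOError suggests the .meta checkpoint was never copied from HDFS into the executor's local /tmp before tf.train.import_meta_graph was called, so the restore step on that worker had nothing to read. A minimal sketch of a pre-flight check that turns the failure into an actionable message (the function name restore_session_meta is hypothetical, and the actual TensorFlow restore call is left as a comment so the sketch stays self-contained):

```python
import os

def restore_session_meta(local_meta_path):
    """Check that the meta-graph file reached this executor before restoring.

    The traceback shows tf.train.import_meta_graph raising IOError when the
    .meta file is absent from the worker's local /tmp; guarding the call
    makes the root cause (a failed HDFS-to-local copy) explicit.
    """
    if not os.path.exists(local_meta_path):
        raise IOError(
            "Meta graph %s is missing on this executor; verify the "
            "HDFS-to-local copy step ran on every worker node."
            % local_meta_path)
    # saver = tf.train.import_meta_graph(local_meta_path)  # actual restore
    return local_meta_path
```

One likely cause worth checking: the session was saved to /tmp on the driver (or a single node), while each Spark executor expects the file in its own local /tmp, so the copy from HDFS must run per-worker, not once.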


  • GitHub comment 3#252594408 (via GitHub by ssallys)
  • GitHub comment 902#264089144 (via GitHub by mooperd)
  • Apache-Spark load files from HDFS (via Stack Overflow by Ruofan Kong)
  • Add date field to RDD in Spark (via Unknown author)
  • PySpark Job throwing IOError (via Unknown author)
	at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
	at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
	at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:85)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)