java.io.IOException: Not a file: file:/run
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)

GitHub | ww102111 | 3 months ago
  1. GitHub comment 4#242973525

     GitHub | 3 months ago | ww102111
     java.io.IOException: Not a file: file:/run
       at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)
  2. Spark, mail # user - Error in "java.io.IOException: No input paths specified in job" - 2016-03-17, 13:22

     search-hadoop.com | 8 months ago
     java.io.IOException: No input paths specified in job
       at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:201)
       at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
  3. Re: spark with standalone HBase

     apache.org | 1 year ago
     java.io.IOException: No table was provided.
       at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:154)
  4. Re: Using Parquet from an interactive Spark shell

     Google Groups | 3 years ago | Uri Laserson
     java.io.IOException: Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
       at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
       at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
       at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
       at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
       at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
       at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)
  5. Eclipse-sdk-3.0.0 crash on start: An unexpected exception has been detected in native code outside the VM

     http://bugs.gentoo.org/ | 1 decade ago | scereze
     java.io.IOException: An error occurred while locking file "/home/seb/.eclipse/org.eclipse.platform_3.0.0/configuration/org.eclipse.core.runtime/.manager/.fileTableLock": "Value too large for defined data type". A probable reason is that the file system or Runtime Environment does not support file locking. You may want to choose a different location, or disable file locking (using the osgi.locking property), but this can cause data corruption.
       at org.eclipse.core.runtime.adaptor.Locker_JavaNio.lock(Locker_JavaNio.java:42)
       at org.eclipse.osgi.service.datalocation.FileManager.lock(FileManager.java:219)
       at org.eclipse.osgi.service.datalocation.FileManager.open(FileManager.java:420)
       at org.eclipse.core.internal.runtime.InternalPlatform.initializeRuntimeFileManager(InternalPlatform.java:390)
       at org.eclipse.core.internal.runtime.InternalPlatform.start(InternalPlatform.java:383)
       at org.eclipse.core.internal.runtime.PlatformActivator.startInternalPlatform(PlatformActivator.java:251)
       at org.eclipse.core.internal.runtime.PlatformActivator.start(PlatformActivator.java:64)
       at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:958)
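The second result above ("No input paths specified in job") fails one step earlier in the same code path: FileInputFormat.listStatus runs before getSplits and rejects a job whose input-path list is empty, which typically happens when the input directory property was never set or an empty string was passed for it. A rough Python sketch of that guard (an illustrative mimic for reasoning about the trace, not the real Hadoop code; get_splits and its placeholder split dicts are invented here, only the exception message comes from the frames above):

```python
def get_splits(input_dirs):
    """Loose mimic of org.apache.hadoop.mapred.FileInputFormat:
    listStatus() rejects an empty input-path list before getSplits()
    ever computes a split."""
    # Corresponds to the FileInputFormat.listStatus frame in the trace.
    if not input_dirs:
        raise IOError("No input paths specified in job")
    # Corresponds to the FileInputFormat.getSplits frame: one
    # placeholder "split" per configured input path.
    return [{"path": p, "start": 0} for p in input_dirs]

print(get_splits(["file:/data/part-00000"]))
```

In the real job the fix is simply to make sure the input path is configured and non-empty before the action that triggers split computation runs.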

    Not finding the right solution?
    Take a tour to get the most out of Samebug.

    Tired of useless tips?

    Automated exception search integrated into your IDE

    Root Cause Analysis

    1. java.io.IOException

      Not a file: file:/run
      at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)

      at org.apache.spark.rdd.HadoopRDD.getPartitions()
    2. Spark
      RDD$$anonfun$partitions$2.apply
      1. org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
      2. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
      3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
      3 frames
    3. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:120)
      1 frame
    4. Spark
      RDD.partitions
      1. org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
      1 frame
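The root cause above is the usual shape of this failure: the input path handed to the job, here file:/run, is a directory that itself contains subdirectories, and the old mapred FileInputFormat lists an input directory only one level deep, so the first child directory it meets aborts split computation with "Not a file". Common workarounds are pointing the job at a glob of the leaf files (e.g. a pattern like file:/run/*/*) or, on Hadoop versions that support it, enabling recursive listing via the mapreduce.input.fileinputformat.input.dir.recursive property. The one-level check can be mimicked in a few lines of Python (an illustrative sketch, not the real Hadoop code; one_level_input_files is a name invented here):

```python
import os
import tempfile

def one_level_input_files(input_dir):
    """Illustrative mimic of the non-recursive listing behind
    FileInputFormat.getSplits: every entry directly under the input
    directory must be a plain file, or the whole job fails."""
    files = []
    for name in sorted(os.listdir(input_dir)):
        child = os.path.join(input_dir, name)
        if os.path.isdir(child):
            # The exception seen in the trace above.
            raise IOError("Not a file: file:%s" % child)
        files.append(child)
    return files

# Demo: a layout like /run (files mixed with subdirectories) fails.
root = tempfile.mkdtemp()
open(os.path.join(root, "part-00000"), "w").close()
os.mkdir(os.path.join(root, "subdir"))
try:
    one_level_input_files(root)
except IOError as err:
    print(err)  # message names the offending subdirectory
```

A directory containing only plain files, by contrast, lists cleanly, which is why flattening the input layout (or globbing down to the files) makes the Spark job's partition computation succeed.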