java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

Stack Overflow | subho | 2 months ago
  1. sample spark CSV and JSON program not running in windows
     Stack Overflow | 2 months ago | subho
     java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
  2. Spark example word count execution failed for java
     Stack Overflow | 1 year ago | SakshamB
     java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
  3. Apache Spark Developers List - Spark Error - Failed to locate the winutils binary in the hadoop binary path
     nabble.com | 8 months ago
     java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
  4. Spark Error - Failed to locate the winutils binary in the hadoop binary path
     spark-dev | 2 years ago | Naveen Madhire
     java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
  5. Re: Spark Error - Failed to locate the winutils binary in the hadoop binary path
     spark-dev | 2 years ago | Naveen Madhire
     java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.


Root Cause Analysis

  1. java.io.IOException

    Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

    at org.apache.hadoop.util.Shell.getQualifiedBinPath()
  2. Hadoop
    StringUtils.<clinit>
    1. org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    2. org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    3. org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    4. org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    4 frames
  3. Hadoop
    FileInputFormat.setInputPaths
    1. org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
    1 frame
  4. Spark
    HadoopRDD$$anonfun$getJobConf$6.apply
    1. org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    2. org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    3. org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    4. org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    4 frames
  5. Scala
    Option.map
    1. scala.Option.map(Option.scala:146)
    1 frame
  6. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
    2. org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
    3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    4 frames
  7. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  8. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    2. org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    4 frames
  9. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  10. Spark
    RDD.take
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    2. org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
    3. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    4. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    5. org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    6. org.apache.spark.rdd.RDD.take(RDD.scala:1288)
    6 frames
  11. com.databricks.spark
    DefaultSource.createRelation
    1. com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:174)
    2. com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:169)
    3. com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:147)
    4. com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:70)
    5. com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:138)
    6. com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
    7. com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
    7 frames
  12. org.apache.spark
    ResolvedDataSource$.apply
    1. org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
    1 frame
  13. Spark Project SQL
    SQLContext.load
    1. org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    2. org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)
    2 frames
  14. Unknown
    json1.main
    1. json1$.main(json1.scala:22)
    2. json1.main(json1.scala)
    2 frames