java.io.IOException

Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

Samebug tips (1)

Download the winutils.exe that matches your Hadoop version from https://github.com/steveloughran/winutils and save it to %HADOOP_HOME%\bin. The "null" in the error message is the unresolved Hadoop home directory: neither the hadoop.home.dir system property nor the HADOOP_HOME environment variable is set, so point one of them at the directory that contains bin\winutils.exe, as in the sketch below.
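
If setting the HADOOP_HOME environment variable is inconvenient, the same location can be supplied from inside the application through the hadoop.home.dir system property, which org.apache.hadoop.util.Shell checks before falling back to HADOOP_HOME. A minimal Scala sketch, assuming winutils.exe was saved to C:\hadoop\bin (the path and object name are placeholders, not part of the original report):

    import org.apache.spark.{SparkConf, SparkContext}

    object Json1 {
      def main(args: Array[String]): Unit = {
        // Must run before the first Hadoop class loads: Shell's static
        // initializer reads hadoop.home.dir, then the HADOOP_HOME
        // environment variable. C:\hadoop is an assumed install location;
        // winutils.exe must sit under its bin\ subfolder.
        System.setProperty("hadoop.home.dir", "C:\\hadoop")

        val conf = new SparkConf().setAppName("json1").setMaster("local[*]")
        val sc = new SparkContext(conf)
        // ... load the CSV/JSON input as before ...
        sc.stop()
      }
    }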


rp

Solutions on the web (161)

  • Stack trace

    java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
        at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
        at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
        at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
        at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
        at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
        at scala.Option.map(Option.scala:146)
        at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
        at org.apache.spark.rdd.RDD.take(RDD.scala:1288)
        at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:174)
        at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:169)
        at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:147)
        at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:70)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:138)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)
        at json1$.main(json1.scala:22)
        at json1.main(json1.scala)
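
    Note that the exception escapes from Shell's static initializer (Shell.<clinit>) while spark-csv reads the first line to infer the schema, which is why the fix has to take effect before the first Hadoop call. A quick diagnostic sketch that mirrors the lookup Shell performs (property first, then environment variable; runnable in the Scala REPL):

        import java.io.File

        // Mirrors the probe that fails above: hadoop.home.dir takes
        // precedence, HADOOP_HOME is the fallback; if both are missing,
        // Hadoop builds the literal path "null\bin\winutils.exe".
        val home = sys.props.get("hadoop.home.dir").orElse(sys.env.get("HADOOP_HOME"))
        home match {
          case Some(h) => println(s"winutils found: ${new File(h, "bin\\winutils.exe").exists}")
          case None    => println("Neither hadoop.home.dir nor HADOOP_HOME is set")
        }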

    Users with the same issue

    Unknown visitor, once
    Unknown visitor, once
    tyson925, once
    Unknown visitor, once
    Unknown visitor, once
    18 more bugmates