Recommended solutions based on your search

Samebug tips

  1. via gitbooks.io by Unknown author

    Download the winutils.exe for your Hadoop version from https://github.com/steveloughran/winutils and save it to HADOOP_HOME/bin (a minimal setup sketch follows below).
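The "null" in "null\bin\winutils.exe" comes from HADOOP_HOME being unset: on Windows, Hadoop's Shell class resolves winutils.exe from the hadoop.home.dir system property, falling back to the HADOOP_HOME environment variable. If setting the environment variable is not convenient, the property can be set in code before the first Hadoop class loads. A minimal sketch in Scala, assuming winutils.exe has been saved to the hypothetical directory C:\hadoop\bin:

import org.apache.spark.{SparkConf, SparkContext}

object WinutilsSetup {
  def main(args: Array[String]): Unit = {
    // Hypothetical location: C:\hadoop must contain bin\winutils.exe.
    // Hadoop's Shell class resolves winutils.exe from hadoop.home.dir
    // (or the HADOOP_HOME environment variable), so set it before the
    // first Hadoop class initializes, i.e. before creating the SparkContext.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val sc = new SparkContext(
      new SparkConf().setAppName("winutils-setup").setMaster("local[*]"))

    // Any Hadoop-backed read (textFile, spark-csv, ...) previously failed in
    // Shell's static initializer; with hadoop.home.dir set it should succeed.
    println(sc.textFile("data/sample.txt").take(1).mkString)  // hypothetical input file

    sc.stop()
  }
}

The property has to be set before the SparkContext (or any other Hadoop-touching class) is created, because Shell reads it in a static initializer and the resolved path is not re-checked afterwards.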

Solutions on the web

via Stack Overflow by Elvish_Blade, 1 year ago
via Stack Overflow by Amitabh Ranjan, 2 years ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via Cloudera Open Source by Pavel Ganelin, 2 years ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via Cloudera Open Source by Pavel Ganelin, 1 year ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via cloudera.org by Unknown author, 2 years ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via Google Groups by Cheyenne Forbes, 1 year ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
	at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
	at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
	at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
	at scala.Option.map(Option.scala:146)
	at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.RDD.take(RDD.scala:1288)
	at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:174)
	at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:169)
	at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:147)
	at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:70)
	at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:138)
	at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
	at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
	at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
	at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)
	at json1$.main(json1.scala:22)
	at json1.main(json1.scala)
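Reading the trace from the bottom up: json1.scala:22 calls SQLContext.load with the com.databricks.spark.csv source; spark-csv's schema inference (CsvRelation.inferSchema via firstLine and RDD.take) builds a Hadoop job configuration, which runs the static initializers of StringUtils and Shell, and Shell is where winutils.exe fails to resolve. A rough, hypothetical reconstruction of that call site with the workaround applied up front (the app structure, CSV path, and options are assumptions, not taken from the trace):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object json1 {
  def main(args: Array[String]): Unit = {
    // Workaround from the tip above: point Hadoop at the directory that
    // contains bin\winutils.exe before any Hadoop class initializes.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")  // hypothetical path

    val sc = new SparkContext(
      new SparkConf().setAppName("json1").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Roughly what json1.scala:22 does according to the trace: load a CSV
    // through spark-csv; inferring the schema reads the first line via
    // RDD.take, which is where Hadoop's Shell class got initialized.
    val df = sqlContext.load(
      "com.databricks.spark.csv",
      Map("path" -> "data/input.csv",  // hypothetical input file
          "header" -> "true"))

    df.printSchema()
    sc.stop()
  }
}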