java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.


Recommended solutions based on your search

Samebug tips

via gitbooks.io by Unknown author

Download winutils.exe for your Hadoop version from https://github.com/steveloughran/winutils.

Save it to HADOOP_HOME/bin.
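
The null prefix in null\bin\winutils.exe means HADOOP_HOME (and the hadoop.home.dir system property) is not set, so Hadoop has no base directory to search under. A minimal sketch of the workaround, assuming winutils.exe was saved under the hypothetical folder C:\hadoop\bin (any directory works, as long as hadoop.home.dir points at its parent):

// Minimal sketch: tell Hadoop where bin\winutils.exe lives.
// C:\hadoop is an assumed location -- adjust to wherever you unpacked winutils.exe.
object WinutilsSetup {
  def configure(): Unit = {
    // Must run before any Spark/Hadoop class loads org.apache.hadoop.util.Shell.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")
  }
}

Setting the HADOOP_HOME environment variable to the same directory (and optionally adding %HADOOP_HOME%\bin to PATH) achieves the same effect without touching code.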

Solutions on the web

via Stack Overflow by subho, 1 year ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via Stack Overflow by Brijan Elwadhi, 1 year ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via Google Groups by Cheyenne Forbes, 1 year ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via GitHub by madhus84, 1 year ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via nabble.com by Unknown author, 1 year ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
via mail-archive.com by Unknown author, 2 years ago
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.&lt;clinit&gt;(Shell.java:293)
at org.apache.hadoop.util.StringUtils.&lt;clinit&gt;(StringUtils.java:76)
at org.apache.spark.sql.execution.datasources.json.JSONRelation.org$apache$spark$sql$execution$datasources$json$JSONRelation$$createBaseRdd(JSONRelation.scala:98)
at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4$$anonfun$apply$1.apply(JSONRelation.scala:115)
at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4$$anonfun$apply$1.apply(JSONRelation.scala:115)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:115)
at org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:109)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema$lzycompute(JSONRelation.scala:109)
at org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema(JSONRelation.scala:108)
at org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:636)
at org.apache.spark.sql.execution.datasources.LogicalRelation.&lt;init&gt;(LogicalRelation.scala:37)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:244)
at org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:1011)
at undefined.json1$.main(json1.scala:28)
at undefined.json1.main(json1.scala)
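
The trace shows the probe for winutils.exe is triggered lazily: the first call into Spark SQL (jsonFile at json1.scala:28) initializes org.apache.hadoop.util.StringUtils and Shell, and that is where the lookup fails. A hedged reconstruction of how the driver program could apply the tip above before touching Spark SQL; the class name, file paths and master URL are assumptions, not the original json1.scala:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object json1 {
  def main(args: Array[String]): Unit = {
    // Set before the first Spark SQL call so Shell's static initializer finds winutils.exe.
    // C:\hadoop is an assumed install location for the downloaded winutils.exe.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val sc = new SparkContext(new SparkConf().setAppName("json1").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // jsonFile (seen in the trace) is the deprecated Spark 1.x entry point;
    // sqlContext.read.json(...) is the equivalent current call.
    val df = sqlContext.read.json("C:\\data\\input.json")
    df.printSchema()
  }
}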
