java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

Stack Overflow | subho | 2 months ago
  1. sample spark CSV and JSON program not running in windows

     Stack Overflow | 2 months ago | subho
     java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

  2. Apache Spark Server installation requires Hadoop? Not automatically installed?

     Stack Overflow | 5 months ago | jgp
     java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
Root Cause Analysis

  1. java.io.IOException

    Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

    at org.apache.hadoop.util.Shell.getQualifiedBinPath()
  2. Hadoop
    StringUtils.<clinit>
    1. org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    2. org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    3. org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    4. org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    4 frames
  3. Hadoop
    FileInputFormat.setInputPaths
    1. org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:447)
    1 frame
  4. org.apache.spark
    JSONRelation$$anonfun$4$$anonfun$apply$1.apply
    1. org.apache.spark.sql.execution.datasources.json.JSONRelation.org$apache$spark$sql$execution$datasources$json$JSONRelation$$createBaseRdd(JSONRelation.scala:98)
    2. org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4$$anonfun$apply$1.apply(JSONRelation.scala:115)
    3. org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4$$anonfun$apply$1.apply(JSONRelation.scala:115)
    3 frames
  5. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  6. org.apache.spark
    JSONRelation$$anonfun$4.apply
    1. org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:115)
    2. org.apache.spark.sql.execution.datasources.json.JSONRelation$$anonfun$4.apply(JSONRelation.scala:109)
    2 frames
  7. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  8. org.apache.spark
    JSONRelation.dataSchema
    1. org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema$lzycompute(JSONRelation.scala:109)
    2. org.apache.spark.sql.execution.datasources.json.JSONRelation.dataSchema(JSONRelation.scala:108)
    2 frames
  9. Spark Project SQL
    HadoopFsRelation.schema
    1. org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:636)
    2. org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:635)
    2 frames
  10. org.apache.spark
    LogicalRelation.<init>
    1. org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
    1 frame
  11. Spark Project SQL
    SQLContext.jsonFile
    1. org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
    2. org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
    3. org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:244)
    4. org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:1011)
    4 frames
  12. Unknown
    json1.main
    1. json1$.main(json1.scala:28)
    2. json1.main(json1.scala)
    2 frames