java.io.IOException


  • Nutch - User - Nutch 2.1 / Hbase / Gora / Solr
    via Unknown author,
  • I'm trying to get a basic Spark example running using the mongoDB Hadoop connector. I'm using Hadoop version *2.6.0* and version *1.3.1* of mongo-hadoop. I'm not sure where exactly to place the jars for this Hadoop version. Here are the locations I've tried:
    - $HADOOP_HOME/libexec/share/hadoop/mapreduce
    - $HADOOP_HOME/libexec/share/hadoop/mapreduce/lib
    - $HADOOP_HOME/libexec/share/hadoop/hdfs
    - $HADOOP_HOME/libexec/share/hadoop/hdfs/lib
    Here is a snippet of the code I'm using to load the mongo collection into HDFS:
    {code}
    Configuration bsonConfig = new Configuration();
    bsonConfig.set("mongo.job.input.format", "MongoInputFormat.class");
    JavaPairRDD<Object, BSONObject> zipData = sc.newAPIHadoopFile(
        "mongodb://127.0.0.1:27017/zipsdb.zips",
        MongoInputFormat.class, Object.class, BSONObject.class, bsonConfig);
    {code}
    I get the following error no matter where the jar is placed:
    {noformat}
    Exception in thread "main" java.io.IOException: No FileSystem for scheme: mongodb
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:505)
        at org.apache.spark.SparkContext.newAPIHadoopFile(SparkContext.scala:774)
        at org.apache.spark.api.java.JavaSparkContext.newAPIHadoopFile(JavaSparkContext.scala:471)
    {noformat}
    I don't see any other errors in the Hadoop logs. I suspect I'm missing something in my configuration, or that Hadoop 2.6.0 is not compatible with this connector. Any help is much appreciated.
    via Navin Viswanath,
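    A possible tip for the post above: the stack trace shows that `newAPIHadoopFile` hands the `mongodb://` URI to Hadoop's `FileSystem` machinery, which only knows file-like schemes (`hdfs`, `file`, `s3`, ...). Since `MongoInputFormat` is not a `FileInputFormat`, a sketch of a fix is to use `newAPIHadoopRDD` instead, which skips the path lookup and reads the connection URI from the `Configuration` via the connector's `mongo.input.uri` key. This is an untested sketch: it assumes Spark, Hadoop, and the mongo-hadoop 1.3.1 jars are on the classpath (e.g. passed via `spark-submit --jars`) and that a mongod is reachable at the URI shown.

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.bson.BSONObject;
    import com.mongodb.hadoop.MongoInputFormat;

    public class MongoSparkSketch {
        public static void main(String[] args) {
            JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("mongo-example"));

            Configuration mongoConfig = new Configuration();
            // "mongo.input.uri" tells the connector which collection to read.
            // No file path is involved, so no FileSystem is looked up and the
            // "No FileSystem for scheme: mongodb" error cannot occur.
            mongoConfig.set("mongo.input.uri",
                "mongodb://127.0.0.1:27017/zipsdb.zips");

            // newAPIHadoopRDD takes (conf, inputFormat, keyClass, valueClass).
            JavaPairRDD<Object, BSONObject> zipData = sc.newAPIHadoopRDD(
                mongoConfig, MongoInputFormat.class,
                Object.class, BSONObject.class);

            System.out.println("documents read: " + zipData.count());
            sc.stop();
        }
    }
    ```

    With this approach the jar placement question also becomes moot: shipping the connector jars with `--jars` (or bundling them into the application jar) is usually enough, since nothing needs to be registered with Hadoop's `FileSystem` service loader.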
    • java.io.IOException: No FileSystem for scheme: hdfs
          at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
          at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
          at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
          at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
          at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
          at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
          at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
          at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
          at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
          at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:518)
          at cn.hadoop.hdfs.main.WordCountV2.run(WordCountV2.java:134)
          at cn.hadoop.hdfs.main.WordCountV2.main(WordCountV2.java:108)
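    A possible tip for the `scheme: hdfs` variant above: this usually means the `hdfs` implementation was never registered, either because the hadoop-hdfs jar is missing from the classpath or because a fat-jar build overwrote the `META-INF/services/org.apache.hadoop.fs.FileSystem` service file (with Maven, the shade plugin's `ServicesResourceTransformer` merges these files instead). A workaround sketch, assuming hadoop-hdfs is actually on the classpath, is to register the implementations explicitly in the job `Configuration` so the `ServiceLoader` lookup is bypassed:

    ```java
    import org.apache.hadoop.conf.Configuration;

    public class HdfsSchemeWorkaround {
        // Returns a Configuration with the hdfs and local FileSystem
        // implementations wired up explicitly, avoiding the service-file
        // lookup that throws "No FileSystem for scheme: hdfs".
        public static Configuration withExplicitFileSystems() {
            Configuration conf = new Configuration();
            // Class lives in the hadoop-hdfs artifact.
            conf.set("fs.hdfs.impl",
                "org.apache.hadoop.hdfs.DistributedFileSystem");
            // Class lives in the hadoop-common artifact.
            conf.set("fs.file.impl",
                "org.apache.hadoop.fs.LocalFileSystem");
            return conf;
        }
    }
    ```

    The same keys can instead be set once in `core-site.xml`; fixing the fat-jar service-file merge is the cleaner long-term fix, since it covers every scheme rather than the two listed here.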
