org.apache.pig.backend.executionengine.ExecException: ERROR 0: java.io.IOException: No FileSystem for scheme: mongodb

JIRA | Russell Jurney | 2 years ago
  1.

    -----------------------
    2015-01-26 14:40:26,421 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader - Current split being processed file:/tmp/atomic_nile/views_by_user.avro/part-r-00000.avro:704643072+10604132
    2015-01-26 14:40:26,422 [LocalJobRunner Map Task Executor #0] INFO com.mongodb.hadoop.pig.MongoStorage - Store Location Config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, file:/tmp/hadoop-rjurney/mapred/local/localRunner/rjurney/job_local2089078013_0001/job_local2089078013_0001.xml For URI: mongodb://localhost:27017/atomic_nile.views_by_user
    2015-01-26 14:40:26,423 [LocalJobRunner Map Task Executor #0] INFO com.mongodb.hadoop.pig.MongoStorage - OutputFormat... com.mongodb.hadoop.MongoOutputFormat@7c945cbe
    2015-01-26 14:40:26,423 [Thread-46] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
    2015-01-26 14:40:26,424 [Thread-46] INFO com.mongodb.hadoop.pig.MongoStorage - Store Location Config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, file:/tmp/hadoop-rjurney/mapred/local/localRunner/rjurney/job_local2089078013_0001/job_local2089078013_0001.xml For URI: mongodb://localhost:27017/atomic_nile.views_by_user should cleanup job
    2015-01-26 14:40:26,426 [Thread-46] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local2089078013_0001
    java.lang.Exception: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
    Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
        at com.mongodb.hadoop.MongoOutputFormat.getRecordWriter(MongoOutputFormat.java:44)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:81)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:644)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
    2015-01-26 14:40:29,983 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
    2015-01-26 14:40:29,983 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local2089078013_0001 has failed! Stop running all dependent jobs
    2015-01-26 14:40:29,983 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
    2015-01-26 14:40:29,984 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
    2015-01-26 14:40:29,987 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
    2015-01-26 14:40:29,988 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
    2015-01-26 14:40:29,989 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
    HadoopVersion    PigVersion         UserId    StartedAt            FinishedAt           Features
    2.6.0            0.14.0-SNAPSHOT    rjurney   2015-01-26 14:40:22  2015-01-26 14:40:29  UNKNOWN
    Failed!
    Failed Jobs:
    JobId                       Alias           Feature    Message               Outputs
    job_local2089078013_0001    views_by_user   MAP_ONLY   Message: Job failed!  mongodb://localhost:27017/atomic_nile.views_by_user,
    Input(s): Failed to read data from "/tmp/atomic_nile/views_by_user.avro"
    Output(s): Failed to produce result in "mongodb://localhost:27017/atomic_nile.views_by_user"
    Counters:
    Total records written : 0
    Total bytes written : 0
    Spillable Memory Manager spill count : 0
    Total bags proactively spilled: 0
    Total records proactively spilled: 0
    Job DAG: job_local2089078013_0001
    2015-01-26 14:40:29,989 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
    2015-01-26 14:40:29,991 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 0: java.io.IOException: No FileSystem for scheme: mongodb
    2015-01-26 14:40:29,991 [main] ERROR org.apache.pig.tools.grunt.Grunt - org.apache.pig.backend.executionengine.ExecException: ERROR 0: java.io.IOException: No FileSystem for scheme: mongodb
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:535)
        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
        at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
        at org.apache.pig.PigServer.execute(PigServer.java:1364)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:415)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:398)
        at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
        at org.apache.pig.Main.run(Main.java:624)
        at org.apache.pig.Main.main(Main.java:170)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    Caused by: java.io.IOException: No FileSystem for scheme: mongodb
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.pig.StoreFunc.cleanupOnFailureImpl(StoreFunc.java:193)
        at org.apache.pig.StoreFunc.cleanupOnFailure(StoreFunc.java:161)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:526)
        ... 18 more
    Details also at logfile: /private/tmp/pig_1422312019838.log
    2015-01-26 14:40:30,007 [main] INFO org.apache.pig.Main - Pig script completed in 10 seconds and 263 milliseconds (10263 ms)

    JIRA | 2 years ago | Russell Jurney
    org.apache.pig.backend.executionengine.ExecException: ERROR 0: java.io.IOException: No FileSystem for scheme: mongodb
  2.

    error writing to mongodb from pig

    Stack Overflow | 1 year ago | onebitaway
    org.apache.pig.backend.executionengine.ExecException: ERROR 0: java.io.IOException: No FileSystem for scheme: mongodb
  3.

    MongoDB Hadoop error: no FileSystem for scheme: mongodb

    Stack Overflow | 2 years ago | Navin Viswanath
    java.io.IOException: No FileSystem for scheme: mongodb
  4.

    I'm trying to get a basic Spark example running using the MongoDB Hadoop connector. I'm using Hadoop version *2.6.0* and version *1.3.1* of mongo-hadoop. I'm not sure where exactly to place the jars for this Hadoop version. Here are the locations I've tried:
    - $HADOOP_HOME/libexec/share/hadoop/mapreduce
    - $HADOOP_HOME/libexec/share/hadoop/mapreduce/lib
    - $HADOOP_HOME/libexec/share/hadoop/hdfs
    - $HADOOP_HOME/libexec/share/hadoop/hdfs/lib

    Here is a snippet of the code I'm using to load the mongo collection into HDFS:
    {code}
    Configuration bsonConfig = new Configuration();
    bsonConfig.set("mongo.job.input.format", "MongoInputFormat.class");
    JavaPairRDD<Object,BSONObject> zipData = sc.newAPIHadoopFile("mongodb://127.0.0.1:27017/zipsdb.zips",
        MongoInputFormat.class, Object.class, BSONObject.class, bsonConfig);
    {code}

    I get the following error no matter where the jar is placed:
    {noformat}
    Exception in thread "main" java.io.IOException: No FileSystem for scheme: mongodb
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:505)
        at org.apache.spark.SparkContext.newAPIHadoopFile(SparkContext.scala:774)
        at org.apache.spark.api.java.JavaSparkContext.newAPIHadoopFile(JavaSparkContext.scala:471)
    {noformat}

    I don't see any other errors in the Hadoop logs. I suspect I'm missing something in my configuration, or that Hadoop 2.6.0 is not compatible with this connector. Any help is much appreciated.

    JIRA | 2 years ago | Navin Viswanath
    java.io.IOException: No FileSystem for scheme: mongodb
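
    In the Spark report above, the trace shows FileInputFormat.addInputPath trying to resolve the mongodb:// URI as a Hadoop filesystem path, which is what newAPIHadoopFile always does. A minimal sketch of the usual workaround, assuming mongo-hadoop 1.3.x and the MongoDB Java driver are already on the Spark classpath (the class name, app name, and master setting here are placeholders), drives the connector through the mongo.input.uri key and newAPIHadoopRDD instead of a path:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.spark.SparkConf;
        import org.apache.spark.api.java.JavaPairRDD;
        import org.apache.spark.api.java.JavaSparkContext;
        import org.bson.BSONObject;

        import com.mongodb.hadoop.MongoInputFormat;

        public class MongoHadoopReadSketch {
            public static void main(String[] args) {
                JavaSparkContext sc = new JavaSparkContext(
                        new SparkConf().setAppName("mongo-hadoop-read").setMaster("local[*]"));

                // mongo-hadoop reads its source collection from mongo.input.uri;
                // no Hadoop FileSystem is involved, so no "mongodb" scheme lookup happens.
                Configuration mongoConfig = new Configuration();
                mongoConfig.set("mongo.input.uri", "mongodb://127.0.0.1:27017/zipsdb.zips");

                // newAPIHadoopRDD takes the InputFormat class directly instead of a file path,
                // which is the part newAPIHadoopFile could not resolve for mongodb://.
                JavaPairRDD<Object, BSONObject> zipData = sc.newAPIHadoopRDD(
                        mongoConfig, MongoInputFormat.class, Object.class, BSONObject.class);

                System.out.println("documents read: " + zipData.count());
                sc.stop();
            }
        }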


Root Cause Analysis

  1. java.io.IOException

    No FileSystem for scheme: mongodb

    at org.apache.hadoop.fs.FileSystem.getFileSystemClass()
  2. Hadoop
    Path.getFileSystem
    1. org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
    2. org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
    3. org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    4. org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
    5. org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
    6. org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    7. org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    7 frames
  3. org.apache.pig
    Main.main
    1. org.apache.pig.StoreFunc.cleanupOnFailureImpl(StoreFunc.java:193)
    2. org.apache.pig.StoreFunc.cleanupOnFailure(StoreFunc.java:161)
    3. org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:526)
    4. org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
    5. org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
    6. org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
    7. org.apache.pig.PigServer.execute(PigServer.java:1364)
    8. org.apache.pig.PigServer.executeBatch(PigServer.java:415)
    9. org.apache.pig.PigServer.executeBatch(PigServer.java:398)
    10. org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:171)
    11. org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
    12. org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
    13. org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
    14. org.apache.pig.Main.run(Main.java:624)
    15. org.apache.pig.Main.main(Main.java:170)
    15 frames
  4. Java RT
    Method.invoke
    1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    4. java.lang.reflect.Method.invoke(Method.java:606)
    4 frames
  5. Hadoop
    RunJar.main
    1. org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    2. org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    2 frames
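
A note on the chain of frames above: in the Pig reports, the map task originally dies with the IncompatibleClassChangeError, and "No FileSystem for scheme: mongodb" only surfaces afterwards, when StoreFunc.cleanupOnFailureImpl hands the mongodb:// output URI to Path.getFileSystem. Hadoop maps a URI scheme to a FileSystem implementation, and nothing is registered for mongodb. A minimal sketch of that lookup, assuming only hadoop-common on the classpath (class name is a placeholder), reproduces the exception in the frames above:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SchemeLookupSketch {
        public static void main(String[] args) {
            // The same URI Pig tried to clean up after the failed STORE.
            Path output = new Path("mongodb://localhost:27017/atomic_nile.views_by_user");
            try {
                // getFileSystem asks FileSystem.getFileSystemClass for a "mongodb"
                // implementation; none is registered, so it throws the IOException
                // shown at the top of the root cause analysis.
                FileSystem fs = output.getFileSystem(new Configuration());
                System.out.println("resolved to: " + fs.getClass().getName());
            } catch (IOException e) {
                System.err.println(e.getMessage()); // No FileSystem for scheme: mongodb
            }
        }
    }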