java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V

GitHub | timodonnell | 1 year ago
  1.

    no such method error: closeQuietly

    GitHub | 1 year ago | timodonnell
    java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
  2.

    NoSuchMethodError at org.apache.hadoop.hdfs.DFSInputStream

    Stack Overflow | 3 years ago | Garath
    java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
  3.

    RE: java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly

    accumulo-user | 4 years ago | Newman, Elise
    java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
  4.

    HDFS, Hadoop 2.2 and Spark error

    Google Groups | 3 years ago | Richard Conway
    java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
      at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1052)
      at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:533)
      at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
  5.

    Library problem in "commons-io" (Probably some conflict with Hadoop)

    Google Groups | 3 years ago | Yu-Ting Chen
    java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V

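    All of these reports share the same root cause: org.apache.hadoop.hdfs.DFSInputStream calls IOUtils.closeQuietly(Closeable), an overload that only exists in commons-io 2.0 and later, while an older commons-io jar earlier on the classpath shadows the version Hadoop was built against. A minimal diagnostic sketch (the class name CheckCommonsIo is ours; everything else is standard JDK and commons-io API) that prints which jar IOUtils was actually loaded from and whether the Closeable overload is present:

      import java.io.Closeable;
      import org.apache.commons.io.IOUtils;

      public class CheckCommonsIo {
          public static void main(String[] args) {
              // Which jar did IOUtils actually come from?
              System.out.println(IOUtils.class.getProtectionDomain()
                      .getCodeSource().getLocation());
              try {
                  // Probe reflectively for the overload the Hadoop code expects.
                  IOUtils.class.getMethod("closeQuietly", Closeable.class);
                  System.out.println("closeQuietly(Closeable) is present");
              } catch (NoSuchMethodException e) {
                  // Matches the NoSuchMethodError above: a commons-io
                  // older than 2.0 won the classpath race.
                  System.out.println("closeQuietly(Closeable) is missing");
              }
          }
      }

    If the printed location is a commons-io 1.x jar (often pulled in transitively by an older dependency), exclude it from the build or make sure commons-io 2.4, the version Hadoop 2.x ships with, comes first on the classpath.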

    Root Cause Analysis

    1. java.lang.NoSuchMethodError

      org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V

      at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader()
    2. Apache Hadoop HDFS
      DFSInputStream.read
      1. org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1186)
      2. org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:575)
      3. org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:792)
      4. org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:839)
      5. org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:644)
      5 frames
    3. Java RT
      FilterInputStream.read
      1. java.io.FilterInputStream.read(FilterInputStream.java:83)
      1 frame
    4. org.seqdoop.hadoop_bam
      AnySAMInputFormat.createRecordReader
      1. org.seqdoop.hadoop_bam.SAMFormat.inferFromData(SAMFormat.java:53)
      2. org.seqdoop.hadoop_bam.AnySAMInputFormat.getFormat(AnySAMInputFormat.java:147)
      3. org.seqdoop.hadoop_bam.AnySAMInputFormat.createRecordReader(AnySAMInputFormat.java:179)
      3 frames
    5. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:131)
      2. org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:104)
      3. org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:66)
      4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      5. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      6. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      8. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      9. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      10. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      11. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      12. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      13. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      14. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      15. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      16. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      17. org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
      18. org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
      19. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      20. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      21. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      22. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      23. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      24. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      25. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      26. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      27. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      28. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
      29. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      30. org.apache.spark.scheduler.Task.run(Task.scala:64)
      31. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
      31 frames
    6. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
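
    Read bottom-up, the trace says: a Spark executor task computes a NewHadoopRDD partition, Hadoop-BAM's AnySAMInputFormat sniffs the file format through an HDFS DFSInputStream, and that stream's read path is the first code to touch the missing commons-io overload. Because NoSuchMethodError is a linkage problem, the fix lives on the classpath, not in the code. A small sketch (the class name FindDuplicateCommonsIo is ours) for listing every copy of IOUtils the JVM can see; more than one URL means two commons-io versions are competing:

      import java.net.URL;
      import java.util.Enumeration;

      public class FindDuplicateCommonsIo {
          public static void main(String[] args) throws Exception {
              // Every jar (or directory) on the classpath containing IOUtils.
              Enumeration<URL> copies = FindDuplicateCommonsIo.class
                      .getClassLoader()
                      .getResources("org/apache/commons/io/IOUtils.class");
              while (copies.hasMoreElements()) {
                  System.out.println(copies.nextElement());
              }
          }
      }

    Once the stale jar is identified, remove it or exclude it in the build; on Spark, one common workaround is setting spark.executor.userClassPathFirst=true so the application's commons-io 2.4 takes precedence over whatever the cluster ships.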