Solutions on the web

via nabble.com by Unknown author, 2 years ago
org.apache.hadoop.fs.FileSystem$Statistics.getThreadStatistics()
via Apache's JIRA Issue Tracker by Staffan Arvidsson, 1 year ago
org.apache.hadoop.fs.FileSystem$Statistics.getThreadStatistics()
via nabble.com by Unknown author, 2 years ago
org.apache.hadoop.fs.FileSystem$Statistics.getThreadStatistics()
via GitHub by abossenbroek, 10 months ago
org.apache.hadoop.fs.FileSystem$Statistics.getThreadStatistics()
java.lang.NoSuchMethodException: org.apache.hadoop.fs.FileSystem$Statistics.getThreadStatistics()
	at java.lang.Class.getDeclaredMethod(Class.java:2009)
	at org.apache.spark.util.Utils$.invoke(Utils.scala:1733)
	at org.apache.spark.deploy.SparkHadoopUtil$$anonfun$getFileSystemThreadStatistics$1.apply(SparkHadoopUtil.scala:178)
	at org.apache.spark.deploy.SparkHadoopUtil$$anonfun$getFileSystemThreadStatistics$1.apply(SparkHadoopUtil.scala:178)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
	at org.apache.spark.deploy.SparkHadoopUtil.getFileSystemThreadStatistics(SparkHadoopUtil.scala:178)
	at org.apache.spark.deploy.SparkHadoopUtil.getFSBytesReadOnThreadCallback(SparkHadoopUtil.scala:138)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:220)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:210)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:99)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
	at org.apache.spark.scheduler.Task.run(Task.scala:56)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
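
The top frames show what actually fails: Spark's SparkHadoopUtil goes through org.apache.spark.util.Utils.invoke, which resolves the method with Class.getDeclaredMethod, and the Hadoop FileSystem$Statistics class on the classpath simply does not declare getThreadStatistics() (typically a sign that the Hadoop jars are older than the version Spark expects). Below is a minimal Java sketch of that reflective probe; the class and method names come straight from the trace, while the printed messages and the fallback branch are illustrative only and do not reproduce Spark's actual handling.

	import java.lang.reflect.Method;

	// Sketch of the reflective lookup that fails in the stack trace above.
	// Requires a Hadoop jar on the classpath; whether getThreadStatistics()
	// exists there depends on the Hadoop version in use.
	public class ThreadStatisticsProbe {
	    public static void main(String[] args) throws Exception {
	        Class<?> statisticsClass =
	            Class.forName("org.apache.hadoop.fs.FileSystem$Statistics");
	        try {
	            // Same lookup Spark performs via Utils.invoke:
	            Method m = statisticsClass.getDeclaredMethod("getThreadStatistics");
	            System.out.println("This Hadoop build provides " + m);
	        } catch (NoSuchMethodException e) {
	            // This branch corresponds to the exception in the trace: the
	            // Hadoop version on the classpath does not declare the method.
	            System.out.println("getThreadStatistics() is not available in this Hadoop build");
	        }
	    }
	}

Running the probe with the same Hadoop jars the executors use lands in the catch branch whenever the cluster would raise the exception above, which makes it a quick way to check for a Spark/Hadoop version mismatch.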