Recommended solutions based on your search

Solutions on the web

via Apache's JIRA Issue Tracker by Peter Haumer, 1 year ago
Not implemented by the TFS FileSystem implementation
via apache.org by Unknown author, 2 years ago
via Stack Overflow by Govardhana Rao Ganji, 2 years ago
via Stack Overflow by user3712581, 2 years ago
Not implemented by the DistributedFileSystem FileSystem implementation
via Stack Overflow by AbtPst, 1 year ago
Not implemented by the DistributedFileSystem FileSystem implementation
java.lang.UnsupportedOperationException: Not implemented by the TFS FileSystem implementation
    at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:213)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2401)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2411)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
    at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:653)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:389)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
    at org.apache.spark.SparkContext$$anonfun$28.apply(SparkContext.scala:762)
    at org.apache.spark.SparkContext$$anonfun$28.apply(SparkContext.scala:762)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:172)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:172)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:172)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:196)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1535)
    at org.apache.spark.rdd.RDD.reduce(RDD.scala:900)
    at org.apache.spark.api.java.JavaRDDLike$class.reduce(JavaRDDLike.scala:357)
    at org.apache.spark.api.java.AbstractJavaRDDLike.reduce(JavaRDDLike.scala:46)
    at com.databricks.apps.logs.LogAnalyzer.main(LogAnalyzer.java:60)
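
What the trace shows: the failure happens inside Hadoop's filesystem discovery, before any application code runs. FileSystem.loadFileSystems() (FileSystem.java:2401 above) uses Java's ServiceLoader to instantiate every FileSystem implementation registered on the classpath and calls getScheme() on each one. The base-class getScheme() simply throws UnsupportedOperationException, so a single registered implementation that never overrides it (here Tachyon's TFS shim, typically present because of an old tachyon-client jar or Hadoop jars of mixed versions) aborts loading for all filesystems, including the one the job actually needs.

Below is a minimal diagnostic sketch in Java, assuming only that hadoop-common is on the classpath; the FsServiceProbe class name is made up for this example. It mirrors the ServiceLoader walk that loadFileSystems() performs, but keeps going past a broken implementation and prints which registered class is the stale one.

import java.util.ServiceLoader;

import org.apache.hadoop.fs.FileSystem;

// Hypothetical probe class, not part of Hadoop: lists every FileSystem
// registered via META-INF/services and flags the ones whose getScheme()
// still falls through to the throwing base-class implementation.
public class FsServiceProbe {
    public static void main(String[] args) {
        for (FileSystem fs : ServiceLoader.load(FileSystem.class)) {
            try {
                System.out.println(fs.getClass().getName() + " -> " + fs.getScheme());
            } catch (UnsupportedOperationException e) {
                // Same exception the stack trace above reports.
                System.out.println(fs.getClass().getName() + " -> STALE: " + e.getMessage());
            }
        }
    }
}

Once the probe names the offending class, the remedy usually reported for this message is a classpath fix rather than a code fix: remove or upgrade the jar that ships the stale implementation, or align the project's hadoop-client dependency with the Hadoop version Spark was built against.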