
Recommended solutions based on your search

Solutions on the web

via spark-user by Vipul Pandey, 1 year ago
This is supposed to be overridden by subclasses.
via oschina.net by Unknown author, 1 year ago
via Apache's JIRA Issue Tracker by Hari Shreedharan, 1 year ago
This is supposed to be overridden by subclasses.
via Spring JIRA by Thomas Risberg, 1 year ago
This is supposed to be overridden by subclasses.
java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
	at com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetFileInfoRequestProto.getSerializedSize(ClientNamenodeProtocolProtos.java:30042)
	at com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.constructRpcRequest(ProtobufRpcEngine.java:149)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:193)
	at $Proxy14.getFileInfo(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy14.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:628)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1545)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:805)
	at org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1670)
	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1616)
	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:174)
	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:205)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:140)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
	at org.apache.spark.rdd.FlatMappedRDD.getPartitions(FlatMappedRDD.scala:30)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
	at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:58)
	at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:354)
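A note on what this trace usually means: `GeneratedMessage.getUnknownFields` throws "This is supposed to be overridden by subclasses." when a message class generated by a newer protoc (here Hadoop 2.x's `ClientNamenodeProtocolProtos`, generated with protobuf 2.5.0) runs against an older protobuf-java runtime (2.4.x) that ended up on the classpath, e.g. pulled in transitively by Spark or another dependency. The usual fix in the threads above is to align the protobuf-java version with the one Hadoop was built against. As a debugging aid, the sketch below (a generic classpath check, not part of any of the linked solutions; the class name `ClasspathCheck` is my own) prints which jar actually supplies `com.google.protobuf.GeneratedMessage`:

```java
// ClasspathCheck: report which jar (if any) a class is loaded from.
// Useful for spotting an unexpected protobuf-java 2.4.x jar on the classpath.
public class ClasspathCheck {

    /** Returns the code-source location of a class, or a note if unavailable. */
    static String locationOf(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // Bootstrap-loaded classes (e.g. java.lang.String) may have no CodeSource.
            return src == null ? "(bootstrap class loader)" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "(not on classpath)";
        }
    }

    public static void main(String[] args) {
        // The class that throws in the trace above. Output depends on your
        // classpath: a path to a protobuf-java-2.4.x jar would explain the error.
        System.out.println(locationOf("com.google.protobuf.GeneratedMessage"));
    }
}
```

Running this inside the same JVM (or with the same classpath) as the failing Spark job shows whether the protobuf jar in use matches the version Hadoop expects.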