java.io.IOException: java.lang.reflect.InvocationTargetException

Apache's JIRA Issue Tracker | Raymond Lau | 2 years ago

    When reading a Parquet file where the original Thrift schema contains a struct with an enum, the read fails with the following error (full stack trace below):

    {code}
    java.lang.NoSuchFieldError: DECIMAL
    {code}

    Example Thrift schema:

    {code}
    enum MyEnumType {
      EnumOne,
      EnumTwo,
      EnumThree
    }

    struct MyStruct {
      1: optional MyEnumType myEnumType;
      2: optional string field2;
      3: optional string field3;
    }

    struct outerStruct {
      1: optional list<MyStruct> myStructs
    }
    {code}

    Hive table:

    {code}
    CREATE EXTERNAL TABLE mytable (
      mystructs array<struct<myenumtype: string, field2: string, field3: string>>
    )
    ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
    STORED AS
      INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat'
      OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat';
    {code}

    Error stack trace (Hive 0.12):

    {code}
    Caused by: java.lang.NoSuchFieldError: DECIMAL
    	at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
    	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
    	at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
    	at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
    	at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
    	at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
    	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
    	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
    	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
    	at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
    	... 16 more
    {code}
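    A NoSuchFieldError of this kind typically points at a classpath mismatch: the code that references the DECIMAL constant was compiled against one version of a class, while the JVM loaded an older copy without that field. Notably, the DDL above uses the standalone parquet.hive.* classes while the stack trace runs through Hive's bundled org.apache.hadoop.hive.ql.io.parquet.* classes, so two copies of the Parquet-Hive bindings may be on the classpath. As a minimal diagnostic sketch (the class and field names passed in are whatever you suspect; nothing here is from the original report), a probe like this shows which jar a class was loaded from and whether it declares the expected field:

    {code}
    import java.lang.reflect.Field;

    // Illustrative probe, not part of Hive: report where a class was loaded
    // from and whether it declares a given (public, e.g. enum) field.
    public class ClasspathProbe {
        public static void main(String[] args) throws Exception {
            String className = args[0];                        // suspected constant holder
            String fieldName = args.length > 1 ? args[1] : "DECIMAL";
            Class<?> cls = Class.forName(className);
            System.out.println(className + " loaded from "
                    + cls.getProtectionDomain().getCodeSource());
            try {
                Field f = cls.getField(fieldName);             // enum constants are public static fields
                System.out.println("declares " + fieldName + ": " + f);
            } catch (NoSuchFieldException e) {
                System.out.println("does NOT declare " + fieldName
                        + " -- the jar above is the likely source of the NoSuchFieldError");
            }
        }
    }
    {code}

    Run with the task's actual classpath (e.g. via `hadoop jar` or `java -cp`) so it resolves classes exactly as the failing map task did.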

    Related crashes:

    1. issuehub.io | 12 months ago
       java.lang.Exception: java.lang.NoSuchFieldError: com.google.android.gms.R$string.common_google_play_services_unsupported_text
    2. Tomcat not starting with 64bit java
       Stack Overflow | 6 years ago | rhinds
       java.lang.NoSuchFieldError: threadAllocatedMemorySupport

    Root Cause Analysis

    1. java.lang.NoSuchFieldError: DECIMAL

      at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter()
    2. Hive Query Language
      DataWritableReadSupport.prepareForRead
      1. org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
      2. org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
      3. org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
      4. org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
      5. org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
      6. org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
      6 frames
    3. Parquet
      ParquetRecordReader.initialize
      1. parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
      2. parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
      3. parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
      3 frames
    4. Hive Query Language
      CombineHiveRecordReader.<init>
      1. org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
      2. org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
      3. org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
      4. org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
      4 frames
    5. Java RT
      Constructor.newInstance
      1. sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      2. sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
      3. sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      4. java.lang.reflect.Constructor.newInstance(Constructor.java:526)
      4 frames
    6. Hive Shims
      HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader
      1. org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:332)
      2. org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:293)
      3. org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:407)
      3 frames
    7. Hive Query Language
      CombineHiveInputFormat.getRecordReader
      1. org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:560)
      1 frame
    8. Hadoop
      YarnChild$2.run
      1. org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:168)
      2. org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:409)
      3. org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
      4. org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
      4 frames
    9. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:415)
      2 frames
    10. Hadoop
      UserGroupInformation.doAs
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
      1 frame
    11. Hadoop
      YarnChild.main
      1. org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
      1 frame
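
The frame groups above also explain why the top-level exception reads java.io.IOException: java.lang.reflect.InvocationTargetException rather than the NoSuchFieldError itself: the Hive Shims layer constructs the record reader reflectively (the "Java RT" Constructor.newInstance frames), and reflection wraps anything the constructor throws. A minimal, self-contained sketch of that wrapping chain (class names here are stand-ins, not Hive's actual classes):

{code}
import java.io.IOException;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

public class WrappingDemo {
    // Stand-in for CombineHiveRecordReader, whose constructor hits the
    // missing DECIMAL field while building the Parquet converters.
    public static class FailingReader {
        public FailingReader() {
            throw new NoSuchFieldError("DECIMAL");
        }
    }

    public static void main(String[] args) throws Exception {
        try {
            Constructor<FailingReader> ctor = FailingReader.class.getConstructor();
            ctor.newInstance(); // same mechanism as the "Java RT" frames above
        } catch (InvocationTargetException e) {
            // The shim layer re-wraps the reflective failure, so the task log
            // shows IOException -> InvocationTargetException -> NoSuchFieldError.
            throw new IOException(e);
        }
    }
}
{code}

Running this prints "java.io.IOException: java.lang.reflect.InvocationTargetException" with "Caused by: java.lang.NoSuchFieldError: DECIMAL" at the bottom, mirroring the structure of the trace in this report.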