org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: java.io.IOException: DeprecatedParquetHiveInput : size of object differs. Value size : 23, Current Object size : 29

Apache's JIRA Issue Tracker | Sergio Peña | 2 years ago
  1.

    [Hive-dev] [jira] [Created] (HIVE-9873) Hive on MR throws DeprecatedParquetHiveInput exception - Grokbase

    grokbase.com | 2 years ago
    org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: java.io.IOException: DeprecatedParquetHiveInput : size of object differs. Value size : 23, Current Object size : 29
  2.

    The following error is thrown when information about columns is changed on {{projectionPusher.pushProjectionsAndFilters}}.

    {noformat}
    2015-02-26 15:56:40,275 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: java.io.IOException: DeprecatedParquetHiveInput : size of object differs. Value size : 23, Current Object size : 29
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:226)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:136)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
    Caused by: java.io.IOException: java.io.IOException: DeprecatedParquetHiveInput : size of object differs. Value size : 23, Current Object size : 29
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
        at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
        at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:355)
        at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:105)
        at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
        at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:224)
        ... 11 more
    Caused by: java.io.IOException: DeprecatedParquetHiveInput : size of object differs. Value size : 23, Current Object size : 29
        at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:199)
        at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:52)
        at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
        ... 15 more
    {noformat}

    The bug is in {{ParquetRecordReaderWrapper}}. We store metadata, such as the list of columns, in the {{Configuration/JobConf}}. The issue is that this metadata is incorrect until {{projectionPusher.pushProjectionsAndFilters}} is called. The current codebase does not use the configuration object returned by {{projectionPusher.pushProjectionsAndFilters}} in later code, such as the creation and initialization of {{realReader}}. As a result, Parquet is given an empty read schema and returns all nulls.
    Since the join key is null, no records are joined. (A minimal sketch of the corrected configuration handling follows this result list.)

    Apache's JIRA Issue Tracker | 2 years ago | Sergio Peña
    org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: java.io.IOException: DeprecatedParquetHiveInput : size of object differs. Value size : 23, Current Object size : 29
  3.

    6.7. Known Issues for Hive - Hortonworks Data Platform 2.1.1 for Windows

    hortonworks.com | 2 years ago
    java.io.IOException: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable
  4.

    Issue in Hive while joining two ORC partitioned managed tables and resolved after drop and reload the tables

    Stack Overflow | 2 years ago | Gopalakrishnan Veeran
    java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.RuntimeException: java.lang.ClassCastException: org.apache.hadoop.hive.common.type.HiveChar cannot be cast to java.lang.String
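
    The following Java sketch illustrates the configuration-handling pattern described in result 2 above. It is not the actual Hive source: SimpleProjectionPusher, createReader, and the property name used here are hypothetical stand-ins for ProjectionPusher, the wrapped Parquet record reader, and the real read-schema key. The point is only that the JobConf returned by pushProjectionsAndFilters must be the one used to build and initialize the downstream reader; reusing the original JobConf leaves Parquet with an empty read schema, which is what produces the all-null rows and the size mismatch above.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class ProjectionConfSketch {

        /** Hypothetical stand-in for Hive's ProjectionPusher. */
        static class SimpleProjectionPusher {
            /** Returns a NEW JobConf with the pushed-down column projection applied. */
            JobConf pushProjectionsAndFilters(JobConf conf, Path splitPath) {
                JobConf updated = new JobConf(conf);
                // Illustrative property only; the real key is managed by Hive internally.
                updated.set("example.read.schema", "id,name,amount");
                return updated;
            }
        }

        void initReader(JobConf oldJobConf, Path splitPath) {
            SimpleProjectionPusher pusher = new SimpleProjectionPusher();

            // Buggy pattern (as described in the report): the updated configuration
            // is computed but never used, so a reader built from oldJobConf sees no
            // projected columns and returns nulls for every field.
            pusher.pushProjectionsAndFilters(oldJobConf, splitPath);
            // realReader = createReader(oldJobConf, splitPath);   // wrong conf

            // Fixed pattern: keep the returned JobConf and use it for everything
            // downstream (split creation, reader construction, initialization).
            JobConf jobConf = pusher.pushProjectionsAndFilters(oldJobConf, splitPath);
            // realReader = createReader(jobConf, splitPath);      // updated conf
        }
    }

    Per the HIVE-9873 report quoted above, the actual fix belongs in {{ParquetRecordReaderWrapper}}.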

    Root Cause Analysis

    1. java.io.IOException

      DeprecatedParquetHiveInput : size of object differs. Value size : 23, Current Object size : 29

      at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next()
    2. Hive Query Language
      HiveContextAwareRecordReader.next
      1. org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:199)
      2. org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:52)
      3. org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
      4. org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:105)
      5. org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
      6. org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
      6 frames
    3. Hive Shims
      HadoopShimsSecure$CombineFileRecordReader.next
      1. org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:224)
      2. org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:136)
      2 frames
    4. Hadoop
      YarnChild$2.run
      1. org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
      2. org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185)
      3. org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
      4. org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
      5. org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
      6. org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
      6 frames
    5. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:415)
      2 frames
    6. Hadoop
      UserGroupInformation.doAs
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
      1 frame
    7. Hadoop
      YarnChild.main
      1. org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
      1 frame