java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable

Apache's JIRA Issue Tracker | Sushanth Sowmyan | 2 years ago
  1.

    Attempting to write to an HCatalog-defined table backed by the AvroSerde fails with the following stack trace:

{code}
java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
    at org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
    at org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
    at org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
    at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
{code}

    The proximal cause of the failure is that AvroContainerOutputFormat's signature mandates a LongWritable key, while HCat's FileRecordWriterContainer forces a NullWritable. I'm not sure of a general fix other than redefining HiveOutputFormat to mandate a WritableComparable. Accepting a WritableComparable is what the other Hive OutputFormats already do, and there's no reason AvroContainerOutputFormat couldn't be changed the same way, since it ignores the key; making FileRecordWriterContainer always use NullWritable could then be spun out into a separate issue. The underlying cause of the failure to write to AvroSerde tables is that AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so fixing the above will merely push the failure into the placeholder RecordWriter. (A minimal sketch of the key-type mismatch follows this entry.)

    Apache's JIRA Issue Tracker | 2 years ago | Sushanth Sowmyan
    java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable
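
    The cast in this trace is ordinary Java generics erasure at work: a writer declared against LongWritable keys is invoked through its erased interface with a NullWritable, and the compiler-generated bridge method performs the failing cast. The sketch below is a hypothetical, self-contained reproduction, not the actual Hive source; MiniRecordWriter, AvroLikeWriter, and KeyMismatchDemo are illustrative names.

{code}
// Hypothetical stand-ins for the classes named in the trace above.
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;

public class KeyMismatchDemo {

    // Stand-in for the RecordWriter contract.
    interface MiniRecordWriter<K extends Writable, V extends Writable> {
        void write(K key, V value);
    }

    // Stand-in for AvroContainerOutputFormat$1: typed to LongWritable keys
    // even though the Avro path never reads the key.
    static class AvroLikeWriter implements MiniRecordWriter<LongWritable, Writable> {
        @Override
        public void write(LongWritable key, Writable value) {
            // The key is ignored here, but the compiler's bridge method has
            // already cast the incoming argument to LongWritable.
        }
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        // Stand-in for FileRecordWriterContainer: it holds the writer through
        // the erased (raw) type and always supplies NullWritable.get().
        MiniRecordWriter raw = new AvroLikeWriter();
        raw.write(NullWritable.get(), NullWritable.get());
        // -> java.lang.ClassCastException: org.apache.hadoop.io.NullWritable
        //    cannot be cast to org.apache.hadoop.io.LongWritable
    }
}
{code}

    Running KeyMismatchDemo throws the same ClassCastException as the report, which is why widening the declared key type to WritableComparable, as the other Hive OutputFormats do, makes the cast disappear.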
  2.

    Writing to an Avro-backed HCatalog table using MapReduce

    Stack Overflow | 2 years ago
    java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable
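
    For reference, wiring a MapReduce job to write into an HCatalog table generally looks like the sketch below. This is a hedged reconstruction from the old org.apache.hcatalog API (the one in the traces above), so treat the exact signatures as assumptions; "default", "my_avro_table", and the elided mapper setup are placeholders. The key handed to the writer is ignored by HCatOutputFormat, which is precisely where the Avro-backed path trips over the LongWritable expectation.

{code}
// Hedged sketch: writing to an HCatalog table from MapReduce with the
// old org.apache.hcatalog API. Table and database names are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hcatalog.data.DefaultHCatRecord;
import org.apache.hcatalog.data.schema.HCatSchema;
import org.apache.hcatalog.mapreduce.HCatOutputFormat;
import org.apache.hcatalog.mapreduce.OutputJobInfo;

public class HCatWriteJob {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "hcat-avro-write");
        // Bind the job to the target table (null = no static partition values).
        HCatOutputFormat.setOutput(job,
            OutputJobInfo.create("default", "my_avro_table", null));
        // Reuse the table's own schema for the records we emit.
        HCatSchema schema = HCatOutputFormat.getTableSchema(job);
        HCatOutputFormat.setSchema(job, schema);
        job.setOutputFormatClass(HCatOutputFormat.class);
        job.setOutputKeyClass(WritableComparable.class); // key is ignored
        job.setOutputValueClass(DefaultHCatRecord.class);
        // Mapper class setup elided; mappers typically emit
        //   context.write(NullWritable.get(), record);
        // and with an Avro-backed table that write fails as shown above.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
{code}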
  3.

    {noformat}
SELECT SUM(L_QUANTITY),
       (SUM(L_QUANTITY) + -1.30000000000000000000E+000),
       (-2.20000000000000020000E+000 % (SUM(L_QUANTITY) + -1.30000000000000000000E+000)),
       MIN(L_EXTENDEDPRICE)
FROM lineitem_orc
WHERE ((L_EXTENDEDPRICE <= L_LINENUMBER) OR (L_TAX > L_EXTENDEDPRICE));
{noformat}

    executed over the TPC-H lineitem table at scale factor 1 GB:

{noformat}
13/06/15 11:19:17 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Logging initialized using configuration in file:/C:/Hadoop/hive-0.9.0/conf/hive-log4j.properties
Hive history file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_5292@SLAVE23-WIN_201306151119_1652846565.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201306142329_0098, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job -kill job_201306142329_0098
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-06-15 11:19:47,490 Stage-1 map = 0%, reduce = 0%
2013-06-15 11:20:29,801 Stage-1 map = 76%, reduce = 0%
2013-06-15 11:20:32,849 Stage-1 map = 0%, reduce = 0%
2013-06-15 11:20:35,880 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201306142329_0098 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
Examining task ID: task_201306142329_0098_m_000002 (and more) from job job_201306142329_0098
Task with the most failures(4):
-----
Task ID: task_201306142329_0098_m_000000
URL: http://localhost:50030/taskdetails.jsp?jobid=job_201306142329_0098&tipid=task_201306142329_0098_m_000000
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: Hive Runtime Error while closing operators
    at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:229)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:271)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
    at org.apache.hadoop.mapred.Child.main(Child.java:265)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.hive.serde2.io.DoubleWritable
    at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableDoubleObjectInspector.get(WritableDoubleObjectInspector.java:35)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:340)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:257)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:204)
    at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:245)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
    at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(VectorGroupByOperator.java:281)
    at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(VectorGroupByOperator.java:423)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:196)
    ... 8 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
{noformat}

    Similar issues seen with other queries:

{noformat}
SELECT VAR_SAMP(L_SUPPKEY),
       (VAR_SAMP(L_SUPPKEY) - -2.20000000000000020000E+000),
       (-(VAR_SAMP(L_SUPPKEY)))
FROM lineitem_orc
WHERE ((L_SUPPKEY = -1));
{noformat}

{noformat}
13/06/15 11:41:08 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Logging initialized using configuration in file:/C:/Hadoop/hive-0.9.0/conf/hive-log4j.properties
Hive history file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_6976@SLAVE23-WIN_201306151141_1255577417.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201306142329_0109, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0109
Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job -kill job_201306142329_0109
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-06-15 11:41:38,753 Stage-1 map = 0%, reduce = 0%
2013-06-15 11:42:17,959 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201306142329_0109 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0109
Examining task ID: task_201306142329_0109_m_000002 (and more) from job job_201306142329_0109
Task with the most failures(4):
-----
Task ID: task_201306142329_0109_m_000000
URL: http://localhost:50030/taskdetails.jsp?jobid=job_201306142329_0109&tipid=task_201306142329_0109_m_000000
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: Hive Runtime Error while closing operators
    at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:229)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:271)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
    at org.apache.hadoop.mapred.Child.main(Child.java:265)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to [Ljava.lang.Object;
    at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldData(StandardStructObjectInspector.java:166)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:248)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:534)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:257)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:204)
    at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:245)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
    at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(VectorGroupByOperator.java:281)
    at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(VectorGroupByOperator.java:423)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:196)
    ... 8 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
{noformat}

{noformat}
SELECT MAX(L_ORDERKEY),
       (MAX(L_ORDERKEY) + 2)
FROM lineitem_orc
WHERE ((L_ORDERKEY <= L_DISCOUNT));
{noformat}

{noformat}
13/06/15 11:55:02 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Logging initialized using configuration in file:/C:/Hadoop/hive-0.9.0/conf/hive-log4j.properties
Hive history file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_3208@SLAVE23-WIN_201306151155_919702960.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201306142329_0114, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0114
Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job -kill job_201306142329_0114
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-06-15 11:55:34,191 Stage-1 map = 0%, reduce = 0%
2013-06-15 11:56:14,464 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201306142329_0114 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0114
Examining task ID: task_201306142329_0114_m_000002 (and more) from job job_201306142329_0114
Task with the most failures(4):
-----
Task ID: task_201306142329_0114_m_000000
URL: http://localhost:50030/taskdetails.jsp?jobid=job_201306142329_0114&tipid=task_201306142329_0114_m_000000
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: Hive Runtime Error while closing operators
    at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:229)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:271)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
    at org.apache.hadoop.mapred.Child.main(Child.java:265)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableLongObjectInspector.get(WritableLongObjectInspector.java:35)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:325)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:257)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:204)
    at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:245)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
    at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(VectorGroupByOperator.java:281)
    at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(VectorGroupByOperator.java:423)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:196)
    ... 8 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
{noformat}

    Apache's JIRA Issue Tracker | 3 years ago | Tony Murphy
    java.lang.RuntimeException: Hive Runtime Error while closing operators
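
    All three traces above fail the same way: VectorGroupByOperator.flush() forwards a null aggregate into LazyBinarySerDe, which then casts a NullWritable to the column's expected writable type (DoubleWritable, Object[], or LongWritable depending on the query). A hedged workaround sketch follows: it assumes the build exposes the standard hive.vectorized.execution.enabled switch and that the row-mode operators are unaffected by the bug; the JDBC URL and credentials are placeholders.

{code}
// Hedged workaround sketch: rerun the failing aggregation with vectorized
// execution switched off for the session only. Whether this sidesteps the
// bug is an assumption, not something the report above confirms.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DisableVectorization {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver"); // HiveServer2 driver
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {
            // Fall back to the row-mode operators for this session.
            stmt.execute("set hive.vectorized.execution.enabled=false");
            stmt.execute("SELECT MAX(L_ORDERKEY), (MAX(L_ORDERKEY) + 2) "
                + "FROM lineitem_orc WHERE ((L_ORDERKEY <= L_DISCOUNT))");
        }
    }
}
{code}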

  4.

    MapReduce with Phoenix: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.NullWritable

    Stack Overflow | 12 months ago | Lucie M
    java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.NullWritable
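
    This last entry is the mirror image of the others: a writer expecting NullWritable keys receives a LongWritable, which typically means the mapper passes its TextInputFormat byte-offset key straight through to the output. The sketch below is a hedged illustration of the fix, not code from the original question; NullKeyMapper is an invented name.

{code}
// Hedged sketch of the usual cause of the reverse cast: an output format
// whose writer expects NullWritable keys is handed LongWritable (often the
// TextInputFormat byte offset passed straight through).
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class NullKeyMapper
        extends Mapper<LongWritable, Text, NullWritable, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Emit the NullWritable singleton, never the input offset. The
        // ClassCastException quoted above appears when a raw-typed mapper,
        // or a job configured with setOutputKeyClass(LongWritable.class),
        // feeds LongWritable keys to a writer that casts to NullWritable.
        context.write(NullWritable.get(), line);
    }
}
{code}

    Declaring the output key type as NullWritable in the Mapper generics lets the compiler catch the mismatch; the runtime cast only survives when raw types or an inconsistent job.setOutputKeyClass(...) hide it.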


    Root Cause Analysis

    1. java.lang.ClassCastException

      org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.io.LongWritable

      at org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write()
    2. Hive Query Language
      AvroContainerOutputFormat$1.write
      1. org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
      1 frame
    3. org.apache.hcatalog
      HCatStorer.putNext
      1. org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
      2. org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
      3. org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
      4. org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
      4 frames
    4. org.apache.pig
      PigOutputFormat$PigRecordWriter.write
      1. org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
      2. org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
      2 frames
    5. Hadoop
      TaskInputOutputContextImpl.write
      1. org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
      2. org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
      2 frames