org.apache.hadoop.hive.serde2.SerDeException: java.lang.NullPointerException

github.com | 4 months ago

  1. Serde crashes on valid JSON
    GitHub | 2 years ago | btubbs
    java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable {"requester_email": "yyy@yyy.com", "status": "closed", "via": "email", "title": "xxx@xxx has a new email address", "event_id": 2902, "tags": ["auto_reply"], "latest_comment": "TrueSwitch\n xxx@xxx has a new e-mail address\n\n Hello, I have switched my e-mail address from xxx@xxx to yyy@yyy.com. Please be sure to update your Address Book and use my new e-mail address from now on. Thank You! yyy@yyy.com \n Note: This message was sent by TrueSwitch at the request of \n yyy@yyy.com \n Try TrueSwitch next time\n you plan to switch your e-mail or Internet account:\n\n http://www.trueswitch.com", "group_name": "US Support", "assignee_id": null, "url": "https://yougov.zendesk.com/tickets/169756", "assignee_email": null, "ticket_id": 169756, "time": "2012-05-25T05:09:35+00:00", "group_id": 13200}
  2. Processing Logs in Hive – Hadoop Online Tutorials (a RegexSerDe sketch follows this list)
    hadooptutorial.info | 4 months ago
    java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable 64.242.88.10 - - [07/Mar/2014:16:05:49 -0800] "GET /twiki/bin/edit/Main/Double_bounce_sender?topicparent=Main.ConfigurationVariables HTTP/1.1" 401 12846
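
    For the log-processing result above, here is a minimal sketch of the kind of table such a tutorial sets up, assuming Hive's built-in RegexSerDe, placeholder table/column names and a placeholder HDFS location (Spark 1.x HiveContext API, the same API that appears in the trace below). The regex follows the common Apache access-log layout; adjust it to the actual log format.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object AccessLogTable {
      def main(args: Array[String]): Unit = {
        val sc   = new SparkContext(new SparkConf().setAppName("access-log-table"))
        val hive = new HiveContext(sc)

        // Hypothetical table: each capture group in input.regex maps to one
        // STRING column, in order.
        hive.sql("""
          CREATE EXTERNAL TABLE IF NOT EXISTS access_logs (
            host STRING, identity STRING, remote_user STRING, log_time STRING,
            request STRING, status STRING, size STRING)
          ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
          WITH SERDEPROPERTIES (
            "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) \\[([^\\]]*)\\] \"([^\"]*)\" ([0-9]*) ([^ ]*)"
          )
          LOCATION '/data/access_logs'
        """)

        // Lines that do not match the pattern come back as NULL columns
        // rather than failing the whole scan.
        hive.sql("SELECT host, status, request FROM access_logs LIMIT 10")
          .collect()
          .foreach(println)
      }
    }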

    Root Cause Analysis

    org.apache.hadoop.hive.serde2.SerDeException: java.lang.NullPointerException
        at org.apache.hadoop.hive.serde2.thrift.ThriftDeserializer.initialize(ThriftDeserializer.java:68)
        at org.apache.hadoop.hive.ql.plan.TableDesc.getDeserializer(TableDesc.java:80)
        at org.apache.spark.sql.hive.execution.HiveTableScan.addColumnMetadataToConf(HiveTableScan.scala:86)
        at org.apache.spark.sql.hive.execution.HiveTableScan.<init>(HiveTableScan.scala:100)
        at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:188)
        at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$14.apply(HiveStrategies.scala:188)
        at org.apache.spark.sql.SQLContext$SparkPlanner.pruneFilterProject(SQLContext.scala:364)
        at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:184)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
        at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:280)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
        at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:402)
        at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:400)
        at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:406)
        at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:406)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:406)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:59)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
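
    The frames above show the failure during query planning: HiveTableScan asks TableDesc.getDeserializer for the table's SerDe, and ThriftDeserializer.initialize hits a NullPointerException before any data is read. Below is a hedged diagnostic sketch, assuming a placeholder table name and assuming (the trace itself does not confirm this) that missing Thrift serialization properties are the trigger.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object InspectSerde {
      def main(args: Array[String]): Unit = {
        val sc   = new SparkContext(new SparkConf().setAppName("inspect-serde"))
        val hive = new HiveContext(sc)

        // Prints the "SerDe Library", input/output formats and the SerDe
        // parameters the metastore holds for the table (name is a placeholder).
        hive.sql("DESCRIBE FORMATTED my_table").collect().foreach(println)

        // Assumption: if the table is bound to the Thrift deserializer but
        // lacks the 'serialization.class' / 'serialization.format' properties,
        // supplying them is one possible fix. Class name and protocol below
        // are placeholders; the same DDL can be run from the Hive CLI instead.
        hive.sql(
          "ALTER TABLE my_table SET SERDEPROPERTIES (" +
            "'serialization.class' = 'com.example.MyThriftRecord', " +
            "'serialization.format' = 'org.apache.thrift.protocol.TBinaryProtocol')")
      }
    }

    If the underlying data is not actually Thrift-encoded, re-creating the table with the SerDe that matches the on-disk format may be the better fix than patching properties.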