java.lang.ExceptionInInitializerError

Apache's JIRA Issue Tracker | Svend Vanderveken | 2 years ago
  1. 0

    Execution of a SQL query against HDFS systematically throws a class-not-found exception on the slave nodes. (This was originally reported on the user list: http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tc10135.html)

    Sample code (run from spark-shell):

    {code}
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.createSchemaRDD

    case class Car(timestamp: Long, objectid: String, isGreen: Boolean)

    // I get the same error when pointing to the folder "hdfs://vm28:8020/test/cardata"
    val data = sc.textFile("hdfs://vm28:8020/test/cardata/part-00000")
    val cars = data.map(_.split(",")).map(ar => Car(ar(0).toLong, ar(1), ar(2).toBoolean))
    cars.registerAsTable("mcars")
    val allgreens = sqlContext.sql("SELECT objectid from mcars where isGreen = true")
    allgreens.collect.take(10).foreach(println)
    {code}

    Stack trace on the slave nodes:

    {code}
    I0716 13:01:16.215158 13631 exec.cpp:131] Version: 0.19.0
    I0716 13:01:16.219285 13656 exec.cpp:205] Executor registered on slave 20140714-142853-485682442-5050-25487-2
    14/07/16 13:01:16 INFO MesosExecutorBackend: Registered with Mesos as executor ID 20140714-142853-485682442-5050-25487-2
    14/07/16 13:01:16 INFO SecurityManager: Changing view acls to: mesos,mnubohadoop
    14/07/16 13:01:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mesos, mnubohadoop)
    14/07/16 13:01:17 INFO Slf4jLogger: Slf4jLogger started
    14/07/16 13:01:17 INFO Remoting: Starting remoting
    14/07/16 13:01:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@vm23:38230]
    14/07/16 13:01:17 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@vm23:38230]
    14/07/16 13:01:17 INFO SparkEnv: Connecting to MapOutputTracker: akka.tcp://spark@vm28:41632/user/MapOutputTracker
    14/07/16 13:01:17 INFO SparkEnv: Connecting to BlockManagerMaster: akka.tcp://spark@vm28:41632/user/BlockManagerMaster
    14/07/16 13:01:17 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140716130117-8ea0
    14/07/16 13:01:17 INFO MemoryStore: MemoryStore started with capacity 294.9 MB.
    14/07/16 13:01:17 INFO ConnectionManager: Bound socket to port 44501 with id = ConnectionManagerId(vm23-hulk-priv.mtl.mnubo.com,44501)
    14/07/16 13:01:17 INFO BlockManagerMaster: Trying to register BlockManager
    14/07/16 13:01:17 INFO BlockManagerMaster: Registered BlockManager
    14/07/16 13:01:17 INFO HttpFileServer: HTTP File server directory is /tmp/spark-ccf6f36c-2541-4a25-8fe4-bb4ba00ee633
    14/07/16 13:01:17 INFO HttpServer: Starting HTTP Server
    14/07/16 13:01:18 INFO Executor: Using REPL class URI: http://vm28:33973
    14/07/16 13:01:18 INFO Executor: Running task ID 2
    14/07/16 13:01:18 INFO HttpBroadcast: Started reading broadcast variable 0
    14/07/16 13:01:18 INFO MemoryStore: ensureFreeSpace(125590) called with curMem=0, maxMem=309225062
    14/07/16 13:01:18 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 122.6 KB, free 294.8 MB)
    14/07/16 13:01:18 INFO HttpBroadcast: Reading broadcast variable 0 took 0.294602722 s
    14/07/16 13:01:19 INFO HadoopRDD: Input split: hdfs://vm28:8020/test/cardata/part-00000:23960450+23960451
    I0716 13:01:19.905113 13657 exec.cpp:378] Executor asked to shutdown
    14/07/16 13:01:20 ERROR Executor: Exception in task ID 2
    java.lang.NoClassDefFoundError: $line11/$read$
        at $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
        at $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$1.next(Iterator.scala:853)
        at scala.collection.Iterator$$anon$1.head(Iterator.scala:840)
        at org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:181)
        at org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:176)
        at org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
        at org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        at org.apache.spark.scheduler.Task.run(Task.scala:51)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
    Caused by: java.lang.ClassNotFoundException: $line11.$read$
        at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:65)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        ... 27 more
    Caused by: java.lang.ClassNotFoundException: $line11.$read$
        at java.lang.ClassLoader.findClass(Unknown Source)
        at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.scala:26)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.scala:30)
        at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:60)
        ... 29 more
    {code}

    Note that running a simple map+reduce job on the same HDFS files with the same installation works fine:

    {code}
    // this works
    val data = sc.textFile("hdfs://vm28:8020/test/cardata/")
    val lineLengths = data.map(s => s.length)
    val totalLength = lineLengths.reduce((a, b) => a + b)
    {code}

    The HDFS files contain just plain CSV data:

    {code}
    $ hdfs dfs -tail /test/cardata/part-00000
    14/07/16 13:18:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    1396396560000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,0.0,0.0,0.0,0.0,38.24645296229051,99.41880649743675,26.619177092584696
    1396396620000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,1.3637951832478066,0.5913309707002152,56.6895043678199,96.54451566032114,100.76632815433682,92.29189473832957,7.009760456230157
    1396396680000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,-3.405565593143888,0.8104753585926928,41.677424397834905,36.57019235002255,8.974008103729105,92.94054149986701,11.673872282136195
    1396396740000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,2.6548062807597854,0.6180832371072019,40.88058181777176,24.47455760837969,37.42027121601756,93.97373842452362,16.48937328407166
    {code}

    spark-env.sh looks like this:

    {code}
    export SPARK_LOCAL_IP=vm28
    export MESOS_NATIVE_LIBRARY=/usr/local/etc/mesos-0.19.0/build/src/.libs/libmesos.so
    export SPARK_EXECUTOR_URI=hdfs://vm28:8020/apps/spark/spark-1.0.1-2.3.0-mr1-cdh5.0.2-hive.tgz
    {code}

    Apache's JIRA Issue Tracker | 2 years ago | Svend Vanderveken
    java.lang.ExceptionInInitializerError
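
    Note: the classes that cannot be found ($line11.$read$ and the other $lineN wrappers in the trace) are the wrapper classes the spark-shell REPL generates for each interpreted line; executors fetch them from the "REPL class URI" visible in the executor log. For comparison only, and not part of the report above, the same query packaged as a compiled standalone application does not depend on those REPL-generated classes. The sketch below assumes the Spark 1.0.x API used in the report; the CarQuery object name and the packaging as an application are illustrative assumptions, not the reporter's code:

    {code}
    // Hypothetical standalone equivalent of the spark-shell session above (Spark 1.0.x API).
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Top-level case class compiled into the application jar, so executors load it
    // from the jar rather than from REPL-generated $lineN classes.
    case class Car(timestamp: Long, objectid: String, isGreen: Boolean)

    object CarQuery {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("CarQuery"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.createSchemaRDD  // implicit conversion RDD[Car] -> SchemaRDD

        val data = sc.textFile("hdfs://vm28:8020/test/cardata/part-00000")
        val cars = data.map(_.split(",")).map(ar => Car(ar(0).toLong, ar(1), ar(2).toBoolean))
        cars.registerAsTable("mcars")

        val allGreens = sqlContext.sql("SELECT objectid FROM mcars WHERE isGreen = true")
        allGreens.collect().take(10).foreach(println)

        sc.stop()
      }
    }
    {code}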
  2. 0

    Android: Saving Map State in Google map

    Stack Overflow | 11 months ago | Junie Negentien
    java.lang.RuntimeException: Unable to resume activity {com.ourThesis.junieNegentien2015/com.ourThesis.junieNegentien2015.MainActivity}: java.lang.NullPointerException

    Root Cause Analysis

    1. java.lang.NullPointerException

      No message provided

      at $line3.$read$$iwC$$iwC.<init>()
    2. $line3
      $read$.<clinit>
      1. $line3.$read$$iwC$$iwC.<init>(<console>:8)
      2. $line3.$read$$iwC.<init>(<console>:14)
      3. $line3.$read.<init>(<console>:16)
      4. $line3.$read$.<init>(<console>:20)
      5. $line3.$read$.<clinit>(<console>)
      5 frames
    3. $line10
      $read$.<clinit>
      1. $line10.$read$$iwC.<init>(<console>:6)
      2. $line10.$read.<init>(<console>:26)
      3. $line10.$read$.<init>(<console>:30)
      4. $line10.$read$.<clinit>(<console>)
      4 frames
    4. $line12
      $read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply
      1. $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
      2. $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
      2 frames
    5. Scala
      Iterator$$anon$1.head
      1. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      2. scala.collection.Iterator$$anon$1.next(Iterator.scala:853)
      3. scala.collection.Iterator$$anon$1.head(Iterator.scala:840)
      3 frames
    6. Spark Project SQL
      ExistingRdd$$anonfun$productToRowRdd$1.apply
      1. org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:181)
      2. org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:176)
      2 frames
    7. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
      2. org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
      3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      5. org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      6. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      8. org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      9. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      10. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      11. org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      12. org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
      13. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      14. org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      15. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
      16. org.apache.spark.scheduler.Task.run(Task.scala:51)
      17. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
      17 frames
    8. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
      3. java.lang.Thread.run(Unknown Source)
      3 frames
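
    As background on the chain above (illustrative only, plain Scala rather than Spark REPL internals): an exception thrown while a class or object is being statically initialized, such as the NullPointerException in $line3.$read$.<clinit>, is rethrown by the JVM as java.lang.ExceptionInInitializerError the first time the class is used. A minimal, self-contained sketch of that mechanism, with hypothetical names:

    {code}
    // Illustrative only: how a NullPointerException raised during <clinit> surfaces
    // as java.lang.ExceptionInInitializerError. "Broken" and "InitFailureDemo" are
    // hypothetical names, not Spark classes.

    object Broken {
      private val s: String = null
      // Evaluated inside Broken$'s static initializer (<clinit>); throws NPE there.
      val length: Int = s.length
    }

    object InitFailureDemo {
      def main(args: Array[String]): Unit = {
        try {
          // First use of Broken triggers its static initialization.
          println(Broken.length)
        } catch {
          case e: ExceptionInInitializerError =>
            // e.getCause is the original NullPointerException from <clinit>.
            println("caught " + e + ", caused by " + e.getCause)
        }
      }
    }
    {code}

    Any later use of a class whose static initialization already failed is reported by the JVM as java.lang.NoClassDefFoundError: Could not initialize class ..., which matches the error quoted in the linked user-list thread.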