java.lang.NoClassDefFoundError: org/apache/spark/Logging

GitHub | davidbernick | 2 months ago
  1.

    Spark nightly tests failing

    GitHub | 2 months ago | davidbernick
    java.lang.NoClassDefFoundError: org/apache/spark/Logging
  2.

    Error in the new Enhanced DataFrame widget

    GitHub | 1 year ago | jarutis
    java.lang.ClassNotFoundException: notebook.front.widgets.DataFrameView$$anonfun$3
  3.

    Execution of a SQL query against HDFS systematically throws a class-not-found exception on the slave nodes. (This was originally reported on the user list: http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tc10135.html) Sample code (run from spark-shell):

    {code}
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.createSchemaRDD

    case class Car(timestamp: Long, objectid: String, isGreen: Boolean)

    // I get the same error when pointing to the folder "hdfs://vm28:8020/test/cardata"
    val data = sc.textFile("hdfs://vm28:8020/test/cardata/part-00000")
    val cars = data.map(_.split(",")).map(ar => Car(ar(0).toLong, ar(1), ar(2).toBoolean))
    cars.registerAsTable("mcars")

    val allgreens = sqlContext.sql("SELECT objectid from mcars where isGreen = true")
    allgreens.collect.take(10).foreach(println)
    {code}

    Stack trace on the slave nodes:

    {code}
    I0716 13:01:16.215158 13631 exec.cpp:131] Version: 0.19.0
    I0716 13:01:16.219285 13656 exec.cpp:205] Executor registered on slave 20140714-142853-485682442-5050-25487-2
    14/07/16 13:01:16 INFO MesosExecutorBackend: Registered with Mesos as executor ID 20140714-142853-485682442-5050-25487-2
    14/07/16 13:01:16 INFO SecurityManager: Changing view acls to: mesos,mnubohadoop
    14/07/16 13:01:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mesos, mnubohadoop)
    14/07/16 13:01:17 INFO Slf4jLogger: Slf4jLogger started
    14/07/16 13:01:17 INFO Remoting: Starting remoting
    14/07/16 13:01:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@vm23:38230]
    14/07/16 13:01:17 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@vm23:38230]
    14/07/16 13:01:17 INFO SparkEnv: Connecting to MapOutputTracker: akka.tcp://spark@vm28:41632/user/MapOutputTracker
    14/07/16 13:01:17 INFO SparkEnv: Connecting to BlockManagerMaster: akka.tcp://spark@vm28:41632/user/BlockManagerMaster
    14/07/16 13:01:17 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140716130117-8ea0
    14/07/16 13:01:17 INFO MemoryStore: MemoryStore started with capacity 294.9 MB.
    14/07/16 13:01:17 INFO ConnectionManager: Bound socket to port 44501 with id = ConnectionManagerId(vm23-hulk-priv.mtl.mnubo.com,44501)
    14/07/16 13:01:17 INFO BlockManagerMaster: Trying to register BlockManager
    14/07/16 13:01:17 INFO BlockManagerMaster: Registered BlockManager
    14/07/16 13:01:17 INFO HttpFileServer: HTTP File server directory is /tmp/spark-ccf6f36c-2541-4a25-8fe4-bb4ba00ee633
    14/07/16 13:01:17 INFO HttpServer: Starting HTTP Server
    14/07/16 13:01:18 INFO Executor: Using REPL class URI: http://vm28:33973
    14/07/16 13:01:18 INFO Executor: Running task ID 2
    14/07/16 13:01:18 INFO HttpBroadcast: Started reading broadcast variable 0
    14/07/16 13:01:18 INFO MemoryStore: ensureFreeSpace(125590) called with curMem=0, maxMem=309225062
    14/07/16 13:01:18 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 122.6 KB, free 294.8 MB)
    14/07/16 13:01:18 INFO HttpBroadcast: Reading broadcast variable 0 took 0.294602722 s
    14/07/16 13:01:19 INFO HadoopRDD: Input split: hdfs://vm28:8020/test/cardata/part-00000:23960450+23960451
    I0716 13:01:19.905113 13657 exec.cpp:378] Executor asked to shutdown
    14/07/16 13:01:20 ERROR Executor: Exception in task ID 2
    java.lang.NoClassDefFoundError: $line11/$read$
      at $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
      at $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
      at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      at scala.collection.Iterator$$anon$1.next(Iterator.scala:853)
      at scala.collection.Iterator$$anon$1.head(Iterator.scala:840)
      at org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:181)
      at org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:176)
      at org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
      at org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
      at org.apache.spark.scheduler.Task.run(Task.scala:51)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
      at java.lang.Thread.run(Unknown Source)
    Caused by: java.lang.ClassNotFoundException: $line11.$read$
      at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:65)
      at java.lang.ClassLoader.loadClass(Unknown Source)
      at java.lang.ClassLoader.loadClass(Unknown Source)
      ... 27 more
    Caused by: java.lang.ClassNotFoundException: $line11.$read$
      at java.lang.ClassLoader.findClass(Unknown Source)
      at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.scala:26)
      at java.lang.ClassLoader.loadClass(Unknown Source)
      at java.lang.ClassLoader.loadClass(Unknown Source)
      at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.scala:30)
      at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:60)
      ... 29 more
    {code}

    Note that running a simple map+reduce job on the same HDFS files with the same installation works fine:

    {code}
    // this works
    val data = sc.textFile("hdfs://vm28:8020/test/cardata/")
    val lineLengths = data.map(s => s.length)
    val totalLength = lineLengths.reduce((a, b) => a + b)
    {code}

    The HDFS files contain just plain CSV data:

    {code}
    $ hdfs dfs -tail /test/cardata/part-00000
    14/07/16 13:18:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    1396396560000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,0.0,0.0,0.0,0.0,38.24645296229051,99.41880649743675,26.619177092584696
    1396396620000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,1.3637951832478066,0.5913309707002152,56.6895043678199,96.54451566032114,100.76632815433682,92.29189473832957,7.009760456230157
    1396396680000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,-3.405565593143888,0.8104753585926928,41.677424397834905,36.57019235002255,8.974008103729105,92.94054149986701,11.673872282136195
    1396396740000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,2.6548062807597854,0.6180832371072019,40.88058181777176,24.47455760837969,37.42027121601756,93.97373842452362,16.48937328407166
    {code}

    spark-env.sh looks like this:

    {code}
    export SPARK_LOCAL_IP=vm28
    export MESOS_NATIVE_LIBRARY=/usr/local/etc/mesos-0.19.0/build/src/.libs/libmesos.so
    export SPARK_EXECUTOR_URI=hdfs://vm28:8020/apps/spark/spark-1.0.1-2.3.0-mr1-cdh5.0.2-hive.tgz
    {code}

    (A workaround sketch for this REPL class-shipping failure follows the list below.)

    Apache's JIRA Issue Tracker | 2 years ago | Svend Vanderveken
    java.lang.NoClassDefFoundError: $line11/$read$
  4.

    Running a sub-project main class

    Stack Overflow | 1 year ago | joslinm
    java.lang.ClassNotFoundException: maslow.akka.cluster.node.ClusterNode
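
    The spark-shell failure in item 3 is a REPL class-shipping problem: the shell compiles each input line into synthetic wrapper classes ($line11.$read$ presumably being the wrapper that holds the Car case class), and here the executor's ExecutorClassLoader failed to obtain them from the driver's REPL class server. A workaround that usually sidesteps this class of failure is to compile the case class into an ordinary jar and ship the jar to the executors, so nothing in the query depends on REPL-generated classes. A minimal sketch, assuming a hypothetical package example and jar name cars-model.jar:

    {code}
    // Car.scala -- compiled ahead of time into cars-model.jar instead of
    // being declared inside the REPL session (package/jar names are illustrative).
    package example

    case class Car(timestamp: Long, objectid: String, isGreen: Boolean)
    {code}

    Started with the jar on the classpath, the shell no longer has to ship a $lineNN wrapper for the case class:

    {code}
    $ spark-shell --jars cars-model.jar
    scala> import example.Car
    scala> val cars = sc.textFile("hdfs://vm28:8020/test/cardata/part-00000").map(_.split(",")).map(ar => Car(ar(0).toLong, ar(1), ar(2).toBoolean))
    {code}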


    Root Cause Analysis

    1. java.lang.ClassNotFoundException

      org.apache.spark.Logging

      at java.lang.ClassLoader.findClass()
    2. Java RT
      ClassLoader.findClass
      1. java.lang.ClassLoader.findClass(ClassLoader.java:530)
      1 frame
    3. Spark
      ParentClassLoader.findClass
      1. org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.scala:26)
      1 frame
    4. Java RT
      ClassLoader.loadClass
      1. java.lang.ClassLoader.loadClass(ClassLoader.java:424)
      1 frame
    5. Spark
      ChildFirstURLClassLoader.loadClass
      1. org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.scala:34)
      2. org.apache.spark.util.ChildFirstURLClassLoader.loadClass(MutableURLClassLoader.scala:55)
      2 frames
    6. Java RT
      ClassLoader.loadClass
      1. java.lang.ClassLoader.loadClass(ClassLoader.java:357)
      2. java.lang.ClassLoader.defineClass1(Native Method)
      3. java.lang.ClassLoader.defineClass(ClassLoader.java:763)
      4. java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
      5. java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
      6. java.net.URLClassLoader.access$100(URLClassLoader.java:73)
      7. java.net.URLClassLoader$1.run(URLClassLoader.java:368)
      8. java.net.URLClassLoader$1.run(URLClassLoader.java:362)
      9. java.security.AccessController.doPrivileged(Native Method)
      10. java.net.URLClassLoader.findClass(URLClassLoader.java:361)
      11. java.lang.ClassLoader.loadClass(ClassLoader.java:424)
      11 frames
    7. Spark
      ChildFirstURLClassLoader.loadClass
      1. org.apache.spark.util.ChildFirstURLClassLoader.loadClass(MutableURLClassLoader.scala:52)
      1 frame
    8. Java RT
      ClassLoader.loadClass
      1. java.lang.ClassLoader.loadClass(ClassLoader.java:357)
      1 frame
    9. org.bdgenomics.adam
      ADAMKryoRegistrator.registerClasses
      1. org.bdgenomics.adam.serialization.ADAMKryoRegistrator.registerClasses(ADAMKryoRegistrator.scala:85)
      1 frame
    10. org.broadinstitute.hellbender
      GATKRegistrator.registerClasses
      1. org.broadinstitute.hellbender.engine.spark.GATKRegistrator.registerClasses(GATKRegistrator.java:74)
      1 frame
    11. Spark
      KryoSerializer$$anonfun$newKryo$6.apply
      1. org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$6.apply(KryoSerializer.scala:125)
      2. org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$6.apply(KryoSerializer.scala:125)
      2 frames
    12. Scala
      ArrayOps$ofRef.foreach
      1. scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
      2. scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
      2 frames
    13. Spark
      JavaSparkContext.newAPIHadoopFile
      1. org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:125)
      2. org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:274)
      3. org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:259)
      4. org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:175)
      5. org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:233)
      6. org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:107)
      7. org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:86)
      8. org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
      9. org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:56)
      10. org.apache.spark.SparkContext.broadcast(SparkContext.scala:1370)
      11. org.apache.spark.rdd.NewHadoopRDD.<init>(NewHadoopRDD.scala:76)
      12. org.apache.spark.SparkContext$$anonfun$newAPIHadoopFile$2.apply(SparkContext.scala:1074)
      13. org.apache.spark.SparkContext$$anonfun$newAPIHadoopFile$2.apply(SparkContext.scala:1065)
      14. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      15. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
      16. org.apache.spark.SparkContext.withScope(SparkContext.scala:682)
      17. org.apache.spark.SparkContext.newAPIHadoopFile(SparkContext.scala:1065)
      18. org.apache.spark.api.java.JavaSparkContext.newAPIHadoopFile(JavaSparkContext.scala:474)
      18 frames
    14. org.broadinstitute.hellbender
      Main.main
      1. org.broadinstitute.hellbender.engine.spark.datasources.ReadsSparkSource.getParallelReads(ReadsSparkSource.java:104)
      2. org.broadinstitute.hellbender.engine.spark.GATKSparkTool.getUnfilteredReads(GATKSparkTool.java:235)
      3. org.broadinstitute.hellbender.engine.spark.GATKSparkTool.getReads(GATKSparkTool.java:209)
      4. org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark.runTool(MarkDuplicatesSpark.java:65)
      5. org.broadinstitute.hellbender.engine.spark.GATKSparkTool.runPipeline(GATKSparkTool.java:348)
      6. org.broadinstitute.hellbender.engine.spark.SparkCommandLineProgram.doWork(SparkCommandLineProgram.java:38)
      7. org.broadinstitute.hellbender.cmdline.CommandLineProgram.runTool(CommandLineProgram.java:109)
      8. org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMainPostParseArgs(CommandLineProgram.java:167)
      9. org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:186)
      10. org.broadinstitute.hellbender.Main.instanceMain(Main.java:76)
      11. org.broadinstitute.hellbender.Main.main(Main.java:92)
      11 frames
    15. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:498)
      4 frames
    16. Spark
      SparkSubmit.main
      1. org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
      2. org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
      3. org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
      4. org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
      5. org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
      5 frames
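
    Reading the grouped frames outermost-in: spark-submit launches the GATK Main (groups 14-16); newAPIHadoopFile creates a broadcast of the Hadoop configuration, which is serialized with Kryo (group 13); KryoSerializer.newKryo then invokes every registrator named in spark.kryo.registrator (groups 11-12); GATKRegistrator delegates to ADAMKryoRegistrator (groups 9-10); and registering classes forces the classloader chain (groups 1-8) to define a class that references org.apache.spark.Logging, which is absent from the runtime classpath. For orientation, a custom registrator is wired up roughly like this sketch (the package and class names below are illustrative, not taken from the trace):

    {code}
    import com.esotericsoftware.kryo.Kryo
    import org.apache.spark.SparkConf
    import org.apache.spark.serializer.KryoRegistrator

    // Analogous to ADAMKryoRegistrator/GATKRegistrator in groups 9-10 above:
    // Spark instantiates this class by name inside KryoSerializer.newKryo,
    // so every type referenced here must resolve on the executor classpath.
    class MyRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo): Unit = {
        kryo.register(classOf[Array[Byte]])
      }
    }

    val conf = new SparkConf()
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryo.registrator", "example.MyRegistrator")
    {code}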
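
    As for the missing class itself: org.apache.spark.Logging existed in Spark 1.x but was removed from that package in Spark 2.0 (the trait moved to org.apache.spark.internal), so a library jar compiled against Spark 1.x, here the ADAM serialization code pulled in by GATK, fails the moment one of its classes is defined on a 2.x cluster. A quick probe from spark-shell distinguishes the two situations; this is only a sketch using standard reflection:

    {code}
    // Does the runtime still ship the old Spark 1.x trait?
    try {
      Class.forName("org.apache.spark.Logging")
      println("org.apache.spark.Logging found: Spark 1.x-era classpath")
    } catch {
      case _: ClassNotFoundException =>
        println("org.apache.spark.Logging missing: Spark 2.x+; rebuild or upgrade " +
                "the library that was compiled against Spark 1.x")
    }
    {code}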