java.lang.NullPointerException

  1. Write textfile from spark streaming

    Stack Overflow | 6 months ago | Somasundaram Sekar
    java.lang.NullPointerException
  2. When running sc.wholeTextFiles() on a directory, I can run the command but cannot do anything with the resulting RDD: specifically, I get an unspecified py4j.protocol.Py4JJavaError. This occurs even though I can read the text file(s) individually with sc.textFile().

    Steps followed:
    1) Download Spark 1.1.0 (pre-built for Hadoop 2.4: [spark-1.1.0-bin-hadoop2.4.tgz|http://d3kbcqa49mib13.cloudfront.net/spark-1.1.0-bin-hadoop2.4.tgz])
    2) Extract into a folder at the root of the drive: D:\spark
    3) Create a test folder at D:\testdata containing one (HTML) file.
    4) Launch PySpark via bin\pyspark.
    5) Try sc.wholeTextFiles('d:/testdata'); it fails.

    Note: I followed the instructions in the upcoming O'Reilly book [Learning Spark|http://shop.oreilly.com/product/0636920028512.do]. I do not have any related tools (e.g. Hadoop) installed on the Windows machine. See the session below, with tracebacks from the errors.

    {noformat}
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /__ / .__/\_,_/_/ /_/\_\   version 1.1.0
          /_/

    Using Python version 2.7.7 (default, Jun 11 2014 10:40:02)
    SparkContext available as sc.
    >>> file = sc.textFile("d:/testdata/0000cbcc5b470ec06f212990c68c8f76e887b884")
    >>> file.count()
    732
    >>> file.first()
    u'<!DOCTYPE html>'
    >>> data = sc.wholeTextFiles('d:/testdata')
    >>> data.first()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "D:\spark\python\pyspark\rdd.py", line 1167, in first
        return self.take(1)[0]
      File "D:\spark\python\pyspark\rdd.py", line 1126, in take
        totalParts = self._jrdd.partitions().size()
      File "D:\spark\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
      File "D:\spark\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o21.partitions.
    : java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(Unknown Source)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
        at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:559)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:534)
        at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:42)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1697)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1679)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:302)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:263)
        at org.apache.spark.input.WholeTextFileInputFormat.setMaxSplitSize(WholeTextFileInputFormat.scala:54)
        at org.apache.spark.rdd.WholeTextFileRDD.getPartitions(NewHadoopRDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:50)
        at org.apache.spark.api.java.JavaPairRDD.partitions(JavaPairRDD.scala:44)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)
    >>> data.count()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "D:\spark\python\pyspark\rdd.py", line 847, in count
        return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
      File "D:\spark\python\pyspark\rdd.py", line 838, in sum
        return self.mapPartitions(lambda x: [sum(x)]).reduce(operator.add)
      File "D:\spark\python\pyspark\rdd.py", line 759, in reduce
        vals = self.mapPartitions(func).collect()
      File "D:\spark\python\pyspark\rdd.py", line 723, in collect
        bytesInJava = self._jrdd.collect().iterator()
      File "D:\spark\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
      File "D:\spark\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o28.collect.
    : java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(Unknown Source)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
        at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:559)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:534)
        at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:42)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1697)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1679)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:302)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:263)
        at org.apache.spark.input.WholeTextFileInputFormat.setMaxSplitSize(WholeTextFileInputFormat.scala:54)
        at org.apache.spark.rdd.WholeTextFileRDD.getPartitions(NewHadoopRDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:56)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:774)
        at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:305)
        at org.apache.spark.api.java.JavaRDD.collect(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)
    >>> data.map(lambda x: len(x)).take(1)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "D:\spark\python\pyspark\rdd.py", line 1126, in take
        totalParts = self._jrdd.partitions().size()
      File "D:\spark\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
      File "D:\spark\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o61.partitions.
    : java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(Unknown Source)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
        at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:559)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:534)
        at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:42)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1697)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1679)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:302)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:263)
        at org.apache.spark.input.WholeTextFileInputFormat.setMaxSplitSize(WholeTextFileInputFormat.scala:54)
        at org.apache.spark.rdd.WholeTextFileRDD.getPartitions(NewHadoopRDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:56)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:50)
        at org.apache.spark.api.java.JavaRDD.partitions(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)
    {noformat}

    Apache's JIRA Issue Tracker | 2 years ago | Michael Griffiths
    java.lang.NullPointerException

    (A hedged reproduction and workaround sketch for this Windows scenario follows this list.)
  3. Twitter Streaming no output in REPL on windows

    Stack Overflow | 2 years ago
    java.lang.NullPointerException
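
Reproduction and workaround sketch

All of the reports above end in the same java.lang.NullPointerException thrown from java.lang.ProcessBuilder.start inside org.apache.hadoop.util.Shell: Hadoop shells out to read local file permissions while listing a directory, and on Windows that shell command depends on winutils.exe being resolvable. The sketch below is a minimal, hedged illustration of the wholeTextFiles report (item 2); the paths, the HADOOP_HOME value, and the app name are illustrative assumptions, not details taken from the original posts.

    import os

    # Assumption: winutils.exe has been placed at C:\hadoop\bin\winutils.exe.
    # Setting HADOOP_HOME (or the JVM property hadoop.home.dir) before the
    # SparkContext's JVM starts is the commonly cited fix for this NPE.
    os.environ.setdefault("HADOOP_HOME", "C:\\hadoop")

    from pyspark import SparkContext

    sc = SparkContext("local[2]", "wholetextfiles-repro")

    # In the JIRA session above, reading a single file succeeded even without
    # winutils, because no per-file permission lookup was triggered.
    single = sc.textFile("d:/testdata/somefile.html")
    print(single.count())

    # wholeTextFiles() lists the directory; on a local Windows filesystem that
    # listing asks for each file's permissions via a shell call, which is where
    # the NullPointerException at ProcessBuilder.start is raised when winutils
    # cannot be resolved.
    pairs = sc.wholeTextFiles("d:/testdata")
    print(pairs.first()[0])

    sc.stop()

If the interactive bin\pyspark shell is used instead of a standalone script, HADOOP_HOME has to be set in the OS environment before the shell is launched, since the JVM is already running by the time the prompt appears.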

Users who hit this exception:

  1. rp, once, most recently 3 months ago
  2. muffinmannen, 3 times, most recently 12 months ago

18 unregistered visitors also encountered it.

Root Cause Analysis

  1. java.lang.NullPointerException

    No message provided

    at java.lang.ProcessBuilder.start()
  2. Java RT
    ProcessBuilder.start
    1. java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
    1 frame
  3. Hadoop
    FileSystem$4.next
    1. org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
    2. org.apache.hadoop.util.Shell.run(Shell.java:456)
    3. org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    4. org.apache.hadoop.util.Shell.execCommand(Shell.java:815)
    5. org.apache.hadoop.util.Shell.execCommand(Shell.java:798)
    6. org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
    7. org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:657)
    8. org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:632)
    9. org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:49)
    10. org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1729)
    11. org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1711)
    11 frames
  4. Hadoop
    FileInputFormat.getSplits
    1. org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:305)
    2. org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
    3. org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
    3 frames
  5. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:121)
    2. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    3 frames
  6. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  7. Spark
    RDD.partitions
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    1 frame
  8. Spark Project Streaming
    FileInputDStream$$anonfun$5.apply
    1. org.apache.spark.streaming.dstream.FileInputDStream$$anonfun$5.apply(FileInputDStream.scala:285)
    2. org.apache.spark.streaming.dstream.FileInputDStream$$anonfun$5.apply(FileInputDStream.scala:275)
    2 frames
  9. Scala
    AbstractTraversable.map
    1. scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    2. scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    3. scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    4. scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
    5. scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    6. scala.collection.AbstractTraversable.map(Traversable.scala:104)
    6 frames
  10. Spark Project Streaming
    DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply
    1. org.apache.spark.streaming.dstream.FileInputDStream.org$apache$spark$streaming$dstream$FileInputDStream$$filesToRDD(FileInputDStream.scala:275)
    2. org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:155)
    3. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    4. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    4 frames
  11. Scala
    DynamicVariable.withValue
    1. scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    1 frame
  12. Spark Project Streaming
    DStream$$anonfun$getOrCompute$1.apply
    1. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    2. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    3. org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
    4. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
    5. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
    5 frames
  13. Scala
    Option.orElse
    1. scala.Option.orElse(Option.scala:289)
    1 frame
  14. Spark Project Streaming
    DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply
    1. org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
    2. org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36)
    3. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    4. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    4 frames
  15. Scala
    DynamicVariable.withValue
    1. scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    1 frame
  16. Spark Project Streaming
    DStream$$anonfun$getOrCompute$1.apply
    1. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    2. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    3. org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
    4. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
    5. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
    5 frames
  17. Scala
    Option.orElse
    1. scala.Option.orElse(Option.scala:289)
    1 frame
  18. Spark Project Streaming
    DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply
    1. org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
    2. org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36)
    3. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    4. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    4 frames
  19. Scala
    DynamicVariable.withValue
    1. scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    1 frame
  20. Spark Project Streaming
    DStream$$anonfun$getOrCompute$1.apply
    1. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    2. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    3. org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
    4. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
    5. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
    5 frames
  21. Scala
    Option.orElse
    1. scala.Option.orElse(Option.scala:289)
    1 frame
  22. Spark Project Streaming
    TransformedDStream$$anonfun$6.apply
    1. org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
    2. org.apache.spark.streaming.dstream.TransformedDStream$$anonfun$6.apply(TransformedDStream.scala:42)
    3. org.apache.spark.streaming.dstream.TransformedDStream$$anonfun$6.apply(TransformedDStream.scala:42)
    3 frames
  23. Scala
    List.map
    1. scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    2. scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    3. scala.collection.immutable.List.foreach(List.scala:381)
    4. scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    5. scala.collection.immutable.List.map(List.scala:285)
    5 frames
  24. Spark Project Streaming
    DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply
    1. org.apache.spark.streaming.dstream.TransformedDStream.compute(TransformedDStream.scala:42)
    2. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    3. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    3 frames
  25. Scala
    DynamicVariable.withValue
    1. scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    1 frame
  26. Spark Project Streaming
    DStream$$anonfun$getOrCompute$1.apply
    1. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    2. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    3. org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
    4. org.apache.spark.streaming.dstream.TransformedDStream.createRDDWithLocalProperties(TransformedDStream.scala:65)
    5. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
    6. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
    6 frames
  27. Scala
    Option.orElse
    1. scala.Option.orElse(Option.scala:289)
    1 frame
  28. Spark Project Streaming
    DStreamGraph$$anonfun$1.apply
    1. org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
    2. org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:48)
    3. org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:117)
    4. org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
    4 frames
  29. Scala
    AbstractTraversable.flatMap
    1. scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    2. scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    3. scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    4. scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    5. scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    6. scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
    6 frames
  30. Spark Project Streaming
    JobGenerator$$anonfun$3.apply
    1. org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:116)
    2. org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:248)
    3. org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:246)
    3 frames
  31. Scala
    Try$.apply
    1. scala.util.Try$.apply(Try.scala:192)
    1 frame
  32. Spark Project Streaming
    JobGenerator$$anon$1.onReceive
    1. org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:246)
    2. org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:182)
    3. org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
    4. org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:87)
    4 frames
  33. Spark
    EventLoop$$anon$1.run
    1. org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    1 frame
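
The analysis above shows the NullPointerException originating in Hadoop's Shell/ProcessBuilder during a local-filesystem permission lookup (frame groups 2-3), reached from the directory listing in frame group 4, and surfacing through Spark Streaming's FileInputDStream as it scans a monitored directory each batch (frame group 8 onward). Below is a hedged sketch of a file-watching streaming job of that general shape, together with the commonly cited Windows workaround; the directory names, output prefix, and HADOOP_HOME location are illustrative assumptions, and the original posts' exact code is not reproduced here.

    import os

    # Assumption: C:\hadoop\bin\winutils.exe exists. Without a resolvable
    # winutils, Hadoop's Shell on Windows builds a command that triggers the
    # NullPointerException at ProcessBuilder.start analyzed above.
    os.environ.setdefault("HADOOP_HOME", "C:\\hadoop")

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext("local[2]", "file-stream-on-windows")
    ssc = StreamingContext(sc, batchDuration=10)

    # Each batch interval, the underlying FileInputDStream lists the monitored
    # directory (frame group 8 above); on a local Windows path that listing
    # performs the per-file permission check seen in frame group 3.
    lines = ssc.textFileStream("d:/streamdata")

    # Write each batch out as text files, in the spirit of the "Write textfile
    # from spark streaming" report; the output prefix below is an assumption.
    lines.saveAsTextFiles("d:/streamout/batch")

    ssc.start()
    ssc.awaitTermination()

An equivalent JVM-side setting is the hadoop.home.dir system property, for example passed with --driver-java-options "-Dhadoop.home.dir=C:\hadoop" when submitting the job.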