

Solutions on the web

via JIRA by Avkash Chauhan, 1 year ago
via Stack Overflow by Saulo Ricci, 2 years ago
SparkContext has been shut down across some threads; I guess that at that point the YARN resource manager had already released the application's resources. This is what I get in my Spark application log: java.lang.IllegalStateException: SparkContext has been shutdown
via GitHub by fromradio, 1 year ago
SparkContext has been shutdown
via GitHub by geoHeil, 1 year ago
SparkContext has been shutdown
via gitbooks.io by Unknown author, 1 year ago
SparkContext has been shutdown
java.lang.IllegalStateException: SparkContext has been shutdown
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1856)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1869)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1940)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:166)
	at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
	at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
	at org.apache.spark.sql.DataFrame$$anonfun$count$1.apply(DataFrame.scala:1515)
	at org.apache.spark.sql.DataFrame$$anonfun$count$1.apply(DataFrame.scala:1514)
	at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
	at org.apache.spark.sql.DataFrame.count(DataFrame.scala:1514)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
	at py4j.Gateway.invoke(Gateway.java:259)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:209)
	at java.lang.Thread.run(Thread.java:745)
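The py4j frames at the bottom of the trace show a PySpark `DataFrame.count()` call crossing into a JVM `SparkContext` that has already been stopped, for example because YARN reclaimed the application's resources before the action ran. Below is a minimal pure-Python sketch of this failure mode, runnable without a Spark installation; `FakeSparkContext` and `IllegalStateError` are hypothetical stand-ins for the real `SparkContext` and `java.lang.IllegalStateException`, not Spark APIs.

```python
class IllegalStateError(RuntimeError):
    """Stand-in for the JVM's java.lang.IllegalStateException."""


class FakeSparkContext:
    """Hypothetical stand-in mimicking SparkContext's stopped-state check."""

    def __init__(self):
        self._stopped = False

    def stop(self):
        # After this point, every job submission must fail.
        self._stopped = True

    def run_job(self):
        # The real SparkContext.runJob raises IllegalStateException when an
        # action (count, collect, ...) arrives after shutdown, which is the
        # first frame in the trace above.
        if self._stopped:
            raise IllegalStateError("SparkContext has been shutdown")
        return 0  # placeholder job result


sc = FakeSparkContext()
sc.stop()              # e.g. YARN killed the app, or cleanup code ran too early
try:
    sc.run_job()       # any action submitted after shutdown fails
except IllegalStateError as e:
    print(e)           # -> SparkContext has been shutdown
```

In a real PySpark application the usual remedy is to make sure no action can run after the context is stopped: call `sc.stop()` only in a final cleanup step, and check the YARN application logs for preemption or out-of-memory kills if the context is being shut down underneath a still-running driver.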