Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via Stack Overflow by Quentin, 1 year ago
via Google Groups by Unknown author, 1 year ago
java.nio.channels.ClosedByInterruptException: null
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)[na:1.8.0_101]
	at sun.nio.ch.FileChannelImpl.truncate(FileChannelImpl.java:372)[na:1.8.0_101]
	at org.apache.spark.storage.DiskBlockObjectWriter.revertPartialWritesAndClose(DiskBlockObjectWriter.scala:164)[spark-core_2.10-1.6.2.1.jar:1.6.2.1]
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.stop(BypassMergeSortShuffleWriter.java:226)[spark-core_2.10-1.6.2.1.jar:1.6.2.1]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)[spark-core_2.10-1.6.2.1.jar:1.6.2.1]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)[spark-core_2.10-1.6.2.1.jar:1.6.2.1]
	at org.apache.spark.scheduler.Task.run(Task.scala:89)[spark-core_2.10-1.6.2.1.jar:1.6.2.1]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)[spark-core_2.10-1.6.2.1.jar:1.6.2.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[na:1.8.0_101]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[na:1.8.0_101]
	at java.lang.Thread.run(Thread.java:745)[na:1.8.0_101]
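
What this trace means: java.nio.channels.ClosedByInterruptException is thrown when a thread is interrupted while it is inside an I/O operation on an interruptible channel; the JDK closes the channel and sets the thread's interrupt status before the exception surfaces. Here the interrupt hits FileChannelImpl.truncate while Spark's DiskBlockObjectWriter is reverting partially written shuffle output, which typically happens when the task itself is being interrupted (for example, job cancellation or a killed task), so the exception is usually a symptom of the interruption rather than the root cause. Below is a minimal, self-contained Java sketch (not Spark code; the class name, temp file, and timing values are made up for illustration) that reproduces the same exception from FileChannel.truncate:

import java.io.IOException;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ClosedByInterruptDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".bin");
        Thread worker = new Thread(() -> {
            try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
                // Keep truncating until interrupted; truncate() is an interruptible
                // channel operation, just like the call in the Spark trace above.
                while (!Thread.currentThread().isInterrupted()) {
                    ch.truncate(0);
                }
            } catch (ClosedByInterruptException e) {
                // Interrupt arrived during a channel operation: the channel was
                // closed and this exception was raised by AbstractInterruptibleChannel.end().
                System.out.println("Got: " + e);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        worker.start();
        Thread.sleep(100);   // let the worker enter the truncate loop
        worker.interrupt();  // simulate the task being interrupted mid-write
        worker.join();
        Files.deleteIfExists(tmp);
    }
}

Running this prints the ClosedByInterruptException caught in the worker thread. If you see the same exception in Spark, look first at why the task was interrupted (driver logs, cancelled stages, lost executors) rather than at the file I/O itself.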