

Solutions on the web

via spark-user by أنس الليثي, 1 year ago
via nabble.com by Unknown author, 1 year ago
via GitHub by wpoosanguansit, 1 year ago
This exception has no message.
via Stack Overflow by kareblak, 2 years ago
via Stack Overflow by Nilotpal, 1 year ago
This exception has no message.
via Google Groups by no jihun, 1 year ago
This exception has no message.
java.nio.channels.ClosedChannelException
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
	at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
	at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
	at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
	at kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerPartitionInfo.scala:49)
	at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartitionListForTopic(DefaultEventHandler.scala:188)
	at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:152)
	at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:151)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:151)
	at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:96)
	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:73)
	at kafka.producer.Producer.send(Producer.scala:77)
	at kafka.javaapi.producer.Producer.send(Producer.scala:33)
	at org.css.java.gnipStreaming.GnipSparkStreamer$1$1.call(GnipSparkStreamer.java:59)
	at org.css.java.gnipStreaming.GnipSparkStreamer$1$1.call(GnipSparkStreamer.java:51)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:225)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:225)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
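For context on the root exception: java.nio.channels.ClosedChannelException is what Java NIO throws when an I/O operation is attempted on a channel that has already been closed — in this trace, the socket inside kafka.network.BlockingChannel to a broker that dropped the connection. A minimal, self-contained sketch of the same failure mode, using a stdlib FileChannel as a stand-in for the broker socket (the class name ClosedChannelDemo is illustrative, not from the trace):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ClosedChannelDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".bin");
        FileChannel channel = FileChannel.open(tmp, StandardOpenOption.WRITE);
        // Close the channel first, the way a broker-side disconnect leaves
        // the producer's socket channel unusable.
        channel.close();
        try {
            // Any further operation on the closed channel fails.
            channel.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
        } catch (ClosedChannelException e) {
            System.out.println("caught ClosedChannelException");
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```

In the Spark job above, the producer's channel was closed between retries, so kafka.producer.SyncProducer.send hit the same exception while fetching topic metadata; checking broker reachability and the producer's connection settings is the usual starting point.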