org.apache.spark.SparkException: ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([normalized-tenant4,0]))

Google Groups | Charan Adabala | 1 year ago
This exception is missing from the Samebug knowledge base; the closest matching solutions found on the web are listed below.
  1. Couldn't find leaders for Set([TOPICNNAME,0]) when using it in Apache Spark

    Google Groups | 1 year ago | Charan Adabala
    org.apache.spark.SparkException: ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([normalized-tenant4,0]))
  2. Couldn't find leaders for Set([TOPICNNAME,0]) when using it in Apache Spark

    Stack Overflow | 1 year ago | Charan Adabala
    org.apache.spark.SparkException: ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([normalized-tenant4,0]))
  3. Spark Streaming job failing with ArrayBuffer(kafka.common.NotLeaderForPartitionException)

    Stack Overflow | 4 months ago | AKC
    org.apache.spark.SparkException: ArrayBuffer(kafka.common.NotLeaderForPartitionException, org.apache.spark.SparkException: Couldn't find leader offsets for Set([MyTopic,11]))
  4. Spark Streaming application keeps failing

    Stack Overflow | 1 year ago | Kaushal
    org.apache.spark.SparkException: ArrayBuffer(kafka.common.NotLeaderForPartitionException, kafka.common.NotLeaderForPartitionException, org.apache.spark.SparkException: Couldn't find leader offsets for Set([Test1,4], [Test2,1],[Test2,3], [Test3,4], [Test4,1], [Test2,3], [Test5,2], [Test2,0], [Test1,5], [Test2,5], [Test2,2]))
  5. A collection of errors from running Spark projects - kenandetonghua's column - blog channel - CSDN.NET

    csdn.net | 11 months ago
    org.apache.spark.SparkException: ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([rtb-collector-time-v04,0]))
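
All of these reports hit the same failure: the Kafka direct stream cannot obtain leader metadata for one or more topic partitions, which typically happens while a broker is down, a leader election is in progress, or the topic does not exist. As a minimal sketch (the broker list, app name, and batch interval are assumptions, not taken from the reports above), a Spark 1.x direct-stream job that exercises this code path looks like:

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object DirectStreamSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("direct-stream-sketch")
        val ssc  = new StreamingContext(conf, Seconds(10))

        val kafkaParams = Map(
          // Hypothetical broker list; substitute your own.
          "metadata.broker.list"      -> "broker1:9092,broker2:9092",
          // Sleep this long between leader-metadata retries (default 200 ms);
          // a larger value helps ride out broker restarts and leader elections.
          "refresh.leader.backoff.ms" -> "2000")

        // The driver resolves partition leaders for these topics on every batch;
        // that lookup is what throws "Couldn't find leaders for Set([...])".
        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, Set("normalized-tenant4"))

        stream.foreachRDD(rdd => println(s"records in batch: ${rdd.count()}"))
        ssc.start()
        ssc.awaitTermination()
      }
    }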

    Root Cause Analysis

    1. org.apache.spark.SparkException

      ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([normalized-tenant4,0]))

      at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.latestLeaderOffsets()
    2. Spark Project External Kafka
      DirectKafkaInputDStream.compute
      1. org.apache.spark.streaming.kafka.DirectKafkaInputDStream.latestLeaderOffsets(DirectKafkaInputDStream.scala:123)
      2. org.apache.spark.streaming.kafka.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:145)
      2 frames
    3. Spark Project Streaming
      DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply
      1. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
      2. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
      2 frames
    4. Scala
      DynamicVariable.withValue
      1. scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
      1 frame
    5. Spark Project Streaming
      DStream$$anonfun$getOrCompute$1.apply
      1. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
      2. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
      3. org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
      4. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344)
      5. org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342)
      5 frames
    6. Scala
      Option.orElse
      1. scala.Option.orElse(Option.scala:257)
      1 frame
    7. Spark Project Streaming
      DStreamGraph$$anonfun$1.apply
      1. org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:339)
      2. org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:38)
      3. org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:120)
      4. org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:120)
      4 frames
    8. Scala
      AbstractTraversable.flatMap
      1. scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
      2. scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
      3. scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      4. scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      5. scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
      6. scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
      6 frames
    9. Spark Project Streaming
      JobGenerator$$anonfun$2.apply
      1. org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:120)
      2. org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:247)
      3. org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:245)
      3 frames
    10. Scala
      Try$.apply
      1. scala.util.Try$.apply(Try.scala:161)
      1 frame
    11. Spark Project Streaming
      JobGenerator$$anon$1.onReceive
      1. org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:245)
      2. org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:181)
      3. org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:87)
      4. org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:86)
      4 frames
    12. Spark
      EventLoop$$anon$1.run
      1. org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
      1 frame