I'm getting java.util.NoSuchElementException: next on empty iterator while writing rows in Spark. Any idea what causes this? This is the stack output:

WARN TaskSetManager: Lost task 48.0 in stage 18.0 (TID 605, 192.168.0.1): org.apache.spark.SparkException: Task failed while writing rows.


    at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:250)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.NoSuchElementException: next on empty iterator
    at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
    at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
    at scala.collection.IndexedSeqLike$Elements.next(IndexedSeqLike.scala:64)
    at com.databricks.spark.avro.AvroOutputWriter$$anonfun$com$databricks$spark$avro$AvroOutputWriter$$createConverterToAvro$7.apply(AvroOutputWriter.scala:141)
    at com.databricks.spark.avro.AvroOutputWriter.write(AvroOutputWriter.scala:70)
    at org.apache.spark.sql.sources.OutputWriter.writeInternal(interfaces.scala:380)
    at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:242)
    ... 8 more
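The root-cause frames (scala.collection.Iterator$$anon$2.next, reached from spark-avro's row-to-Avro converter at AvroOutputWriter.scala:141) show next() being called on an iterator that has no elements. This typically means some row holds a collection or struct value with fewer elements than the converter expects, so the converter's iterator runs dry. A minimal sketch of the failure mode, and of the hasNext guard that avoids it, is below; this is plain Scala to illustrate the exception, not the spark-avro internals themselves:

```scala
// Sketch of the failure mode behind "next on empty iterator".
// Seq.empty.iterator has no elements, so next() throws
// java.util.NoSuchElementException, the root cause in the trace above.
object EmptyIteratorDemo extends App {
  val empty = Seq.empty[Int].iterator

  val thrown =
    try { empty.next(); false }
    catch { case _: java.util.NoSuchElementException => true }
  assert(thrown, "next() on an empty iterator should throw")

  // Defensive pattern: check hasNext (or use headOption on the
  // collection) before consuming, and fall back to a default.
  val it = Seq.empty[Int].iterator
  val first: Option[Int] = if (it.hasNext) Some(it.next()) else None
  assert(first.isEmpty)
  println(s"first element: $first") // prints "first element: None"
}
```

On the Spark side, a reasonable first diagnostic (an assumption, since the failing column isn't named in the trace) is to inspect rows whose array or struct columns are empty or null before writing to Avro, since those are the values most likely to starve the converter's iterator.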
