org.apache.spark.SparkException

Job aborted due to stage failure: Task 1 in stage 744.0 failed 1 times, most recent failure: Lost task 1.0 in stage 744.0 (TID 1237, localhost): java.lang.Exception: Partition[1]: FATAL ERROR for job S2V_job5364016263210767025. Job status information is available in the Vertica table public.S2V_JOB_STATUS. . Failed rows summary: FailedRowsPercent=1.0; failedRowsPercentTolerance=0.0: FAILED. NOT OK to commit rows to database. Too many rows were rejected. . Unable to create/insert into target table public.sometable


Stack trace

  • org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 744.0 failed 1 times, most recent failure: Lost task 1.0 in stage 744.0 (TID 1237, localhost): java.lang.Exception: Partition[1]: FATAL ERROR for job S2V_job5364016263210767025. Job status information is available in the Vertica table public.S2V_JOB_STATUS. . Failed rows summary: FailedRowsPercent=1.0; failedRowsPercentTolerance=0.0: FAILED. NOT OK to commit rows to database. Too many rows were rejected. . Unable to create/insert into target table public.sometable
        at com.vertica.spark.s2v.S2V.tryTofinalizeSaveToVertica(S2V.scala:746)
        at com.vertica.spark.s2v.S2V$$anonfun$2.apply(S2V.scala:226)
        at com.vertica.spark.s2v.S2V$$anonfun$2.apply(S2V.scala:128)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
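What the trace says: the failure comes from the Vertica Spark connector's save path (S2V.tryTofinalizeSaveToVertica). Every row in the partition was rejected by Vertica (FailedRowsPercent=1.0) while failedRowsPercentTolerance was 0.0, so the connector refused to commit and the write to public.sometable was aborted; per-job details are recorded in the Vertica table public.S2V_JOB_STATUS. If some rejected rows are acceptable, the tolerance can be raised through the connector's write options. The following is a minimal sketch, not the exact job that failed: host, database, credentials, and table names are placeholders, and the option key used for the tolerance ("failedrowspercenttolerance") is an assumption that may differ between connector versions.

    import org.apache.spark.sql.{SaveMode, SparkSession}

    object VerticaSaveSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("vertica-save-sketch")
          .getOrCreate()

        // Placeholder DataFrame; substitute the data that was actually being written.
        val df = spark.range(0, 100).toDF("id")

        // Connection and job options for the Vertica connector.
        // All values here are placeholders; the tolerance key name is assumed
        // and should be checked against the connector version in use.
        val opts = Map(
          "host"     -> "vertica-host",
          "db"       -> "mydb",
          "user"     -> "dbadmin",
          "password" -> "secret",
          "dbschema" -> "public",
          "table"    -> "sometable",
          // Allow up to 10% of rows to be rejected before failing the job,
          // instead of the default 0.0 that aborts on any rejected row.
          "failedrowspercenttolerance" -> "0.10"
        )

        df.write
          .format("com.vertica.spark.datasource.DefaultSource")
          .options(opts)
          .mode(SaveMode.Append)
          .save()

        spark.stop()
      }
    }

Raising the tolerance only masks the rejections. Since 100% of the rows were rejected here, it is worth first checking public.S2V_JOB_STATUS for the job and investigating why Vertica rejected every row (commonly a column type, count, or length mismatch between the DataFrame schema and the target table) before loosening the tolerance.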
