Solutions on the web

via Stack Overflow by Alex Stanovsky, 1 year ago
Status Code: 0, AWS Service: null, AWS Request ID: null, AWS Error Code: null, AWS Error Message: One or more objects could not be deleted, S3 Extended Request ID: null
via GitHub by vagnerclementino, 7 months ago
One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: BLANKED1), S3 Extended Request ID: BLANKED2=
via Stack Overflow by Darth.Vader, 1 week ago
Status Code: 0, AWS Service: null, AWS Request ID: null, AWS Error Code: null, AWS Error Message: One or more objects could not be deleted, S3 Extended Request ID: null
com.amazonaws.services.s3.model.MultiObjectDeleteException: Status Code: 0, AWS Service: null, AWS Request ID: null, AWS Error Code: null, AWS Error Message: One or more objects could not be deleted, S3 Extended Request ID: null
	at com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1745)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:687)
	at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.cleanupJob(FileOutputCommitter.java:381)
	at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:314)
	at org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitJob(WriterContainer.scala:230)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
	at org.apache.spark.sql.DataFrameWriter.orc(DataFrameWriter.scala:346)
	at com.mgmg.memengine.stream.app.persistentEventStreamBootstrap$$anonfun$setupSsc$3.apply(persistentEventStreamBootstrap.scala:122)
	at com.mgmg.memengine.stream.app.persistentEventStreamBootstrap$$anonfun$setupSsc$3.apply(persistentEventStreamBootstrap.scala:112)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
	at scala.util.Try$.apply(Try.scala:161)
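The aggregate message ("One or more objects could not be deleted", with null service, request ID, and error code) hides the real cause: the AWS SDK for Java v1 client seen in the trace (com.amazonaws.services.s3.AmazonS3Client.deleteObjects) throws a MultiObjectDeleteException that carries a per-key error list via getErrors(). A minimal sketch of reproducing a batch delete and logging those per-key failures follows; the bucket name and keys are hypothetical placeholders, not values from the trace above.

import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.MultiObjectDeleteException;
import com.amazonaws.services.s3.model.MultiObjectDeleteException.DeleteError;

public class S3BatchDeleteDiagnostics {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Hypothetical bucket and keys, standing in for the output
        // directory the committer tried to clean up.
        DeleteObjectsRequest request = new DeleteObjectsRequest("my-bucket")
                .withKeys("output/part-00000", "output/part-00001");

        try {
            s3.deleteObjects(request);
        } catch (MultiObjectDeleteException e) {
            // The batch delete is partial: some keys may have been removed,
            // and each failed key carries its own error code and message.
            List<DeleteError> errors = e.getErrors();
            for (DeleteError err : errors) {
                System.err.printf("failed key=%s code=%s message=%s%n",
                        err.getKey(), err.getCode(), err.getMessage());
            }
        }
    }
}

Logging the per-key code and message in this way usually identifies which objects failed and why, which is far more actionable than the all-null summary line at the top of the stack trace.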