java.io.IOException: FileAlreadyExistException(message:Block file is being written! userId(##) blockId(####))


Solutions on the web

via JIRA by Calvin Jia, 1 year ago
FileAlreadyExistException(message:Block file is being written! userId(##) blockId(####))
via Google Groups by max, 1 year ago
OutOfSpaceException(message:Failed to allocate space for block! blockId(22684943515648) sizeBytes(8388608))
via Google Groups by Sam Stoelinga, 5 months ago
TachyonTException(type:BLOCK_ALREADY_EXISTS, message:Temp blockId 16,777,216 is not available, because it already exists)
via JIRA by cheng chang, 1 year ago
TachyonTException(type:BLOCK_ALREADY_EXISTS, message:Temp blockId 1,275,068,416 is not available, because it is already committed)
java.io.IOException: FileAlreadyExistException(message:Block file is being written! userId(##) blockId(####))
at tachyon.worker.WorkerClient.requestBlockLocation(WorkerClient.java:378)
at tachyon.client.TachyonFS.getLocalBlockTemporaryPath(TachyonFS.java:633)
at tachyon.client.BlockOutStream.<init>(BlockOutStream.java:96)
at tachyon.client.BlockOutStream.<init>(BlockOutStream.java:65)
at tachyon.client.RemoteBlockInStream.<init>(RemoteBlockInStream.java:128)
at tachyon.client.BlockInStream.get(BlockInStream.java:62)
at tachyon.client.FileInStream.seek(FileInStream.java:157)
at tachyon.hadoop.HdfsFileInputStream.seek(HdfsFileInputStream.java:244)
at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:48)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:103)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
