Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging if you paste your entire stack trace, including the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via Google Groups by Krishnaprasad, 1 year ago
Failed to cache: Unable to request space from worker
via Google Groups by Zaicheng Wang, 1 year ago
Failed to cache: Not enough space left on worker ip-10-10-48-40.ec2.internal/10.10.48.40:29998 to store blockId 3808428037. Please consult http://www.alluxio.org/docs/1.3/en/Debugging-Guide.html for common solutions to address this problem.
via Google Groups by Tim B, 1 year ago
Failed to cache: Unable to request space from worker
via Google Groups by test520, 1 year ago
Failed to cache: alluxio.exception.BlockAlreadyExistsException: Temp blockId 16,777,216 is not available, because it is already committed
via Google Groups by Amran Chen, 1 year ago
Failed to cache: alluxio.exception.BlockAlreadyExistsException: Temp blockId 33,554,432 is not available, because it is already committed
via Google Groups by Kaiming Wan, 1 year ago
Failed to cache: /home/alluxio/ramdisk/alluxioworker/.tmp_blocks/678/5bbebd62959576a6-c000000 (Permission denied)
java.io.IOException: Unable to request space from worker
	at alluxio.client.block.LocalBlockOutStream.requestSpace(LocalBlockOutStream.java:137)
	at alluxio.client.block.LocalBlockOutStream.flush(LocalBlockOutStream.java:114)
	at alluxio.client.block.BufferedBlockOutStream.write(BufferedBlockOutStream.java:104)
	at alluxio.client.file.FileOutStream.write(FileOutStream.java:284)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
	at java.io.DataOutputStream.write(DataOutputStream.java:107)
	at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:83)
	at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:98)
	at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
	at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
	at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
	at com.flytxt.bigdata.mr.WordCount$IntSumReducer.reduce(WordCount.java:101)
	at com.flytxt.bigdata.mr.WordCount$IntSumReducer.reduce(WordCount.java:90)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
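
The trace shows a MapReduce reducer writing job output to Alluxio and failing because the local Alluxio worker cannot allocate cache space for the block. Per the Debugging Guide linked above, the usual remedies are to enlarge the worker's storage or to use a write type that falls through to the under store instead of requiring cache space. Below is a minimal driver sketch of the second approach, assuming the Alluxio 1.x Hadoop client is on the classpath (with fs.alluxio.impl set to alluxio.hadoop.FileSystem); the class name WordCountDriver, the master host, and the input/output paths are hypothetical, and the mapper/reducer setup from the original WordCount job is elided.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // CACHE_THROUGH writes synchronously to the under store and caches a
        // copy when worker space allows, so a full worker no longer fails the
        // write outright (client property from the Alluxio 1.x docs).
        conf.set("alluxio.user.file.writetype.default", "CACHE_THROUGH");

        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        // setMapperClass/setReducerClass as in the original WordCount job.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Hypothetical Alluxio URIs; substitute your master host and paths.
        FileInputFormat.addInputPath(job, new Path("alluxio://master:19998/input"));
        FileOutputFormat.setOutputPath(job, new Path("alluxio://master:19998/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

If the job must stay on a cache-only write path (MUST_CACHE), the server-side alternative is to give each worker more storage, e.g. by raising alluxio.worker.memory.size in alluxio-site.properties; see the Debugging Guide above for how to size this against your block count and block size.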