java.io.EOFException: Premature EOF: no length prefix available

Spring JIRA | Thomas Risberg | 3 years ago
Your exception is missing from the Samebug knowledge base.
Here are the best solutions we found on the Internet.
  1. regionserver crash after node decomission
     Google Groups | 3 years ago | Ian Brooks
     java.io.EOFException: Premature EOF: no length prefix available
  2. HBase logging paused for a long time then RS crashed
     Google Groups | 3 years ago | Tao Xiao
     java.io.EOFException: Premature EOF: no length prefix available
  3. Terasort fails on HDP2.0 - Hortonworks
     hortonworks.com | 2 years ago
     java.io.EOFException: Premature EOF: no length prefix available
  4. Hadoop HBase user's mailing list
     gmane.org | 1 year ago
     java.io.EOFException: Premature EOF: no length prefix available
  5. Configure hdfs-site.xml with the following properties:

     {code}
     <property>
       <name>dfs.socket.timeout</name>
       <value>20000</value>
     </property>
     <property>
       <name>dfs.datanode.socket.write.timeout</name>
       <value>20000</value>
     </property>
     {code}

     Then write to a file, idling for longer than 20000 ms so the write times out (a client-side sketch that reproduces this appears after this list). You should see something like this in the datanode logs:

     {code}
     2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
     2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
     2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782 received exception java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
     2014-10-13 14:49:33,324 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: carbon:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.0.110:56526 dest: /192.168.0.111:50010
     java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
         at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
         at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
         at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
         at java.io.DataInputStream.read(DataInputStream.java:149)
         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:435)
         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:693)
         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:569)
         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
         at java.lang.Thread.run(Thread.java:701)
     {code}

     On the client side we should see something like this:

     {code}
     java.io.EOFException: Premature EOF: no length prefix available
         at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
         at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:721)
     14:49:59,813 ERROR taskExecutor-1 output.TextFileWriter - error closing
     java.io.IOException: All datanodes 192.168.0.111:50010 are bad. Aborting...
         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1008)
         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
     {code}

    Spring JIRA | 3 years ago | Thomas Risberg
    java.io.EOFException: Premature EOF: no length prefix available
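
For reference, here is a minimal client-side sketch that idles past the configured timeout, in the spirit of the reproduction steps in entry 5. It is not the reporter's actual code: the class name, file path, and 25-second pause are assumptions, and running it requires the Hadoop client libraries plus a reachable HDFS cluster configured with the 20000 ms timeouts shown above.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PipelineTimeoutRepro {
    public static void main(String[] args) throws Exception {
        // Client-side settings matching the hdfs-site.xml values above.
        Configuration conf = new Configuration();
        conf.set("dfs.socket.timeout", "20000");
        conf.set("dfs.datanode.socket.write.timeout", "20000");

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/pipeline-timeout-test"); // hypothetical path

        FSDataOutputStream out = fs.create(file);
        out.writeBytes("first packet\n");
        out.hflush(); // push the packet through the write pipeline

        // Idle past the 20000 ms timeout: the datanode's DataXceiver times out
        // and closes its end of the pipeline, as in the datanode log above.
        Thread.sleep(25_000);

        // The ResponseProcessor's next ack read now hits
        // "Premature EOF: no length prefix available"; closing typically
        // surfaces it as "All datanodes ... are bad. Aborting...".
        out.writeBytes("second packet\n");
        out.close();
    }
}
{code}

Against a cluster configured as in entry 5, this should produce logs along the lines of the datanode and client output quoted there.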

    Root Cause Analysis

    java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:721)
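
The message comes from the client's ResponseProcessor thread: each pipeline ack it reads is a protobuf message preceded by a varint length prefix, and the exception fires when the socket yields EOF before even the first prefix byte arrives, for example because the datanode side already timed out and closed the connection. The following is a simplified, self-contained sketch of that failure mode; the helper name readVintPrefixed and the varint decoding are illustrative, not Hadoop's actual PBHelper code.

{code}
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class VintPrefixDemo {

    // Simplified stand-in for the length-prefix read that the top stack frame
    // (PBHelper.vintPrefixed) performs before parsing a pipeline ack.
    static int readVintPrefixed(InputStream in) throws IOException {
        int b = in.read();
        if (b == -1) {
            // Stream ended before any length byte arrived -- the condition
            // behind the exception in the trace above.
            throw new EOFException("Premature EOF: no length prefix available");
        }
        // Decode the rest of the protobuf base-128 varint.
        int result = b & 0x7f;
        int shift = 7;
        while ((b & 0x80) != 0) {
            b = in.read();
            if (b == -1) {
                throw new EOFException("Premature EOF while reading varint");
            }
            result |= (b & 0x7f) << shift;
            shift += 7;
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        // An empty stream models a socket that the datanode closed after its
        // own SocketTimeoutException; the ack reader finds nothing to parse.
        InputStream closedSocket = new ByteArrayInputStream(new byte[0]);
        readVintPrefixed(closedSocket); // throws the EOFException shown above
    }
}
{code}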