java.io.IOException

replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1075552574_1829939, replica=ReplicaWaitingToBeRecovered, blk_1075552574_1828818, RWR getNumBytes() = 7103 getBytesOnDisk() = 7103 getVisibleLength()= -1 getVolume() = /var/data/hadoop/hdfs/dn/current getBlockFile() = /var/data/hadoop/hdfs/dn/current/BP-133353882-127.0.1.1-1438188921629/current/rbw/blk_1075552574 unlinked=false

Solutions on the web (6,293)

  • replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073757987_17249, replica=ReplicaWaitingToBeRecovered, blk_1073757987_17179, RWR getNumBytes() = 81838954 getBytesOnDisk() = 81838954 getVisibleLength()= -1
  • via softentropy.com by Unknown author, 1 year ago
    Corrupted block: ReplicaBeingWritten, blk_1073741859_7815, RBW getNumBytes() = 910951 getBytesOnDisk() = 910951 getVisibleLength()= 910951 getVolume() = /tmp/hadoop-root/dfs/data/current getBlockFile() = /tmp/hadoop-root/dfs/data/current/BP-953099033-10.0.0.17-1409838183920/current/rbw/blk_1073741859 bytesAcked=910951 bytesOnDisk=910951
Stack trace

    java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1075552574_1829939, replica=ReplicaWaitingToBeRecovered, blk_1075552574_1828818, RWR
      getNumBytes()     = 7103
      getBytesOnDisk()  = 7103
      getVisibleLength()= -1
      getVolume()       = /var/data/hadoop/hdfs/dn/current
      getBlockFile()    = /var/data/hadoop/hdfs/dn/current/BP-133353882-127.0.1.1-1438188921629/current/rbw/blk_1075552574
      unlinked=false
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2288)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2254)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2537)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:2548)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2620)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:243)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2522)
        at java.lang.Thread.run(Thread.java:745)
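
The condition in the message is the recovery guard itself: HDFS bumps a block's generation stamp on every append and pipeline/lease recovery, so a replica whose stamp (1828818) is older than the stamp the recovery carries (1829939) is treated as stale and rejected. An RWR (ReplicaWaitingToBeRecovered) replica typically means the DataNode restarted while the block was still being written. Below is a minimal sketch of that guard, assuming simplified types; the Stamped class and checkReplicaForRecovery method are invented for illustration, and the real check lives in FsDatasetImpl#initReplicaRecovery, as the trace shows.

    // Minimal sketch (not the Hadoop source) of the generation-stamp guard
    // that produces this IOException during block recovery. The types and
    // method names below are simplified assumptions for illustration.
    import java.io.IOException;

    class GenerationStampCheck {

        /** Simplified stand-in for an HDFS block or replica with a generation stamp. */
        static class Stamped {
            final long blockId;
            final long generationStamp;

            Stamped(long blockId, long generationStamp) {
                this.blockId = blockId;
                this.generationStamp = generationStamp;
            }

            @Override
            public String toString() {
                return "blk_" + blockId + "_" + generationStamp;
            }
        }

        /**
         * Recovery refuses a replica whose generation stamp is older than the
         * stamp the recovery was initiated with: the replica may have missed
         * a later append or pipeline recovery and cannot be trusted.
         */
        static void checkReplicaForRecovery(Stamped replica, Stamped block)
                throws IOException {
            if (replica.generationStamp < block.generationStamp) {
                throw new IOException(
                    "replica.getGenerationStamp() < block.getGenerationStamp(), block="
                    + block + ", replica=" + replica);
            }
        }

        public static void main(String[] args) throws IOException {
            // Reproduces the shape of the failure above: the on-disk replica
            // carries stamp 1828818, while recovery expects 1829939.
            Stamped replica = new Stamped(1075552574L, 1828818L);
            Stamped block = new Stamped(1075552574L, 1829939L);
            checkReplicaForRecovery(replica, block); // throws java.io.IOException
        }
    }

Running the sketch with the stamps from this trace throws the same java.io.IOException message, which is why recovery for blk_1075552574 cannot proceed from this replica.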
