org.apache.cassandra.io.sstable.CorruptSSTableException: org.apache.cassandra.io.compress.CorruptBlockException: (/ssd/cassandra/data/diadoc_letter_meta/metas_by_letter/diadoc_letter_meta-metas_by_letter-jb-3662-Data.db): corruption detected, chunk at 41506107 of length 31152.

DataStax JIRA | Andrew Kostousov | 2 years ago
  1. 0

    Recently we encountered the following situation in production. A client tool reading data from a column family failed with System.TimeoutException at a time when there was no significant load on the cluster. We were able to reproduce the error several times by running the same tool on different machines at different times. The client error was:
    {code}
    INFO 2014-11-12 17:26:50,217 System.TimeoutException: The task didn't complete before timeout.
    at Cassandra.TaskHelper.WaitToComplete[T](Task`1 task, Int32 timeout) in c:\projects\diadoc\dev\external\datastax-csharp-driver\src\Cassandra\TaskHelper.cs:line 137
    at Cassandra.RequestHandler`1.<FillRowSet>b__0(Byte[] pagingState) in c:\projects\diadoc\dev\external\datastax-csharp-driver\src\Cassandra\RequestHandler.cs:line 110
    at Cassandra.RowSet.PageNext() in c:\projects\diadoc\dev\external\datastax-csharp-driver\src\Cassandra\RowPopulators\RowSet.cs:line 186
    at Cassandra.RowSet.IsExhausted() in c:\projects\diadoc\dev\external\datastax-csharp-driver\src\Cassandra\RowPopulators\RowSet.cs:line 80
    at Cassandra.RowSet.<GetEnumerator>d__1.MoveNext() in c:\projects\diadoc\dev\external\datastax-csharp-driver\src\Cassandra\RowPopulators\RowSet.cs:line 145
    {code}
    At the time of the client error, one of the Cassandra nodes in the cluster logged the following:
    {code}
    ERROR [ReadStage:24022] 2014-11-12 17:26:39,981 CassandraDaemon.java (line 217) Exception in thread Thread[ReadStage:24022,5,main]
    org.apache.cassandra.io.sstable.CorruptSSTableException: org.apache.cassandra.io.compress.CorruptBlockException: (/ssd/cassandra/data/diadoc_letter_meta/metas_by_letter/diadoc_letter_meta-metas_by_letter-jb-3662-Data.db): corruption detected, chunk at 41506107 of length 31152.
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:91)
    at org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:326)
    at java.io.RandomAccessFile.readFully(RandomAccessFile.java:444)
    at java.io.RandomAccessFile.readFully(RandomAccessFile.java:424)
    at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:351)
    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
    at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
    at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
    at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:129)
    at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
    at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
    at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
    at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
    at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1594)
    at org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1590)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1750)
    at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1709)
    at org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:119)
    at org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
    Caused by: org.apache.cassandra.io.compress.CorruptBlockException: (/ssd/cassandra/data/diadoc_letter_meta/metas_by_letter/diadoc_letter_meta-metas_by_letter-jb-3662-Data.db): corruption detected, chunk at 41506107 of length 31152.
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:122)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:87)
    ... 39 more
    Caused by: java.io.IOException: net.jpountz.lz4.LZ4Exception: Error decoding offset 19143 of input buffer
    at org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:89)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:118)
    ... 40 more
    Caused by: net.jpountz.lz4.LZ4Exception: Error decoding offset 19143 of input buffer
    at net.jpountz.lz4.LZ4JNIFastDecompressor.decompress(LZ4JNIFastDecompressor.java:33)
    at org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:84)
    ... 41 more
    {code}
    The client was reading with CL=QUORUM, and the target data is stored with RF=3. The client was configured as follows:
    {code}
    cluster = Cluster.Builder()
        .AddContactPoints(settings.ContactEndPoints)
        .WithSocketOptions(new SocketOptions().SetTcpNoDelay(true))
        .WithDefaultKeyspace(keyspaceName)
        .WithQueryTimeout(10000) // <- 10 sec client timeout
        .WithCompression(CompressionType.LZ4)
        .WithRetryPolicy(new DefaultRetryPolicy())
        .WithLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()))
        .WithReconnectionPolicy(new ExponentialReconnectionPolicy((long)TimeSpan.FromSeconds(1).TotalMilliseconds, (long)TimeSpan.FromMinutes(10).TotalMilliseconds))
        .Build();
    {code}
    Note that TokenAwarePolicy did not have any effect, since we do not yet set routing keys on any statements. After we repaired that node using the sstablescrub tool and the nodetool repair command, the error went away. That is why I suspect there is some connection between the client timeout error in the driver and the corrupted SSTable on the server.

    Server Environment:
    Cassandra ReleaseVersion: 2.0.11
    CentOS Linux release 7.0.1406 (Core)
    OpenJDK Runtime Environment (rhel-2.5.3.1.el7_0-x86_64 u71-b14)
    OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)

    Client Environment:
    Windows Server 2012 R2 Standard x64
    Cassandra C# driver built from git revision 930b8d04fbd7ac46a188ca2d596c55f6ed3a4318 (Fix keyspace race bug CSHARP-175)
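    A side note on the consistency settings described above: with RF=3, a QUORUM read needs floor(3/2) + 1 = 2 replica responses, so a single node with a corrupt SSTable should not, by itself, be enough to fail the read. A minimal, illustrative sketch of the quorum arithmetic (not Cassandra code):

    ```python
    def quorum(replication_factor: int) -> int:
        """Replicas that must respond for a QUORUM read or write."""
        return replication_factor // 2 + 1

    for rf in (1, 2, 3, 5):
        print(f"RF={rf}: quorum={quorum(rf)}")  # RF=3 -> quorum=2
    ```

    With quorum=2 out of 3 replicas, the two healthy nodes could in principle still satisfy the read, which is part of why the corrupt node producing a client-visible timeout is surprising.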

    DataStax JIRA | 2 years ago | Andrew Kostousov
    org.apache.cassandra.io.sstable.CorruptSSTableException: org.apache.cassandra.io.compress.CorruptBlockException: (/ssd/cassandra/data/diadoc_letter_meta/metas_by_letter/diadoc_letter_meta-metas_by_letter-jb-3662-Data.db): corruption detected, chunk at 41506107 of length 31152.
  3. 0

    Cassandra 2.2 - Streaming errors at repair/bootstrap/decommission

    Server Fault | 4 months ago | Greg M.
    java.io.IOException: net.jpountz.lz4.LZ4Exception: Error decoding offset 33063 of input buffer

  5. 0

    Spark job failing in YARN mode

    Stack Overflow | 1 month ago | Paul Trehiou
    java.io.IOException: Stream is corrupted
  6. 0

    Fail to decompress a stream

    GitHub | 1 month ago | davies
    java.io.IOException: Stream is corrupted


    Root Cause Analysis

    1. net.jpountz.lz4.LZ4Exception

      Error decoding offset 19143 of input buffer

      at net.jpountz.lz4.LZ4JNIFastDecompressor.decompress()
    2. LZ4 and xxHash
      LZ4JNIFastDecompressor.decompress
      1. net.jpountz.lz4.LZ4JNIFastDecompressor.decompress(LZ4JNIFastDecompressor.java:33)
      1 frame
    3. org.apache.cassandra
      RandomAccessReader.read
      1. org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:84)
      2. org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:118)
      3. org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:87)
      4. org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:326)
      4 frames
    4. Java RT
      RandomAccessFile.readFully
      1. java.io.RandomAccessFile.readFully(RandomAccessFile.java:444)
      2. java.io.RandomAccessFile.readFully(RandomAccessFile.java:424)
      2 frames
    5. org.apache.cassandra
      RandomAccessReader.readBytes
      1. org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:351)
      1 frame
    6. Apache Cassandra
      ByteBufferUtil.readWithLength
      1. org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
      2. org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
      2 frames
    7. org.apache.cassandra
      Column$1.computeNext
      1. org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
      2. org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
      3. org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
      4. org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
      4 frames
    8. Guava
      AbstractIterator.hasNext
      1. com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
      2. com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
      2 frames
    9. org.apache.cassandra
      QueryFilter$2.hasNext
      1. org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:129)
      2. org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
      3. org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
      3 frames
    10. Apache Cassandra
      MergeIterator$OneToOne.computeNext
      1. org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
      1 frame
    11. Guava
      AbstractIterator.hasNext
      1. com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
      2. com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
      2 frames
    12. org.apache.cassandra
      RowIteratorFactory$2.getReduced
      1. org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
      2. org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
      3. org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
      4. org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
      5. org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
      5 frames
    13. Apache Cassandra
      MergeIterator$ManyToOne.computeNext
      1. org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
      2. org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
      2 frames
    14. Guava
      AbstractIterator.hasNext
      1. com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
      2. com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
      2 frames
    15. org.apache.cassandra
      ColumnFamilyStore$9.computeNext
      1. org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1594)
      2. org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1590)
      2 frames
    16. Guava
      AbstractIterator.hasNext
      1. com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
      2. com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
      2 frames
    17. org.apache.cassandra
      MessageDeliveryTask.run
      1. org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1750)
      2. org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1709)
      3. org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:119)
      4. org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
      5. org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
      5 frames
    18. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
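    The innermost frames above show the LZ4 decompressor rejecting a damaged compressed chunk. The failure mode can be reproduced in miniature with Python's stdlib zlib module standing in for LZ4 (which has no stdlib binding); the point it illustrates is that block compressors only detect corruption when the chunk is decompressed, which is exactly when the read path hits it:

    ```python
    import zlib

    # Compress a chunk of data, much as Cassandra compresses each SSTable chunk.
    original = b"some column family data " * 200
    chunk = bytearray(zlib.compress(original))

    # Simulate on-disk corruption: flip a couple of bytes mid-chunk.
    mid = len(chunk) // 2
    chunk[mid] ^= 0xFF
    chunk[mid + 1] ^= 0xFF

    # Decompression now fails, analogous to LZ4Exception:
    # "Error decoding offset ... of input buffer".
    try:
        zlib.decompress(bytes(chunk))
        print("chunk decompressed cleanly")
    except zlib.error as exc:
        print("corruption detected:", exc)
    ```

    This is also why sstablescrub plus nodetool repair resolves the issue: scrub drops the chunks that no longer decompress, and repair restores the lost data from the other replicas.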