com.datastax.driver.core.exceptions.WriteFailureException: Cassandra failure during write query at consistency LOCAL_QUORUM (1 responses were required but only 0 replica responded, 1 failed)

DataStax JIRA | Alexander Kovalenko | 5 months ago
  1.

    I use Cassandra 2.2.5, spark-cassandra-connector_2.10 1.6.0 and Spark 1.6.0 in standalone mode. When my application tries to store data to C*, I see this error in the application logs:

        ERROR [com.datastax.spark.connector.writer.QueryExecutor] Failed to execute: com.datastax.spark.connector.writer.RichBoundStatement@654ac23c
        com.datastax.driver.core.exceptions.WriteFailureException: Cassandra failure during write query at consistency LOCAL_QUORUM (1 responses were required but only 0 replica responded, 1 failed)
            at com.datastax.driver.core.exceptions.WriteFailureException.copy(WriteFailureException.java:91)
            at com.datastax.driver.core.Responses$Error.asException(Responses.java:126)
            at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
            at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
            at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
            at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
            at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
            at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
            at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
            at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
            at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
            at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
            at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
            at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
            at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
            at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
            at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:831)
            at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:346)
            at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:254)
            at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
            at java.lang.Thread.run(Thread.java:745)
        Caused by: com.datastax.driver.core.exceptions.WriteFailureException: Cassandra failure during write query at consistency LOCAL_QUORUM (1 responses were required but only 0 replica responded, 1 failed)
            at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:73)
            at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
            at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:266)
            at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:246)
            at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
            ... 11 more

        14:05:16,933 ERROR [org.apache.spark.executor.Executor] Exception in task 96.0 in stage 33.0 (TID 197)
        java.io.IOException: Failed to write statements to contact_activity_spark.contact_email_activity_groups.
            at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:166)
            at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:134)
            at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
            at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
            at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
            at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
            at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:134)
            at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
            at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
            at org.apache.spark.scheduler.Task.run(Task.scala:89)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)

    In /var/log/cassandra/system.log I see the following lines:

        ERROR [SharedPool-Worker-2] 2016-07-20 13:53:37,864 StorageProxy.java:1115 - Failed to apply mutation locally : Mutation of 20887362 bytes is too large for the maxiumum size of 16777216

    In /etc/cassandra/cassandra.yaml:

        commitlog_segment_size_in_mb: 32

    This is the table I want to write data to:

        cqlsh> DESCRIBE contact_activity_spark.contact_email_activity_groups ;

        CREATE TABLE contact_activity_spark.contact_email_activity_groups (
            run_ts timeuuid,
            oid int,
            gt int,
            contact_ids set<int>,
            count int,
            PRIMARY KEY ((run_ts, oid), gt)
        ) WITH CLUSTERING ORDER BY (gt ASC)
            AND bloom_filter_fp_chance = 0.01
            AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
            AND comment = ''
            AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
            AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
            AND dclocal_read_repair_chance = 0.1
            AND default_time_to_live = 0
            AND gc_grace_seconds = 864000
            AND max_index_interval = 2048
            AND memtable_flush_period_in_ms = 0
            AND min_index_interval = 128
            AND read_repair_chance = 0.0
            AND speculative_retry = '99.0PERCENTILE';

    According to the logs, my application wants to save a set of 485748 integers to the contact_ids column. This is the DataFrame I want to save; count holds the size of the collection in the contact_ids column:

        +--------------------+---+---+------+--------------------+
        |              run_ts|oid| gt| count|         contact_ids|
        +--------------------+---+---+------+--------------------+
        |fb6b7800-4e85-11e...|  1|  1|     4|[1463763, 1954941...|
        |fb6b7800-4e85-11e...|  1|  2|     1|           [1853477]|
        |fb6b7800-4e85-11e...|  1|  3|     2|  [1563323, 1563339]|
        |fb6b7800-4e85-11e...|  1|  5|     2|  [1736805, 1736802]|
        |fb6b7800-4e85-11e...|  1|  6|485748|[1463766, 1463767...|
        +--------------------+---+---+------+--------------------+
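
    The two log excerpts line up if Cassandra is enforcing its usual limit of half a commit log segment per mutation: with commitlog_segment_size_in_mb: 32 the largest single mutation would be 16 MiB (16777216 bytes), while the row carrying 485748 contact_ids serializes to 20887362 bytes, which would explain both the StorageProxy error on the replica and the "1 failed" replica in the driver message. A minimal sketch of that arithmetic, assuming the default half-segment limit (the object name is made up):

        object MutationSizeCheck extends App {
          // From /etc/cassandra/cassandra.yaml quoted above.
          val commitlogSegmentSizeInMb = 32
          // Assumed Cassandra default: one mutation may use at most half a commit log segment.
          val maxMutationSizeBytes = commitlogSegmentSizeInMb.toLong * 1024 * 1024 / 2
          // From the StorageProxy error in /var/log/cassandra/system.log.
          val mutationSizeBytes = 20887362L

          println(s"limit    = $maxMutationSizeBytes bytes")                 // 16777216
          println(s"mutation = $mutationSizeBytes bytes")                    // 20887362
          println(s"too big  = ${mutationSizeBytes > maxMutationSizeBytes}") // true
        }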

    DataStax JIRA | 5 months ago | Alexander Kovalenko
    com.datastax.driver.core.exceptions.WriteFailureException: Cassandra failure during write query at consistency LOCAL_QUORUM (1 responses were required but only 0 replica responded, 1 failed)
  2.

    Cassandra failure during write query at consistency LOCAL_QUORUM

    Stack Overflow | 6 months ago | Raghavan
    com.datastax.driver.core.exceptions.WriteFailureException: Cassandra failure during write query at consistency LOCAL_QUORUM (2 responses were required but only 0 replica responded, 1 failed)

    Root Cause Analysis

    1. com.datastax.driver.core.exceptions.WriteFailureException

      Cassandra failure during write query at consistency LOCAL_QUORUM (1 responses were required but only 0 replica responded, 1 failed)

      at com.datastax.driver.core.Responses$Error$1.decode()
    2. DataStax Java Driver for Apache Cassandra - Core
      Message$ProtocolDecoder.decode
      1. com.datastax.driver.core.Responses$Error$1.decode(Responses.java:73)
      2. com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
      3. com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:266)
      4. com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:246)
      4 frames
    3. Netty
      SingleThreadEventExecutor$2.run
      1. io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
      2. io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
      3. io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
      4. io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
      5. io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
      6. io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
      7. io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
      8. io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:831)
      9. io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:346)
      10. io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:254)
      11. io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
      11 frames
    4. Java RT
      Thread.run
      1. java.lang.Thread.run(Thread.java:745)
      1 frame
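
    For context, a minimal sketch (not the reporter's code; the connection host and sample data are assumptions) of the kind of write that exercises the spark-cassandra-connector path seen in the executor stack trace of the first report (RDDFunctions.saveToCassandra -> TableWriter.write -> QueryExecutor), using the connector 1.6.0 RDD API:

        import org.apache.spark.{SparkConf, SparkContext}
        import com.datastax.spark.connector._
        import com.datastax.spark.connector.writer.WriteConf
        import com.datastax.driver.core.ConsistencyLevel
        import com.datastax.driver.core.utils.UUIDs

        object ContactActivityWriteSketch {
          def main(args: Array[String]): Unit = {
            val conf = new SparkConf()
              .setAppName("contact-activity-write-sketch")          // name is made up
              .set("spark.cassandra.connection.host", "127.0.0.1")  // assumed host
            val sc = new SparkContext(conf)

            // One small row shaped like the DataFrame above:
            // (run_ts, oid, gt, contact_ids, count).
            val rows = sc.parallelize(Seq(
              (UUIDs.timeBased(), 1, 1, Set(1463763, 1954941), 2)
            ))

            // saveToCassandra goes through RDDFunctions.saveToCassandra and
            // TableWriter.write, the frames seen in the executor stack trace;
            // LOCAL_QUORUM is spelled out to match the error above.
            rows.saveToCassandra(
              "contact_activity_spark",
              "contact_email_activity_groups",
              SomeColumns("run_ts", "oid", "gt", "contact_ids", "count"),
              WriteConf(consistencyLevel = ConsistencyLevel.LOCAL_QUORUM))

            sc.stop()
          }
        }

    Writing the 485748-element row through the same call would, under the settings quoted in the first report, exceed the 16777216-byte mutation limit and surface as the WriteFailureException traced above.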