java.io.IOException: Failed to write statements to contact_activity_spark.contact_email_activity_groups.

DataStax JIRA | Alexander Kovalenko | 5 months ago
  1.

    I use Cassandra 2.2.5, spark-cassandra-connector_2.10 version 1.6.0, and Spark 1.6.0 in standalone mode. When my application tries to store data to C*, I see this error in the application logs:

    ERROR [com.datastax.spark.connector.writer.QueryExecutor] Failed to execute: com.datastax.spark.connector.writer.RichBoundStatement@654ac23c
    com.datastax.driver.core.exceptions.WriteFailureException: Cassandra failure during write query at consistency LOCAL_QUORUM (1 responses were required but only 0 replica responded, 1 failed)
      at com.datastax.driver.core.exceptions.WriteFailureException.copy(WriteFailureException.java:91)
      at com.datastax.driver.core.Responses$Error.asException(Responses.java:126)
      at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
      at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
      at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
      at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
      at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
      at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
      at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
      at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
      at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
      at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
      at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
      at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
      at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:831)
      at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:346)
      at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:254)
      at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
      at java.lang.Thread.run(Thread.java:745)
    Caused by: com.datastax.driver.core.exceptions.WriteFailureException: Cassandra failure during write query at consistency LOCAL_QUORUM (1 responses were required but only 0 replica responded, 1 failed)
      at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:73)
      at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
      at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:266)
      at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:246)
      at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
      ... 11 more

    14:05:16,933 ERROR [org.apache.spark.executor.Executor] Exception in task 96.0 in stage 33.0 (TID 197)
    java.io.IOException: Failed to write statements to contact_activity_spark.contact_email_activity_groups.
      at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:166)
      at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:134)
      at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
      at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
      at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
      at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
      at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:134)
      at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
      at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      at org.apache.spark.scheduler.Task.run(Task.scala:89)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)

    In /var/log/cassandra/system.log I see the following line:

    ERROR [SharedPool-Worker-2] 2016-07-20 13:53:37,864 StorageProxy.java:1115 - Failed to apply mutation locally : Mutation of 20887362 bytes is too large for the maxiumum size of 16777216

    In /etc/cassandra/cassandra.yaml:

    commitlog_segment_size_in_mb: 32

    This is the table I want to write data to:

    cqlsh> DESCRIBE contact_activity_spark.contact_email_activity_groups;

    CREATE TABLE contact_activity_spark.contact_email_activity_groups (
        run_ts timeuuid,
        oid int,
        gt int,
        contact_ids set<int>,
        count int,
        PRIMARY KEY ((run_ts, oid), gt)
    ) WITH CLUSTERING ORDER BY (gt ASC)
        AND bloom_filter_fp_chance = 0.01
        AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
        AND comment = ''
        AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
        AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
        AND dclocal_read_repair_chance = 0.1
        AND default_time_to_live = 0
        AND gc_grace_seconds = 864000
        AND max_index_interval = 2048
        AND memtable_flush_period_in_ms = 0
        AND min_index_interval = 128
        AND read_repair_chance = 0.0
        AND speculative_retry = '99.0PERCENTILE';

    According to the logs, my application wants to save a set of 485748 integers to the contact_ids column. This is the DataFrame I want to save; here count contains the size of the collection in the contact_ids column:

    +--------------------+---+---+------+--------------------+
    |              run_ts|oid| gt| count|         contact_ids|
    +--------------------+---+---+------+--------------------+
    |fb6b7800-4e85-11e...|  1|  1|     4|[1463763, 1954941...|
    |fb6b7800-4e85-11e...|  1|  2|     1|           [1853477]|
    |fb6b7800-4e85-11e...|  1|  3|     2|  [1563323, 1563339]|
    |fb6b7800-4e85-11e...|  1|  5|     2|  [1736805, 1736802]|
    |fb6b7800-4e85-11e...|  1|  6|485748|[1463766, 1463767...|
    +--------------------+---+---+------+--------------------+
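    The system.log line explains the failure: the gt=6 row serializes its 485748-element contact_ids set into a single ~20 MB mutation (20887362 bytes), which exceeds Cassandra's maximum mutation size of 16777216 bytes (half of commitlog_segment_size_in_mb, here 32 MB). Either raise commitlog_segment_size_in_mb so half of it exceeds the largest mutation, or split the oversized collection across several rows. A minimal sketch of the splitting approach, assuming a hypothetical bucket column added to the partition key (e.g. PRIMARY KEY ((run_ts, oid, bucket), gt)) and an assumed bucket size of 50,000 ids per row:

    ```java
    import java.util.*;

    public class SplitContacts {
        // Assumption: ~50k ints per row keeps each mutation well under the
        // 16 MiB default maximum mutation size (commitlog_segment_size_in_mb / 2).
        static final int BUCKET_SIZE = 50_000;

        // Split an oversized contact_ids set into (bucket, chunk) pairs; each
        // pair is written as its own row, so no single mutation carries the
        // whole collection. Sorting first makes bucket contents deterministic.
        static List<Map.Entry<Integer, Set<Integer>>> split(Set<Integer> contactIds) {
            List<Integer> sorted = new ArrayList<>(contactIds);
            Collections.sort(sorted);
            List<Map.Entry<Integer, Set<Integer>>> buckets = new ArrayList<>();
            for (int i = 0; i < sorted.size(); i += BUCKET_SIZE) {
                Set<Integer> chunk = new HashSet<>(
                    sorted.subList(i, Math.min(i + BUCKET_SIZE, sorted.size())));
                buckets.add(Map.entry(i / BUCKET_SIZE, chunk));
            }
            return buckets;
        }
    }
    ```

    Readers then reassemble the set by querying all buckets for a given (run_ts, oid). This trades one oversized partition for a few small ones, which also avoids the large-partition pressure that a 20 MB set column puts on compaction and repair.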

    DataStax JIRA | 5 months ago | Alexander Kovalenko
    java.io.IOException: Failed to write statements to contact_activity_spark.contact_email_activity_groups.
  2.

    Failed to write statements

    Stack Overflow | 2 years ago | Amine CHERIFI
    java.io.IOException: Failed to write statements to KeySpace.MyTable. at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:145)
  3.

    java.io.IOException: Failed to write statements to keyspacename.tablename

    Stack Overflow | 1 year ago | user3376961
    java.io.IOException: Failed to write statements to keyspacename.tablename.
  4.

    SaveToCassandra error

    GitHub | 1 year ago | jadelaop
    java.io.IOException: Failed to write statements to keyspace.table.
  5.

    Re: Execption writing on two cassandra tables NoHostAvailableException: All host(s) tried for query failed (no host was tried)

    spark-user | 2 years ago | Antonio Giambanco
    java.io.IOException: Failed to prepare statement INSERT INTO "cassandrasink"."transaction" ("event_id", "isin", "security_type", "security_name", "date", "time", "price", "currency", "user_id", "quantity", "amount", "session_id") VALUES (:"event_id", :"isin", :"security_type", :"security_name", :"date", :"time", :"price", :"currency", :"user_id", :"quantity", :"amount", :"session_id"): All host(s) tried for query failed (no host was tried)
      at com.datastax.spark.connector.writer.TableWriter.com$datastax$spark$connector$writer$TableWriter$$prepareStatement(TableWriter.scala:96)

    Root Cause Analysis

    1. java.io.IOException

      Failed to write statements to contact_activity_spark.contact_email_activity_groups.

      at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply()
    2. spark-cassandra-connector
      RDDFunctions$$anonfun$saveToCassandra$1.apply
      1. com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:166)
      2. com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:134)
      3. com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
      4. com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
      5. com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
      6. com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
      7. com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:134)
      8. com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
      9. com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
      9 frames
    3. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      2. org.apache.spark.scheduler.Task.run(Task.scala:89)
      3. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      3 frames