org.apache.kafka.connect.errors.ConnectException: Error creating writer for log file s3a://s3 bucket/streamx/logs/streamx/0/log**

GitHub | a2mehta | 8 months ago
Similar exceptions:

  1. org.apache.hadoop.fs.s3a.S3AFileSystem.append not supported
     GitHub | 8 months ago | a2mehta
     org.apache.kafka.connect.errors.ConnectException: Error creating writer for log file s3a://s3 bucket/streamx/logs/streamx/0/log**
  2. Connector can't renew Kerberos ticket properly
     GitHub | 10 months ago | AlexPiermatteo
     org.apache.kafka.connect.errors.ConnectException: java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "myhostname1/IP"; destination host is: "namenode_hostname":8020;
  3. Exceptions when network is broken
     GitHub | 4 months ago | skyahead
     org.apache.kafka.connect.errors.ConnectException: Cannot acquire lease after timeout, will retry.

Root Cause Analysis

org.apache.kafka.connect.errors.ConnectException: Error creating writer for log file s3a://s3 bucket/streamx/logs/streamx/0/log**
    at io.confluent.connect.hdfs.wal.FSWAL.acquireLease(FSWAL.java:91)
    at io.confluent.connect.hdfs.wal.FSWAL.apply(FSWAL.java:105)
    at io.confluent.connect.hdfs.TopicPartitionWriter.applyWAL(TopicPartitionWriter.java:441)
    at io.confluent.connect.hdfs.TopicPartitionWriter.recover(TopicPartitionWriter.java:197)
    at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:227)
    at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:234)
    at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:90)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:280)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:176)
    at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
    at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
    at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
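
The trace shows the failure happening during write-ahead log recovery: HdfsSinkTask.put triggers TopicPartitionWriter.recover, which replays the WAL and fails in FSWAL.acquireLease while creating a writer for the log file on s3a://. The first similar report above (S3AFileSystem.append not supported) suggests the likely cause: the connector's WAL relies on filesystem append support, and Hadoop's S3A filesystem does not implement append(). Below is a minimal sketch to confirm that behavior on a given setup; it assumes hadoop-aws and valid S3 credentials are on the classpath, and the bucket and key names are placeholders.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical bucket/key; run with hadoop-aws and S3 credentials configured.
    public class S3aAppendCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
            try {
                // S3A is expected to reject this, since append() is not implemented.
                fs.append(new Path("s3a://my-bucket/streamx/logs/streamx/0/log"));
                System.out.println("append() succeeded; this filesystem supports append");
            } catch (UnsupportedOperationException e) {
                System.out.println("append() not supported: " + e.getMessage());
            } finally {
                fs.close();
            }
        }
    }

If append is indeed unsupported, the usual direction is to keep the connector's WAL/log directory on a filesystem that supports append (for example, real HDFS) rather than on S3, or to use a sink designed for object stores; the exact configuration depends on the StreamX/HDFS connector version in use.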