org.apache.kafka.connect.errors.ConnectException: Error creating writer for log file s3a://s3 bucket/streamx/logs/streamx/0/log**

GitHub | a2mehta | 5 months ago
  1. org.apache.hadoop.fs.s3a.S3AFileSystem.append not supported

     GitHub | 5 months ago | a2mehta
     org.apache.kafka.connect.errors.ConnectException: Error creating writer for log file s3a://s3 bucket/streamx/logs/streamx/0/log**
  2. Connector can't properly renew Kerberos ticket

     GitHub | 7 months ago | AlexPiermatteo
     org.apache.kafka.connect.errors.ConnectException: java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "myhostname1/IP"; destination host is: "namenode_hostname":8020;
  3. Exceptions when network is broken

     GitHub | 2 months ago | skyahead
     org.apache.kafka.connect.errors.ConnectException: Cannot acquire lease after timeout, will retry.
  4. Copying connect-mongodb-1.0.jar vs. connect-mongodb-1.0-jar-with-dependencies.jar to $CONFLUENT_HOME/share/java/confluent-common - different issues

     GitHub | 4 months ago | camelia-c
     org.apache.kafka.connect.errors.ConnectException: Task not found: mongodb-sink-connector-0
  5. GitHub comment 142#255692272

     GitHub | 2 months ago | lakeofsand
     org.apache.kafka.connect.errors.ConnectException: Error closing hdfs://192.168.101.55:8020/logs/*****/1/log


    Root Cause Analysis

    org.apache.kafka.connect.errors.ConnectException: Error creating writer for log file s3a://s3 bucket/streamx/logs/streamx/0/log**
        at io.confluent.connect.hdfs.wal.FSWAL.acquireLease(FSWAL.java:91)
        at io.confluent.connect.hdfs.wal.FSWAL.apply(FSWAL.java:105)
        at io.confluent.connect.hdfs.TopicPartitionWriter.applyWAL(TopicPartitionWriter.java:441)
        at io.confluent.connect.hdfs.TopicPartitionWriter.recover(TopicPartitionWriter.java:197)
        at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:227)
        at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:234)
        at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:90)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:280)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:176)
        at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
        at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
        at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
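    The trace shows the connector's write-ahead log (FSWAL) failing while acquiring a lease on an s3a:// path. As the first related issue notes, S3AFileSystem does not implement append(), and the WAL's lease mechanism depends on append semantics, so pointing the WAL at S3 fails at FSWAL.acquireLease. One commonly suggested workaround is to write data to S3 while keeping the WAL on an append-capable HDFS filesystem. The fragment below is a hedged sketch only: the property names (store.url, hdfs.url, logs.dir) and the connector class are assumptions drawn from the Confluent HDFS connector and streamx documentation, and should be verified against your connector version.

        # Sketch of a connector config, under the assumptions stated above
        name=streamx-s3-sink
        connector.class=com.qubole.streamx.s3.S3SinkConnector   # assumed class name
        topics=streamx
        # Data files land in S3 (no append needed for closed data files)
        store.url=s3a://my-bucket                               # assumed property
        # The WAL needs append(), which s3a lacks, so keep it on HDFS
        hdfs.url=hdfs://namenode:8020                           # assumed property
        logs.dir=/streamx/logs

    If no HDFS cluster is available, the alternative is a connector build whose WAL implementation does not require append; check the streamx issue tracker for the current status of that option.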