org.apache.kafka.connect.errors.ConnectException

Error closing hdfs://VM03cent7:8020/tmp/TRAFFIC_DATA_P/0/log

Solutions on the web (21)

  • via Google Groups by Shannon Ma, 9 months ago
    Error closing hdfs://VM03cent7:8020/tmp/TRAFFIC_DATA_P/0/log
  • via GitHub by lakeofsand, 8 months ago
    Error closing hdfs://192.168.101.55:8020/logs/*****/1/log
  • via GitHub by krisskross, 10 months ago
    Error creating writer for log file hdfs://hadoop-master03/tmp/kafka-connect/logs/actions_order/89/log
Stack trace

    org.apache.kafka.connect.errors.ConnectException: Error closing hdfs://VM03cent7:8020/tmp/TRAFFIC_DATA_P/0/log
        at io.confluent.connect.hdfs.wal.FSWAL.close(FSWAL.java:156)
        at io.confluent.connect.hdfs.TopicPartitionWriter.close(TopicPartitionWriter.java:326)
        at io.confluent.connect.hdfs.DataWriter.close(DataWriter.java:296)
        at io.confluent.connect.hdfs.HdfsSinkTask.close(HdfsSinkTask.java:109)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:301)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:432)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1100(WorkerSinkTask.java:54)
        at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsRevoked(WorkerSinkTask.java:476)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinPrepare(ConsumerCoordinator.java:283)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:212)
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:316)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:222)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.util.concurrent.FutureTask.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
    Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.200.5.162:50010], original=[10.200.5.162:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1040)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1106)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1253)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:594)
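
    The Caused by is the real failure: while closing the connector's write-ahead log, the HDFS client tried to recover the write pipeline, but no replacement datanode was available (current and original are the same single node, 10.200.5.162:50010), so the DEFAULT replace-datanode-on-failure policy aborted the write. On clusters with fewer than three datanodes that policy can never succeed, and the exception message itself names the knob to change. A minimal sketch of the client-side hdfs-site.xml override follows, assuming a small (1-2 datanode) cluster; the property names are the documented HDFS client settings, and NEVER is one of the documented values (DEFAULT/ALWAYS/NEVER):

    <!-- Sketch: relax datanode replacement for a small cluster where no
         substitute node exists. NEVER skips pipeline replacement entirely,
         trading write resilience for availability; use with care. -->
    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
      <value>NEVER</value>
    </property>

    For the Kafka Connect HDFS sink, this file must be visible to the connector's HDFS client (e.g. via the directory its hadoop.conf.dir setting points at), since the policy is evaluated on the client side, not on the namenode.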
