
Solutions on the web

via GitHub by vikeshkhanna, 1 year ago
Error creating writer for log file hdfs://bikeshed//kafka_connect/logs/goscribe.mp-hash_events/48/log

via GitHub by a2mehta, 1 year ago
Error creating writer for log file s3a://s3 bucket/streamx/logs/streamx/0/log

via GitHub by skyahead, 1 year ago
Error creating writer for log file hdfs://allinone2:54310/tianjil/logs/xxxx/3/log

via GitHub by Perdjesk, 8 months ago
Error creating writer for log file hdfs://hadoop//srv/prod/blu/connect-data/logs/topicname/6/log

via Google Groups by Basti, 1 year ago
Error creating writer for log file hdfs://10.42.0.86:9000/logs/testApp/0/log

via GitHub by krisskross, 1 year ago
Error creating writer for log file hdfs://hadoop-master03/tmp/kafka-connect/logs/actions_order/89/log
java.io.EOFException
	at java.io.DataInputStream.readFully(DataInputStream.java:197)
	at java.io.DataInputStream.readFully(DataInputStream.java:169)
	at io.confluent.connect.hdfs.wal.WALFile$Reader.init(WALFile.java:590)
	at io.confluent.connect.hdfs.wal.WALFile$Reader.initialize(WALFile.java:558)
	at io.confluent.connect.hdfs.wal.WALFile$Reader.<init>(WALFile.java:535)
	at io.confluent.connect.hdfs.wal.WALFile$Writer.<init>(WALFile.java:214)
	at io.confluent.connect.hdfs.wal.WALFile.createWriter(WALFile.java:67)
	at io.confluent.connect.hdfs.wal.FSWAL.acquireLease(FSWAL.java:73)
	at io.confluent.connect.hdfs.wal.FSWAL.apply(FSWAL.java:105)
	at io.confluent.connect.hdfs.TopicPartitionWriter.applyWAL(TopicPartitionWriter.java:448)
	at io.confluent.connect.hdfs.TopicPartitionWriter.recover(TopicPartitionWriter.java:204)
	at io.confluent.connect.hdfs.DataWriter.recover(DataWriter.java:236)
	at io.confluent.connect.hdfs.DataWriter.onPartitionsAssigned(DataWriter.java:299)
	at io.confluent.connect.hdfs.HdfsSinkTask.onPartitionsAssigned(HdfsSinkTask.java:103)
	at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsAssigned(WorkerSinkTask.java:362)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:194)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:225)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
	at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:171)
	at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
	at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
	at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)
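The trace shows the EOFException originating in DataInputStream.readFully inside the connector's WAL reader (WALFile$Reader.init): readFully demands a fixed number of bytes, and if the write-ahead log file on HDFS is shorter than expected (typically truncated by an unclean worker shutdown), it throws EOFException rather than returning partial data. A minimal, self-contained sketch of that failure mode (the 8-byte header size here is hypothetical, not the connector's real WAL format):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class TruncatedWalDemo {
    public static void main(String[] args) throws IOException {
        // Simulate a truncated WAL file: only 3 bytes where a
        // fixed-length 8-byte header is expected by the reader.
        byte[] truncated = {0x01, 0x02, 0x03};
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(truncated))) {
            byte[] header = new byte[8];
            // readFully throws EOFException when the stream ends
            // before the buffer is completely filled.
            in.readFully(header);
        } catch (EOFException e) {
            System.out.println("EOFException: WAL file shorter than expected header");
        }
    }
}
```

A workaround often reported for this connector is to delete the truncated `.../logs/<topic>/<partition>/log` file on HDFS (e.g. with `hdfs dfs -rm`) so the task can recreate the WAL on the next rebalance; confirm the committed offsets are intact before doing so.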