java.lang.reflect.InvocationTargetException

This exception has no message.
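That is expected: java.lang.reflect.InvocationTargetException is only a wrapper that reflection uses to hold whatever exception the invoked method actually threw, and the real error is retrieved via getCause(). A minimal, self-contained sketch of unwrapping it (the failing method here is a hypothetical stand-in, not Flume code):

    import java.io.IOException;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;

    public class UnwrapExample {
        // Hypothetical stand-in for a method that fails when invoked reflectively.
        public static void failingCall() throws IOException {
            throw new IOException("simulated datanode pipeline failure");
        }

        public static void main(String[] args) throws Exception {
            Method m = UnwrapExample.class.getMethod("failingCall");
            try {
                m.invoke(null); // static method, so no receiver object is needed
            } catch (InvocationTargetException e) {
                // The wrapper itself carries no message; the invoked method's
                // exception is attached as the cause. Log that, not the wrapper.
                Throwable cause = e.getCause();
                System.err.println("Real failure: " + cause);
            }
        }
    }

This is why the stack trace below only becomes informative at its "Caused by:" section.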


Solutions on the web (23,838)

  • via Terracotta by johannr, 11 months ago
    This exception has no message.
  • via Terracotta by pgovindraj, 11 months ago
    This exception has no message.
  • via Terracotta by bradleyw, 11 months ago
    This exception has no message.
  • Stack trace

    java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.zu.flume.sink.hdfssink.AbstractHDFSWriter.getNumCurrentReplicas(Unknown Source)
        at com.zu.flume.sink.hdfssink.AbstractHDFSWriter.isUnderReplicated(Unknown Source)
        at com.zu.flume.sink.hdfssink.BucketWriter.shouldRotate(Unknown Source)
        at com.zu.flume.sink.hdfssink.BucketWriter.append(Unknown Source)
        at com.zu.flume.sink.hdfssink.HDFSEventSink.process(Unknown Source)
        at org.apache.flume.sink.FailoverSinkProcessor.process(FailoverSinkProcessor.java:182)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.6.25.184:50010,DS-a67e1753-7160-4c87-8a87-330732f6ac30,DISK], DatanodeInfoWithStorage[10.6.25.189:50010,DS-f42cdbb3-7981-4630-8b98-0ac04bdf92a2,DISK]], original=[DatanodeInfoWithStorage[10.6.25.184:50010,DS-a67e1753-7160-4c87-8a87-330732f6ac30,DISK], DatanodeInfoWithStorage[10.6.25.189:50010,DS-f42cdbb3-7981-4630-8b98-0ac04bdf92a2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1162)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1228)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1375)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1119)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:622)
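    The "Caused by:" section is the actionable part: HDFS pipeline recovery could not find a replacement datanode, which typically happens on small clusters where the replication factor equals (or nearly equals) the number of live datanodes. The message itself names the client-side setting. A minimal sketch of relaxing it programmatically, assuming a stock Hadoop 2.x client; a Flume agent would normally pick the same properties up from the hdfs-site.xml on its classpath, and NEVER trades write safety (temporarily under-replicated blocks) for availability, so validate the choice against your cluster:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class PipelineRecoveryConfig {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // DEFAULT tries to replace a failed datanode mid-write; on a small
                // cluster there may be no spare node to pick, which produces the
                // IOException above. NEVER keeps writing on the surviving pipeline.
                conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
                // Leave the feature enabled so the policy above is what governs
                // behavior; setting this to false disables replacement entirely.
                conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);

                // Demo write using the adjusted client configuration (path is arbitrary).
                try (FileSystem fs = FileSystem.get(conf)) {
                    fs.create(new Path("/tmp/pipeline-config-demo")).close();
                }
            }
        }

    The equivalent hdfs-site.xml entries use the same two property names. On clusters with enough datanodes to actually supply a replacement, the DEFAULT policy is usually the safer choice, since it restores full pipeline redundancy during the write.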
