java.lang.reflect.InvocationTargetException

There are no available Samebug tips for this exception.

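Until a tip is written, one general pointer: InvocationTargetException is only a reflective wrapper thrown by Method.invoke; the actual failure raised inside the invoked code is attached as its cause, so the traces below are best read from their "Caused by" section. Below is a minimal, self-contained sketch of unwrapping it (the class and method names are invented for illustration):

    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;

    public class UnwrapExample {

        public static void failingTarget() {
            // Stand-in for whatever the reflective call actually does.
            throw new IllegalStateException("simulated failure inside the invoked method");
        }

        public static void main(String[] args) throws Exception {
            Method target = UnwrapExample.class.getMethod("failingTarget");
            try {
                target.invoke(null); // static method, so no receiver instance is needed
            } catch (InvocationTargetException e) {
                // The wrapper itself says little; the real error is the cause.
                Throwable real = e.getCause(); // equivalent to e.getTargetException()
                System.err.println("Underlying problem: " + real);
            }
        }
    }
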
  • Hadoop indexer path issue
    by Unknown author,
  • Hey, I've seen that this error has been around for some time, and I hope this description is complete and helpful for reproducing and fixing it.
    System: a logback.xml with a file appender configured with prudent=true; the log path points to a volume with little available space.
    Scenario: start writing to the log file. As soon as the space is depleted, errors start happening:
        15:44:40,595 |-ERROR in c.q.l.c.recovery.ResilientFileOutputStream@1944673755 - IO failure while writing to file [/Volumes/TESTVOL/logs/my-log.2015-02-01.log]
        java.io.IOException: No space left on device
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:345)
            at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
            at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
            at ch.qos.logback.core.recovery.ResilientOutputStreamBase.flush(ResilientOutputStreamBase.java:79)
            [...]
        15:44:51,064 |-INFO in c.q.l.c.recovery.ResilientFileOutputStream@1944673755 - Attempting to recover from IO failure on file [/Volumes/TESTVOL/logs/my-log.2015-02-01.log]
        15:44:51,064 |-INFO in c.q.l.c.recovery.ResilientFileOutputStream@1944673755 - Recovered from IO failure on file [/Volumes/TESTVOL/logs/my-log.2015-02-01.log]
        15:44:51,064 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[MY_LOG] - IO failure in appender
        java.nio.channels.ClosedChannelException
            at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
    and then:
        15:44:51,069 |-WARN in ch.qos.logback.core.rolling.RollingFileAppender[MY_LOG] - Attempted to append to non started appender [MY_LOG].
    Debugging: I investigated this issue and found the culprit to be line 204 in FileAppender:
        finally {
            if (fileLock != null) {
    --->        fileLock.release();
            }
            [...]
    The problem is that when the original IOException was thrown, the channel was closed as part of the attemptRecovery method in ResilientOutputStreamBase. release() throws a ClosedChannelException if the file channel is closed; the appender is then set to started=false in the OutputStreamAppender subAppend method and stays that way until restarted.
    Fix suggestion: the easy fix here is changing the guard of the release:
        finally {
            if (fileLock != null && fileChannel.isOpen()) {
                fileLock.release();
            }
            [...]
    This prevents the release from throwing the exception. For now, an easy mitigation (if possible) is to set prudent=false. Hope this helps and the bug will be fixed. (A standalone sketch of this guard follows this report.)
    by Nadav Wexler,
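    The guard suggested above can be sketched outside of logback as well. The following is a minimal, standalone illustration, not logback's actual FileAppender code; the file name and surrounding structure are invented for the example, and the null check simply mirrors the report's guard:
        import java.io.IOException;
        import java.nio.channels.ClosedChannelException;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class GuardedLockRelease {
            public static void main(String[] args) throws IOException {
                // "demo.lock" is an arbitrary file name for this illustration.
                try (FileChannel channel = FileChannel.open(Paths.get("demo.lock"),
                        StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                    FileLock lock = channel.lock();
                    try {
                        // ... write through the channel; an I/O failure and the
                        // recovery attempt may close the channel underneath us ...
                    } finally {
                        // release() throws ClosedChannelException once the channel
                        // is closed, so check isOpen() first and stay defensive.
                        if (lock != null && channel.isOpen()) {
                            try {
                                lock.release();
                            } catch (ClosedChannelException ignored) {
                                // Closing the channel already invalidated the lock.
                            }
                        }
                    }
                }
            }
        }
    Closing a channel releases any locks acquired through it, so skipping the release on an already-closed channel loses nothing.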
  • I ran a build in debug mode ({{mvn -X}}) and noticed the following bug (repeatedly):
    {noformat}
        [DEBUG] Error releasing shared lock for resolution tracking file: C:\Users\awhitford\.m2\repository\org\apache\maven\plugins\maven-war-plugin\resolver-status.properties
        java.nio.channels.ClosedChannelException
            at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)
            at org.apache.maven.repository.legacy.DefaultUpdateCheckManager.read(DefaultUpdateCheckManager.java:396)
            at org.apache.maven.repository.legacy.DefaultUpdateCheckManager.readLastUpdated(DefaultUpdateCheckManager.java:323)
            at org.apache.maven.repository.legacy.DefaultUpdateCheckManager.readLastUpdated(DefaultUpdateCheckManager.java:159)
            at org.apache.maven.repository.legacy.DefaultUpdateCheckManager.isUpdateRequired(DefaultUpdateCheckManager.java:148)
            at org.apache.maven.artifact.repository.metadata.DefaultRepositoryMetadataManager.resolve(DefaultRepositoryMetadataManager.java:127)
            at org.apache.maven.project.artifact.MavenMetadataSource.retrieveAvailableVersions(MavenMetadataSource.java:435)
            at org.apache.maven.project.artifact.MavenMetadataSource.retrieveAvailableVersions(MavenMetadataSource.java:425)
            at org.codehaus.mojo.versions.api.DefaultVersionsHelper.lookupArtifactVersions(DefaultVersionsHelper.java:229)
            at org.codehaus.mojo.versions.api.DefaultVersionsHelper.lookupPluginUpdates(DefaultVersionsHelper.java:727)
            at org.codehaus.mojo.versions.api.DefaultVersionsHelper.lookupPluginsUpdates(DefaultVersionsHelper.java:706)
            at org.codehaus.mojo.versions.PluginUpdatesReport.doGenerateReport(PluginUpdatesReport.java:103)
            at org.codehaus.mojo.versions.AbstractVersionsReport.executeReport(AbstractVersionsReport.java:253)
            at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:90)
            at org.apache.maven.plugins.site.ReportDocumentRenderer.renderDocument(ReportDocumentRenderer.java:228)
            at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.renderModule(DefaultSiteRenderer.java:319)
            at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.render(DefaultSiteRenderer.java:135)
            at org.apache.maven.plugins.site.SiteMojo.renderLocale(SiteMojo.java:175)
            at org.apache.maven.plugins.site.SiteMojo.execute(SiteMojo.java:138)
            at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:133)
            at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
            at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
            at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
            at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:108)
            at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:76)
            at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
            at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:116)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:361)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
            at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
            at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
            at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
            at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
            at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
    {noformat}
    I traced the bug to the {{DefaultUpdateCheckManager.read}} method. [Line 396|http://maven.apache.org/ref/3.2.1/apidocs/src-html/org/apache/maven/repository/legacy/DefaultUpdateCheckManager.html#line.396] is releasing a FileLock, but the {{FileInputStream}} has already been closed by [Line 381|http://maven.apache.org/ref/3.2.1/apidocs/src-html/org/apache/maven/repository/legacy/DefaultUpdateCheckManager.html#line.381]. (A short sketch of the release-before-close ordering follows this report.)
    by Anthony Whitford,
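    The pattern behind this report generalizes: a FileLock has to be released while the channel that produced it is still open, i.e. before the owning stream is closed. Below is a minimal sketch of that ordering, using an invented helper class rather than Maven's actual DefaultUpdateCheckManager code:
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;

        public class ReadUnderSharedLock {

            // Invented helper: read a file while holding a shared lock, releasing
            // the lock strictly before the stream (and its channel) is closed.
            static void readLocked(String path) throws IOException {
                try (FileInputStream in = new FileInputStream(path)) {
                    FileChannel channel = in.getChannel();
                    FileLock lock = channel.lock(0L, Long.MAX_VALUE, true); // shared (read) lock
                    try {
                        // ... read and parse the stream here ...
                    } finally {
                        // Releasing after in.close() would hit a closed channel and
                        // throw ClosedChannelException, as in the trace above.
                        lock.release();
                    }
                } // try-with-resources closes the stream only after the lock is gone
            }

            public static void main(String[] args) throws IOException {
                readLocked(args[0]); // path of the file to read under a shared lock
            }
        }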
  • GUI, Commands Scripts
    via GitHub by ChristianGruen,
    • java.lang.reflect.InvocationTargetException
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[?:1.8.0_91]
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[?:1.8.0_91]
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[?:1.8.0_91]
          at java.lang.reflect.Method.invoke(Method.java:498)[?:1.8.0_91]
          at com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler.stop(Lifecycle.java:337)[java-util-0.27.4.jar:?]
          at com.metamx.common.lifecycle.Lifecycle.stop(Lifecycle.java:261)[java-util-0.27.4.jar:?]
          at io.druid.cli.CliPeon$2.run(CliPeon.java:241)[druid-services-0.8.3.jar:0.8.3]
          at java.lang.Thread.run(Thread.java:745)[?:1.8.0_91]
      Caused by: java.nio.channels.ClosedChannelException
          at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)[?:1.8.0_91]
          at io.druid.indexing.worker.executor.ExecutorLifecycle.stop(ExecutorLifecycle.java:220)[druid-indexing-service-0.8.3.jar:0.8.3]
          ... 8 more

    Users with the same issue

    Adarro, 1832 times