java.io.IOException: input buffer is closed

Stack Overflow | Vijay_Shinde | 4 months ago
  1. 0

    Unpacking .tgz file in Java - Issue with Apache Compress

    Stack Overflow | 2 years ago | user2183985
    java.io.IOException: Error detected parsing the header
  2. 0

    With version 1.8 of commons-compress it is no longer possible to decompress files from an archive if the archive contains entries whose username and/or usergroup is null (or empty?). With version 1.7 this still worked; now I get this exception:

    {code}
    java.io.IOException: Error detected parsing the header
        at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:249)
        at TestBed.AppTest.extractNoFileOwner(AppTest.java:30)
    Caused by: java.lang.IllegalArgumentException: Invalid byte 32 at offset 7 in ' {NUL}' len=8
        at org.apache.commons.compress.archivers.tar.TarUtils.parseOctal(TarUtils.java:134)
        at org.apache.commons.compress.archivers.tar.TarUtils.parseOctalOrBinary(TarUtils.java:173)
        at org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:953)
        at org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:940)
        at org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:324)
        at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:247)
        ... 27 more
    {code}

    This exception leads me to suspect that the regression was introduced with the fix for COMPRESS-262, which has a nearly identical exception attached. Some test code you can run to verify it:

    {code}
    package TestBed;

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;

    import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
    import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
    import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
    import org.junit.Test;

    /**
     * Unit test for simple App.
     */
    public class AppTest {

        @Test
        public void extractNoFileOwner() {
            TarArchiveInputStream tarInputStream = null;
            try {
                tarInputStream = new TarArchiveInputStream(
                    new GzipCompressorInputStream(
                        new FileInputStream(
                            new File( "/home/pknobel/redis-dist-2.8.3_1-linux.tar.gz" ) ) ) );

                TarArchiveEntry entry;
                while ( ( entry = tarInputStream.getNextTarEntry() ) != null ) {
                    System.out.println( entry.getName() );
                    System.out.println( entry.getUserName() + "/" + entry.getGroupName() );
                }
            } catch ( FileNotFoundException e ) {
                e.printStackTrace();
            } catch ( IOException e ) {
                e.printStackTrace();
            }
        }
    }
    {code}

    With 1.7 the test case output this:

    {code}
    redis-dist-2.8.3_1/bin/
    /
    redis-dist-2.8.3_1/bin/redis-server
    jenkins/jenkins
    redis-dist-2.8.3_1/bin/redis-cli
    jenkins/jenkins
    {code}

    With 1.8 it fails as soon as it reaches the entry with the unset values, which is the first one. The archive is created using the Maven assembly plugin, and I tried the same with the Maven Ant tasks; both generate an archive in which the username and group are not set for at least some entries. You can download the archive from http://heli0s.darktech.org/redis/2.8.3_1/redis-dist-2.8.3_1-linux.tar.gz

    If you run tar -tvzf on the file you see this report:

    {code}
    drwxr-xr-x 0/0                   0 2014-04-18 09:43 redis-dist-2.8.3_1-SNAPSHOT/bin/
    -rwxr-xr-x pknobel/pknobel 3824588 2014-01-02 14:58 redis-dist-2.8.3_1-SNAPSHOT/bin/redis-cli
    -rwxr-xr-x pknobel/pknobel 5217234 2014-01-02 14:58 redis-dist-2.8.3_1-SNAPSHOT/bin/redis-server
    {code}

    The owner 0/0 probably indicates that it is not set, even though 0 is the root user id; a file actually owned by root would show up as root/root.

    Apache's JIRA Issue Tracker | 3 years ago | Philipp Knobel
    java.io.IOException: Error detected parsing the header
  3. 0

    addArchivedFileSet to ZipArchiver does not work with tar archives

    GitHub | 1 year ago | plamentotev
    org.codehaus.plexus.archiver.ArchiverException: Problem creating zip: Execution exception (and the archive is probably corrupt but I could not delete it)
  4. 0

    error on zipfile decompression (on method closeEntry)

    Oracle Community | 2 decades ago | 843802
    java.io.IOException: Push back buffer is full
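
    For context on result 2 above: the IllegalArgumentException comes from parsing an 8-byte numeric tar header field that contains only space/NUL padding instead of an octal number. The following is a minimal sketch, not the commons-compress implementation, of a lenient parse that treats such an all-padding field as 0 (roughly the "unset owner" behaviour the reporter saw with 1.7); the class and method names are made up for illustration.

    {code}
    import java.nio.charset.StandardCharsets;

    // Hypothetical helper, not part of commons-compress: parse an octal tar
    // header field, but treat a field holding only NUL/space padding as
    // "unset" (0) instead of rejecting it.
    public class LenientOctalField {

        static long parseOctalLenient(final byte[] buffer, final int offset, final int length) {
            long result = 0;
            boolean sawDigit = false;
            for (int i = offset; i < offset + length; i++) {
                final byte b = buffer[i];
                if (b == 0) {
                    break;            // NUL terminates the field
                }
                if (b == ' ') {
                    continue;         // tolerate space padding anywhere in the field
                }
                if (b < '0' || b > '7') {
                    throw new IllegalArgumentException(
                            "Invalid byte " + b + " at offset " + (i - offset));
                }
                result = (result << 3) + (b - '0');
                sawDigit = true;
            }
            return sawDigit ? result : 0; // all-padding field -> 0, i.e. no owner recorded
        }

        public static void main(final String[] args) {
            // An 8-byte uid/gid field that is all spaces with a trailing NUL,
            // similar to the field the strict parser rejects in result 2.
            final byte[] field = "       \0".getBytes(StandardCharsets.US_ASCII);
            System.out.println(parseOctalLenient(field, 0, field.length)); // prints 0
        }
    }
    {code}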

    Root Cause Analysis

    java.io.IOException: input buffer is closed
        at org.apache.commons.compress.archivers.tar.TarBuffer.readRecord(TarBuffer.java:190)
        at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getRecord(TarArchiveInputStream.java:302)
        at org.apache.commons.compress.archivers.tar.TarArchiveInputStream.getNextTarEntry(TarArchiveInputStream.java:230)
        at com.lsr.TarMapper.call(TarMapper.java:53)
        at com.lsr.TarMapper.call(TarMapper.java:1)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:129)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:129)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
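
    The first application frame is com.lsr.TarMapper.call (line 53), which is not shown here. TarBuffer.readRecord() throws "input buffer is closed" once the TarArchiveInputStream has been closed, so the usual trigger is a getNextTarEntry() call made after close(), for example when a map function hands Spark a lazily consumed iterator but closes the stream before that iterator is drained. Below is a minimal sketch, assuming plain commons-compress and nothing about the actual TarMapper code (TarEntryLister and listEntries are illustrative names), of consuming the archive eagerly before the stream is closed:

    {code}
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
    import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

    public class TarEntryLister {

        // Collect every entry name while the stream is still open and only
        // close it afterwards. Advancing getNextTarEntry() after close() is
        // what produces "input buffer is closed".
        public static List<String> listEntries(InputStream rawTarStream) throws IOException {
            List<String> names = new ArrayList<>();
            try (TarArchiveInputStream tar = new TarArchiveInputStream(rawTarStream)) {
                TarArchiveEntry entry;
                while ((entry = tar.getNextTarEntry()) != null) {
                    names.add(entry.getName());
                }
            } // the tar stream is closed here; nothing below may read from it
            return names;
        }
    }
    {code}

    Because the returned list is fully materialised, Spark (or any other caller) can iterate it after the stream is gone without ever touching the closed buffer.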