htsjdk.tribble.TribbleException: Input stream does not contain a BCF encoded file; BCF magic header info not found, at record 0 with position 0:

Google Groups | Sergei Iakhnin | 2 years ago
Here are the best solutions we found on the Internet.
  1. Exception converting VCF to ADAM
    Google Groups | 2 years ago | Sergei Iakhnin
    htsjdk.tribble.TribbleException: Input stream does not contain a BCF encoded file; BCF magic header info not found, at record 0 with position 0:
  2. ./gatk-launch CountVariantsSpark --variant not reading gz vcf
    GitHub | 10 months ago | tushu1232
    htsjdk.tribble.TribbleException: Input stream does not contain a BCF encoded file; BCF magic header info not found, at record 0 with position 0:
  3. Exception during loading of vcf.gz file
    GitHub | 3 months ago | ddemaeyer
    htsjdk.tribble.TribbleException: Input stream does not contain a BCF encoded file; BCF magic header info not found, at record 0 with position 0:
  4. Importing directory of VCFs seems to fail
    GitHub | 1 year ago | fnothaft
    htsjdk.tribble.TribbleException: Input stream does not contain a BCF encoded file; BCF magic header info not found, at record 0 with position 0:

Root Cause Analysis

    htsjdk.tribble.TribbleException: Input stream does not contain a BCF encoded file; BCF magic header info not found, at record 0 with position 0:
        at htsjdk.variant.bcf2.BCF2Codec.error(BCF2Codec.java:492)
        at htsjdk.variant.bcf2.BCF2Codec.readHeader(BCF2Codec.java:153)
        at org.seqdoop.hadoop_bam.BCFSplitGuesser.<init>(BCFSplitGuesser.java:107)
        at org.seqdoop.hadoop_bam.BCFSplitGuesser.<init>(BCFSplitGuesser.java:88)
        at org.seqdoop.hadoop_bam.VCFInputFormat.addGuessedSplits(VCFInputFormat.java:254)
        at org.seqdoop.hadoop_bam.VCFInputFormat.fixBCFSplits(VCFInputFormat.java:242)
        at org.seqdoop.hadoop_bam.VCFInputFormat.getSplits(VCFInputFormat.java:221)
        at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:98)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
        at org.apache.spark.rdd.FlatMappedRDD.getPartitions(FlatMappedRDD.scala:30)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
        at org.apache.spark.rdd.FlatMappedRDD.getPartitions(FlatMappedRDD.scala:30)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1326)
        at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1008)
        at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:890)
        at org.apache.spark.rdd.InstrumentedPairRDDFunctions.saveAsNewAPIHadoopFile(InstrumentedPairRDDFunctions.scala:487)
        at org.bdgenomics.adam.rdd.ADAMRDDFunctions$$anonfun$adamParquetSave$1.apply$mcV$sp(ADAMRDDFunctions.scala:73)
        at org.bdgenomics.adam.rdd.ADAMRDDFunctions$$anonfun$adamParquetSave$1.apply(ADAMRDDFunctions.scala:58)
        at org.bdgenomics.adam.rdd.ADAMRDDFunctions$$anonfun$adamParquetSave$1.apply(ADAMRDDFunctions.scala:58)
        at org.apache.spark.rdd.Timer.time(Timer.scala:57)
        at org.bdgenomics.adam.rdd.ADAMRDDFunctions.adamParquetSave(ADAMRDDFunctions.scala:58)
        at org.bdgenomics.adam.rdd.ADAMRDDFunctions.adamParquetSave(ADAMRDDFunctions.scala:44)
        at org.bdgenomics.adam.cli.Vcf2ADAM.run(Vcf2ADAM.scala:76)
        at org.bdgenomics.utils.cli.BDGSparkCommand$class.run(BDGCommand.scala:53)
        at org.bdgenomics.adam.cli.Vcf2ADAM.run(Vcf2ADAM.scala:55)
        at org.bdgenomics.adam.cli.ADAMMain$.main(ADAMMain.scala:105)
        at org.bdgenomics.adam.cli.ADAMMain.main(ADAMMain.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)