
Solutions on the web

via GitHub by ooliynyk, 1 year ago
    The provided VCF file is malformed at approximately line number 1787: The reference allele cannot be missing

via GitHub by inti, 2 years ago
    The provided VCF file is malformed at approximately line number 108: unparsable vcf record with allele S, for input source: /media/TeraData/ipedroso/ANALYSES/DEKKERA/variation/AWRI1499_ref/awri1499_vsawri1499Ref/work/freebayes/AWRI1499_contig1_scaffold1/1-AWRI1499_contig1_scaffold1_0_12137.vcf.gz

via GitHub by razZ0r, 1 year ago
    The provided VCF file is malformed at approximately line number 54693: BUG: log10PError cannot be > 0 : 1.0, for input source: /pico/scratch/userexternal/esebesty/bcbio_run/uuid_batch5932/work/vardict/21/LUAD_5932-21_0_41399576.vcf.gz

via GitHub by whitejbiii, 2 years ago
    The provided VCF file is malformed at approximately line number 649: unparsable vcf record with allele *, for input source: /media/jawhite/Data/662-exome-trio/work/joint/gatk-haplotype-joint/trio-bwa-j/1/trio-bwa-j-1_0_15543565.vcf.gz

via GitHub by heuermh, 1 year ago
    The provided VCF file is malformed at approximately line number 15364: Duplicate allele added to VariantContext: C
htsjdk.tribble.TribbleException: The provided VCF file is malformed at approximately line number 1787: The reference allele cannot be missing
	at htsjdk.variant.vcf.AbstractVCFCodec.generateException(AbstractVCFCodec.java:783)
	at htsjdk.variant.vcf.AbstractVCFCodec.checkAllele(AbstractVCFCodec.java:572)
	at htsjdk.variant.vcf.AbstractVCFCodec.parseAlleles(AbstractVCFCodec.java:531)
	at htsjdk.variant.vcf.AbstractVCFCodec.parseVCFLine(AbstractVCFCodec.java:336)
	at htsjdk.variant.vcf.AbstractVCFCodec.decodeLine(AbstractVCFCodec.java:279)
	at htsjdk.variant.vcf.AbstractVCFCodec.decode(AbstractVCFCodec.java:257)
	at org.seqdoop.hadoop_bam.VCFRecordReader.nextKeyValue(VCFRecordReader.java:144)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:163)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1034)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1034)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1034)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1206)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1042)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1014)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
	at org.apache.spark.scheduler.Task.run(Task.scala:88)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
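The top frames (AbstractVCFCodec.checkAllele via parseAlleles) show htsjdk rejecting a data line whose REF column is empty or ".". As a rough illustration only — this is a plain-Java sketch of that kind of per-line check, not htsjdk's actual implementation — a pre-scan like the following can locate the offending line before the file ever reaches a Spark/Hadoop-BAM pipeline (the class name VcfRefCheck is made up for this example):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical pre-validation sketch: flags VCF data lines whose REF field
// (column 4) is missing, the condition behind "The reference allele cannot
// be missing" in the stack trace above.
public class VcfRefCheck {

    /** Returns null if the line looks OK, otherwise a short error message. */
    static String checkLine(String line, int lineNumber) {
        if (line.startsWith("#")) {
            return null; // header and meta lines carry no REF column
        }
        String[] fields = line.split("\t", -1);
        if (fields.length < 8) {
            return "line " + lineNumber + ": expected at least 8 tab-separated fields";
        }
        String ref = fields[3]; // columns: CHROM POS ID REF ALT QUAL FILTER INFO
        if (ref.isEmpty() || ref.equals(".")) {
            return "line " + lineNumber + ": the reference allele cannot be missing";
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
            "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
            "1\t100\t.\tA\tG\t50\tPASS\t.",
            "1\t200\t.\t.\tT\t50\tPASS\t."   // missing REF -> rejected
        );
        for (int i = 0; i < lines.size(); i++) {
            String err = checkLine(lines.get(i), i + 1);
            System.out.println(err == null ? "line " + (i + 1) + ": ok" : err);
        }
    }
}
```

Running the sketch accepts the header and the well-formed record but reports the third line, mirroring the "approximately line number" position that htsjdk prints. For a real pipeline, fixing or filtering the malformed record (or regenerating the VCF from the upstream caller) is the usual remedy.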