java.lang.IllegalArgumentException

java.lang.IllegalArgumentException: Error constructing DecadentRead from Read({…})


Stack trace

java.lang.IllegalArgumentException: Error constructing DecadentRead from Read({…})
    at org.bdgenomics.adam.rich.DecadentRead$.apply(DecadentRead.scala:…)
    at org.bdgenomics.adam.rich.DecadentRead$.apply(DecadentRead.scala:…)
    at org.bdgenomics.adam.rich.DecadentRead$$anonfun$cloy$….apply(DecadentRead.scala:…)
    at org.bdgenomics.adam.rich.DecadentRead$$anonfun$cloy$….apply(DecadentRead.scala:…)
    at scala.collection.Iterator$$anon$….next(Iterator.scala:…)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:…)
    at org.apache.spark.rdd.RDD$$anonfun$count$….apply(RDD.scala:…)
    at org.apache.spark.rdd.RDD$$anonfun$count$….apply(RDD.scala:…)
    at org.apache.spark.SparkContext$$anonfun$runJob$….apply(SparkContext.scala:…)
    at org.apache.spark.SparkContext$$anonfun$runJob$….apply(SparkContext.scala:…)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:…)
    at org.apache.spark.scheduler.Task.run(Task.scala:…)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:…)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
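The frames show the failure surfacing while Spark materializes the RDD through count(): DecadentRead.cloy maps each record to a DecadentRead, and DecadentRead.apply appears to wrap any construction failure in this IllegalArgumentException. In practice that usually points at a malformed record, typically a read whose sequence or base-quality string is missing or of mismatched length. Below is a minimal pre-filter sketch, not ADAM's own API: it assumes a bdg-formats version whose record class is AlignmentRecord with getSequence/getQual accessors (field names vary across ADAM releases), and the helper name dropMalformedReads is made up for illustration.

    import org.apache.spark.rdd.RDD
    import org.bdgenomics.formats.avro.AlignmentRecord

    // Hypothetical pre-filter: drop records that commonly fail the
    // validation done while constructing a DecadentRead (missing
    // sequence or qualities, or a sequence/quality length mismatch)
    // before any code path that calls DecadentRead.cloy.
    def dropMalformedReads(rdd: RDD[AlignmentRecord]): RDD[AlignmentRecord] =
      rdd.filter { r =>
        // Avro string fields may come back as null (or as CharSequence,
        // depending on the configured stringType), so normalize first.
        val seq  = Option(r.getSequence).map(_.toString).getOrElse("")
        val qual = Option(r.getQual).map(_.toString).getOrElse("")
        seq.nonEmpty && qual.nonEmpty && qual != "*" && seq.length == qual.length
      }

Counting what the filter removes (rdd.count() minus dropMalformedReads(rdd).count()) is a quick way to see how many reads would trip the constructor, which is easier than digging the single offending Read out of executor logs.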


Users with the same issue

Sooraj Singh (2 times)
jk (once)