java.lang.AssertionError: assertion failed: No predefined schema found, and no Parquet data files or summary files found under file:/tmp/out.vds/rdd.parquet.

GitHub | tpoterba | 8 months ago
  1. VDS read fails when RDD is empty

     GitHub | 8 months ago | tpoterba
     java.lang.AssertionError: assertion failed: No predefined schema found, and no Parquet data files or summary files found under file:/tmp/out.vds/rdd.parquet.
  2. Samebug tip: Check that you are using the right path.
  3. Can't deploy jobserver in localhost

     GitHub | 2 years ago | prayagupd
     java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobErroredOut
  4. Tests Failing

     GitHub | 3 years ago | paulmagid
     java.lang.AssertionError: assertion failed: timeout (3 seconds) while expecting 2 messages
  5. GitHub comment 72#74986726

     GitHub | 2 years ago | velvia
     java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsgClass waiting for class spark.jobserver.CommonMessages$JobErroredOut
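
The tip above ("check that you are using the right path") can be automated before handing the path to Spark: the assertion fires when the target directory exists but contains no Parquet data or summary files, which is exactly what happens when an empty RDD is written out. A minimal, Spark-free sketch of such a pre-flight check (the class and method names here are ours for illustration, not part of Spark or Hail):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ParquetPathCheck {
    // Returns true only if `dir` is a directory containing at least one
    // *.parquet entry; checking this before read.parquet(...) turns the
    // opaque AssertionError into a clear, actionable failure.
    static boolean hasParquetFiles(String dir) throws IOException {
        Path p = Paths.get(dir);
        if (!Files.isDirectory(p)) return false;
        // DirectoryStream with a glob lazily lists matching entries only
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(p, "*.parquet")) {
            return entries.iterator().hasNext();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("rdd.parquet");
        System.out.println(hasParquetFiles(tmp.toString())); // false: directory is empty
        Files.createFile(tmp.resolve("part-00000.parquet"));
        System.out.println(hasParquetFiles(tmp.toString())); // true: a data file exists
    }
}
```

In the reported case, guarding the read (or refusing to write a VDS from an empty RDD in the first place) would avoid the crash.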

  1. jstrayer 9 times, last 2 weeks ago
  2. tyson925 1 time, last 5 months ago
  3. rp 1 time, last 5 months ago
  4. poroszd 1 time, last 8 months ago
18 unregistered visitors

Root Cause Analysis

  1. java.lang.AssertionError

    assertion failed: No predefined schema found, and no Parquet data files or summary files found under file:/tmp/out.vds/rdd.parquet.

    at scala.Predef$.assert()
  2. Scala
    Predef$.assert
    1. scala.Predef$.assert(Predef.scala:179)
    1 frame
  3. org.apache.spark
    ParquetRelation$MetadataCache$$anonfun$13.apply
    1. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache.org$apache$spark$sql$execution$datasources$parquet$ParquetRelation$MetadataCache$$readSchema(ParquetRelation.scala:478)
    2. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache$$anonfun$13.apply(ParquetRelation.scala:404)
    3. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache$$anonfun$13.apply(ParquetRelation.scala:404)
    3 frames
  4. Scala
    Option.orElse
    1. scala.Option.orElse(Option.scala:257)
    1 frame
  5. org.apache.spark
    ParquetRelation$$anonfun$6.apply
    1. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache.refresh(ParquetRelation.scala:404)
    2. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.org$apache$spark$sql$execution$datasources$parquet$ParquetRelation$$metadataCache$lzycompute(ParquetRelation.scala:145)
    3. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.org$apache$spark$sql$execution$datasources$parquet$ParquetRelation$$metadataCache(ParquetRelation.scala:143)
    4. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$6.apply(ParquetRelation.scala:196)
    5. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$6.apply(ParquetRelation.scala:196)
    5 frames
  6. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:120)
    1 frame
  7. org.apache.spark
    ParquetRelation.dataSchema
    1. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.dataSchema(ParquetRelation.scala:196)
    1 frame
  8. Spark Project SQL
    HadoopFsRelation.schema
    1. org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:561)
    2. org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:560)
    2 frames
  9. org.apache.spark
    LogicalRelation.<init>
    1. org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:31)
    1 frame
  10. Spark Project SQL
    DataFrameReader.parquet
    1. org.apache.spark.sql.SQLContext.baseRelationToDataFrame(SQLContext.scala:389)
    2. org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:267)
    2 frames
  11. org.broadinstitute.hail
    Main$$anonfun$runCommands$1.apply
    1. org.broadinstitute.hail.variant.VariantSampleMatrix$.read(VariantSampleMatrix.scala:132)
    2. org.broadinstitute.hail.driver.Read$.run(Read.scala:29)
    3. org.broadinstitute.hail.driver.Read$.run(Read.scala:6)
    4. org.broadinstitute.hail.driver.Command.runCommand(Command.scala:238)
    5. org.broadinstitute.hail.driver.Main$.runCommand(Main.scala:86)
    6. org.broadinstitute.hail.driver.Main$$anonfun$runCommands$1$$anonfun$1.apply(Main.scala:111)
    7. org.broadinstitute.hail.driver.Main$$anonfun$runCommands$1$$anonfun$1.apply(Main.scala:111)
    8. org.broadinstitute.hail.Utils$.time(Utils.scala:1185)
    9. org.broadinstitute.hail.driver.Main$$anonfun$runCommands$1.apply(Main.scala:110)
    10. org.broadinstitute.hail.driver.Main$$anonfun$runCommands$1.apply(Main.scala:104)
    10 frames
  12. Scala
    ArrayOps$ofRef.foldLeft
    1. scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
    2. scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
    3. scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:108)
    3 frames
  13. org.broadinstitute.hail
    Main.main
    1. org.broadinstitute.hail.driver.Main$.runCommands(Main.scala:104)
    2. org.broadinstitute.hail.driver.Main$.main(Main.scala:275)
    3. org.broadinstitute.hail.driver.Main.main(Main.scala)
    3 frames
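
Frame 1 shows the error originates from `scala.Predef.assert`, which throws `java.lang.AssertionError` with `"assertion failed: "` prepended to the message built in `readSchema` (ParquetRelation.scala:478) when neither a schema nor any data files are found. A minimal sketch of that mechanism (the mimic method below is ours, not Scala's actual implementation):

```java
public class AssertSketch {
    // Mimics scala.Predef.assert(cond, msg): on failure it throws
    // AssertionError with "assertion failed: " prepended to msg.
    static void scalaStyleAssert(boolean cond, String message) {
        if (!cond) throw new AssertionError("assertion failed: " + message);
    }

    public static void main(String[] args) {
        try {
            // readSchema-style check: no schema and no data files found
            scalaStyleAssert(false,
                "No predefined schema found, and no Parquet data files or "
                + "summary files found under file:/tmp/out.vds/rdd.parquet.");
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // reproduces the first line of the trace
        }
    }
}
```

This is why the top of the report reads `assertion failed: No predefined schema found, ...`: the prefix comes from the assertion helper, and the rest from Spark's Parquet metadata cache.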