java.lang.AssertionError: assertion failed: No predefined schema found, and no Parquet data files or summary files found under file:/tmp/out.vds/rdd.parquet.

GitHub | tpoterba | 4 months ago
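The assertion is raised by Spark's Parquet schema discovery: ParquetRelation's MetadataCache.readSchema (see the frames under Root Cause Analysis below) requires either a caller-supplied schema or at least one Parquet footer to infer one from, and a directory written from an empty RDD can contain neither. A minimal repro sketch, assuming Spark 1.5.x behavior (the version the ParquetRelation frames below appear to come from); the schema and output path are illustrative only, not Hail's actual VDS layout:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

    object EmptyParquetRepro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("repro").setMaster("local[1]"))
        val sqlContext = new SQLContext(sc)

        // A zero-partition RDD runs no write tasks, so the output directory
        // gets no part files and no _metadata/_common_metadata summary
        // files -- only a _SUCCESS marker.
        val schema = StructType(Seq(StructField("id", IntegerType)))
        val empty = sqlContext.createDataFrame(sc.emptyRDD[Row], schema)
        empty.write.parquet("/tmp/out.vds/rdd.parquet")

        // No footers to infer a schema from, and no schema supplied to the
        // reader: this fails with the AssertionError above.
        sqlContext.read.parquet("/tmp/out.vds/rdd.parquet").count()
      }
    }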

Similar Exceptions

  1. VDS read fails when RDD is empty
     GitHub | 4 months ago | tpoterba
     java.lang.AssertionError: assertion failed: No predefined schema found, and no Parquet data files or summary files found under file:/tmp/out.vds/rdd.parquet.
  2. Cannot use coref
     GitHub | 3 years ago | schmmd
     java.lang.AssertionError: assertion failed: Could not find annotator for goal class cc.factorie.app.nlp.phrase.NounPhraseList, map includes class cc.factorie.app.nlp.phrase.NumberLabel, class cc.factorie.app.nlp.ner.NerTag, class cc.factorie.app.nlp.lemma.SimplifyDigitsTokenLemma, class cc.factorie.app.nlp.coref.mention.ParseBasedMentionList, class cc.factorie.app.nlp.pos.PennPosTag, class cc.factorie.app.nlp.segment.PlainNormalizedTokenString, class cc.factorie.app.nlp.Token, class cc.factorie.app.nlp.ner.BilouConllNerTag, class cc.factorie.app.nlp.coref.mention.MentionEntityType, class cc.factorie.util.coref.GenericEntityMap, class cc.factorie.app.nlp.lemma.CollapseDigitsTokenLemma, class cc.factorie.app.nlp.ner.BilouOntonotesNerTag, class cc.factorie.app.nlp.phrase.GenderLabel, class cc.factorie.app.nlp.lemma.WordNetTokenLemma, class cc.factorie.app.nlp.parse.ParseTree, class cc.factorie.app.nlp.Sentence, class cc.factorie.app.nlp.lemma.PorterTokenLemma, class cc.factorie.app.nlp.coref.mention.NerMentionList, class cc.factorie.app.nlp.lemma.LowercaseTokenLemma
  3. Error when running profile with Cypher query
     GitHub | 3 years ago | peterneubauer
     java.lang.AssertionError: assertion failed: Can't profile the same pipe twice: NullPipe(SymbolTable(Map(a -> Node, b -> Node)),Eager() Filter(pred="Property(b,name(0)) == Literal(Central)") NodeByLabel(label="Division", identifier="b") Filter(pred="Property(a,name(0)) == Literal(East)") NodeByLabel(label="Conference", identifier="a"))
  4. Can't profile UNION queries
     GitHub | 3 years ago | nawroth
     java.lang.AssertionError: assertion failed: Can't profile the same pipe twice
  5. Test Fails: ZMQMessage should forward ZMQ Strings and KernelMessage to Relay
     GitHub | 2 years ago | rcsenkbeil
     java.lang.AssertionError: assertion failed: expected (Vector({"msg_id":"<UUID>","username":"<USER>","session":"<SESSION>","msg_type":"clear_output","version":"<VERSION>"}, {"msg_id":"<PARENT-UUID>","username":"<PARENT-USER>","session":"<PARENT-SESSION>","msg_type":"clear_output","version":"<PARENT-VERSION>"}, {}, <CONTENT>),KernelMessage(List(<ID>),<SIGNATURE>,Header(<UUID>,<USER>,<SESSION>,clear_output,<VERSION>),Header(<PARENT-UUID>,<PARENT-USER>,<PARENT-SESSION>,clear_output,<PARENT-VERSION>),Map(timestamp -> 1419205415730),<CONTENT>)), found (Vector({"msg_id":"<UUID>","username":"<USER>","session":"<SESSION>","msg_type":"clear_output","version":"<VERSION>"}, {"msg_id":"<PARENT-UUID>","username":"<PARENT-USER>","session":"<PARENT-SESSION>","msg_type":"clear_output","version":"<PARENT-VERSION>"}, {}, <CONTENT>),KernelMessage(List(<ID>),<SIGNATURE>,Header(<UUID>,<USER>,<SESSION>,clear_output,<VERSION>),Header(<PARENT-UUID>,<PARENT-USER>,<PARENT-SESSION>,clear_output,<PARENT-VERSION>),Map(timestamp -> 1419205415729),<CONTENT>))

Root Cause Analysis

  1. java.lang.AssertionError

    assertion failed: No predefined schema found, and no Parquet data files or summary files found under file:/tmp/out.vds/rdd.parquet.

    at scala.Predef$.assert()
  2. Scala
    Predef$.assert
    1. scala.Predef$.assert(Predef.scala:179)
    1 frame
  3. org.apache.spark
    ParquetRelation$MetadataCache$$anonfun$13.apply
    1. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache.org$apache$spark$sql$execution$datasources$parquet$ParquetRelation$MetadataCache$$readSchema(ParquetRelation.scala:478)
    2. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache$$anonfun$13.apply(ParquetRelation.scala:404)
    3. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache$$anonfun$13.apply(ParquetRelation.scala:404)
    3 frames
  4. Scala
    Option.orElse
    1. scala.Option.orElse(Option.scala:257)
    1 frame
  5. org.apache.spark
    ParquetRelation$$anonfun$6.apply
    1. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache.refresh(ParquetRelation.scala:404)
    2. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.org$apache$spark$sql$execution$datasources$parquet$ParquetRelation$$metadataCache$lzycompute(ParquetRelation.scala:145)
    3. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.org$apache$spark$sql$execution$datasources$parquet$ParquetRelation$$metadataCache(ParquetRelation.scala:143)
    4. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$6.apply(ParquetRelation.scala:196)
    5. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$6.apply(ParquetRelation.scala:196)
    5 frames
  6. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:120)
    1 frame
  7. org.apache.spark
    ParquetRelation.dataSchema
    1. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.dataSchema(ParquetRelation.scala:196)
    1 frame
  8. Spark Project SQL
    HadoopFsRelation.schema
    1. org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:561)
    2. org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:560)
    2 frames
  9. org.apache.spark
    LogicalRelation.<init>
    1. org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:31)
    1 frame
  10. Spark Project SQL
    DataFrameReader.parquet
    1. org.apache.spark.sql.SQLContext.baseRelationToDataFrame(SQLContext.scala:389)
    2. org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:267)
    2 frames
  11. org.broadinstitute.hail
    Main$$anonfun$runCommands$1.apply
    1. org.broadinstitute.hail.variant.VariantSampleMatrix$.read(VariantSampleMatrix.scala:132)
    2. org.broadinstitute.hail.driver.Read$.run(Read.scala:29)
    3. org.broadinstitute.hail.driver.Read$.run(Read.scala:6)
    4. org.broadinstitute.hail.driver.Command.runCommand(Command.scala:238)
    5. org.broadinstitute.hail.driver.Main$.runCommand(Main.scala:86)
    6. org.broadinstitute.hail.driver.Main$$anonfun$runCommands$1$$anonfun$1.apply(Main.scala:111)
    7. org.broadinstitute.hail.driver.Main$$anonfun$runCommands$1$$anonfun$1.apply(Main.scala:111)
    8. org.broadinstitute.hail.Utils$.time(Utils.scala:1185)
    9. org.broadinstitute.hail.driver.Main$$anonfun$runCommands$1.apply(Main.scala:110)
    10. org.broadinstitute.hail.driver.Main$$anonfun$runCommands$1.apply(Main.scala:104)
    10 frames
  12. Scala
    ArrayOps$ofRef.foldLeft
    1. scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
    2. scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
    3. scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:108)
    3 frames
  13. org.broadinstitute.hail
    Main.main
    1. org.broadinstitute.hail.driver.Main$.runCommands(Main.scala:104)
    2. org.broadinstitute.hail.driver.Main$.main(Main.scala:275)
    3. org.broadinstitute.hail.driver.Main.main(Main.scala)
    3 frames
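
Given the root cause above, one hedged workaround on the read side: since the assertion only fires when no schema is predefined, passing the expected schema through DataFrameReader.schema(...) should skip footer inference entirely and yield an empty DataFrame instead of throwing. The StructType below is a placeholder, not Hail's actual VDS row schema:

    import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

    // With a caller-supplied schema, readSchema never needs to inspect
    // (nonexistent) Parquet footers, so the assert is not reached.
    val df = sqlContext.read
      .schema(StructType(Seq(StructField("id", IntegerType))))
      .parquet("/tmp/out.vds/rdd.parquet")
    // expected: df.count() == 0

On the write side, repartitioning the empty RDD to at least one partition before saving should force a single write task to run and emit a part file whose footer carries the schema, which would make the directory readable again without a predefined schema.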