java.lang.UnsatisfiedLinkError: unknown

GitHub | magg | 1 week ago
  1. GitHub comment 9#271895627

     GitHub | 1 week ago | magg
     java.lang.UnsatisfiedLinkError: unknown
  2. Unable to run the example

     GitHub | 3 months ago | czheo
     java.lang.NumberFormatException: For input string: "java/lang/ClassNotFoundExceptionava/lang/NoSunjava/lang/NoSuchMethodException"
  3. Binary Type not Supported in a Succinct Data Frame

     GitHub | 2 months ago | kant111
     java.lang.IllegalArgumentException: Unexpected type. BinaryType
  5. Minimal example:

     {noformat}
     rdd <- textFile(sc, "./README.md")
     lengths <- lapply(rdd, function(x) { length(x) })
     take(lengths, 5)    # works
     lengths10 <- lapply(lengths, function(x) { x + 10 })
     take(lengths10, 2)  # breaks
     {noformat}

     Stacktrace:

     {noformat}
     Exception in thread "stdin writer for R" java.lang.ClassCastException: java.lang.String cannot be cast to [B
         at edu.berkeley.cs.amplab.sparkr.RRDD$$anon$4$$anonfun$run$3.apply(RRDD.scala:312)
         at edu.berkeley.cs.amplab.sparkr.RRDD$$anon$4$$anonfun$run$3.apply(RRDD.scala:310)
         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
         at edu.berkeley.cs.amplab.sparkr.RRDD$$anon$4.run(RRDD.scala:310)
     Error in readBin(con, raw(), as.integer(dataLen), endian = "big") :
       invalid 'n' argument
     Calls: unserialize -> readRawLen -> readBin
     Execution halted
     14/11/17 12:22:31 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
     java.lang.NullPointerException
         at edu.berkeley.cs.amplab.sparkr.RRDD.compute(RRDD.scala:128)
         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
         at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
         at org.apache.spark.scheduler.Task.run(Task.scala:54)
         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
         at java.lang.Thread.run(Thread.java:695)
     14/11/17 12:22:31 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.lang.NullPointerException
         edu.berkeley.cs.amplab.sparkr.RRDD.compute(RRDD.scala:128)
         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
         org.apache.spark.scheduler.Task.run(Task.scala:54)
         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
         java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
         java.lang.Thread.run(Thread.java:695)
     14/11/17 12:22:31 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
     Error in .jcall(jrdd, "[Ljava/util/List;", "collectPartitions", .jarray(as.integer(index))) :
       org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times,
       most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.lang.NullPointerException
         edu.berkeley.cs.amplab.sparkr.RRDD.compute(RRDD.scala:128)
         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
         org.apache.spark.scheduler.Task.run(Task.scala:54)
         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
         java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
         java.lang.Thread.run(Thread.java:695)
     {noformat}

     This is likely related to [this line|https://github.com/amplab-extras/SparkR-pkg/blob/master/pkg/R/RDD.R#L122]; changing it to FALSE seems to eliminate the issue. One workaround is to cache the `lengths` RDD first. We should figure out what exactly the issue is and, in the meantime, perhaps add more documentation in the code on how pipelining works (e.g. state invariants on some key variables).

     JIRA | 2 years ago | Zongheng Yang
     java.lang.ClassCastException: java.lang.String cannot be cast to [B

    Root Cause Analysis

    1. java.lang.UnsatisfiedLinkError

      unknown

      at jnr.ffi.provider.jffi.AsmRuntime.newUnsatisifiedLinkError()
    2. JRuby Main Maven Artifact
      AsmRuntime.newUnsatisifiedLinkError
      1. jnr.ffi.provider.jffi.AsmRuntime.newUnsatisifiedLinkError(AsmRuntime.java:40)
      1 frame
    3. z3
      Z3Interface$jnr$ffi$1.Z3_mk_config
      1. z3.Z3Interface$jnr$ffi$1.Z3_mk_config(Unknown Source)
      1 frame
    4. z3.scala
      Z3Config.<init>
      1. z3.scala.Z3Config.<init>(Z3Config.scala:6)
      1 frame
    5. edu.berkeley.cs
      Solver$$anonfun$6.apply
      1. edu.berkeley.cs.boom.molly.derivations.Z3Solver$.solve(Z3Solver.scala:35)
      2. edu.berkeley.cs.boom.molly.derivations.Solver$$anonfun$6.apply(Solver.scala:50)
      3. edu.berkeley.cs.boom.molly.derivations.Solver$$anonfun$6.apply(Solver.scala:50)
      3 frames
    6. Scala
      List.flatMap
      1. scala.collection.immutable.List.flatMap(List.scala:327)
      1 frame
    7. edu.berkeley.cs
      SyncFTChecker$$anonfun$main$2.apply
      1. edu.berkeley.cs.boom.molly.derivations.Solver$class.solve(Solver.scala:50)
      2. edu.berkeley.cs.boom.molly.derivations.Z3Solver$.solve(Z3Solver.scala:11)
      3. edu.berkeley.cs.boom.molly.Verifier.verify(Verifier.scala:174)
      4. edu.berkeley.cs.boom.molly.SyncFTChecker$.check(SyncFTChecker.scala:82)
      5. edu.berkeley.cs.boom.molly.SyncFTChecker$$anonfun$main$2.apply(SyncFTChecker.scala:105)
      6. edu.berkeley.cs.boom.molly.SyncFTChecker$$anonfun$main$2.apply(SyncFTChecker.scala:102)
      6 frames
    8. Scala
      Option.map
      1. scala.Option.map(Option.scala:146)
      1 frame
    9. edu.berkeley.cs
      SyncFTChecker.main
      1. edu.berkeley.cs.boom.molly.SyncFTChecker$.main(SyncFTChecker.scala:102)
      2. edu.berkeley.cs.boom.molly.SyncFTChecker.main(SyncFTChecker.scala)
      2 frames
    10. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:498)
      4 frames
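    The first frame above shows jnr-ffi manufacturing an UnsatisfiedLinkError because it could not bind the native Z3_mk_config symbol, i.e. the z3 native library was not found or not loadable. As a minimal sketch of the same JVM behavior (the library name below is made up for illustration; it is not the real z3 binding, which goes through jnr-ffi's LibraryLoader rather than System.loadLibrary):

```java
public class MissingNativeLibDemo {
    public static void main(String[] args) {
        try {
            // Hypothetical library name that is certain not to exist on java.library.path.
            System.loadLibrary("no_such_native_lib_demo");
        } catch (UnsatisfiedLinkError e) {
            // The JVM throws UnsatisfiedLinkError, not an Exception, when native linking fails.
            System.out.println("caught: " + e.getClass().getName());
        }
    }
}
```

    If the native library is actually installed but just not found, pointing the JVM at its directory (e.g. `-Djava.library.path=/path/to/lib`, or `LD_LIBRARY_PATH` on Linux) typically resolves this class of error.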