Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message.

Recommended solutions based on your search

Samebug tips

  1. Provide a working directory instead of pointing it to a file or an empty directory in ProcessBuilder (see the sketch after this list).

  2. Expert tip: This is a bug in some versions of the Arduino IDE. Try updating to version 1.6.12 or later.
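For tip 1, here is a minimal sketch of what "provide a working directory" looks like with ProcessBuilder. The command, the script name, and the /tmp/work path are illustrative assumptions, not values taken from the report above; the point is that directory() must receive an existing directory, or start() fails with an IOException (error=2).

    import java.io.File;
    import java.io.IOException;

    public class WorkingDirExample {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Command and arguments are illustrative only.
            ProcessBuilder pb = new ProcessBuilder("Rscript", "analysis.R");

            // Point directory() at an existing directory, not at a file or a
            // path that does not exist; otherwise start() throws IOException.
            File workDir = new File("/tmp/work"); // hypothetical working directory
            if (!workDir.isDirectory()) {
                throw new IOException("Working directory does not exist: " + workDir);
            }
            pb.directory(workDir);

            pb.inheritIO(); // forward the child's stdout/stderr to this process
            Process p = pb.start();
            System.out.println("Exit code: " + p.waitFor());
        }
    }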

Solutions on the web

via JIRA by May, 1 year ago
via apache.org by Unknown author, 2 years ago
Cannot run program "Rscript": java.io.IOException: error=2, No such file or directory
via JIRA by May, 2 years ago
via Google Groups by Unknown author, 8 months ago
Cannot run program "C:\Program Files\Java\jdk1.6.0_20/bin/java" (in directory "<http://build.openengsb.org/hudson/job/OpenEngSB/ws/")>: java.io.IOException: error=2, No such file or directory
via Stack Overflow by AnilCk, 1 year ago
Cannot run program "(crontab": error=2, No >such file or directory
via Jenkins JIRA by Piotr Gorzechowski, 1 year ago
Cannot run program "sonar-runner" (in directory "/var/lib/jenkins/workspace/tnk-hub"): error=2, No such file or directory
java.io.IOException: Cannot run program "Rscript": error=2, No such file or directory
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
	at edu.berkeley.cs.amplab.sparkr.RRDD.compute(RRDD.scala:105)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
	at org.apache.spark.scheduler.Task.run(Task.scala:53)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:722)
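In the trace above, error=2 (ENOENT) means the executor's ProcessBuilder could not find an Rscript binary on the worker node; the usual fix is to install R on every Spark worker or point the job at the binary's full path. Below is a hedged sketch of a defensive launch. The RSCRIPT_PATH override variable and the /usr/bin/Rscript default location are assumptions for illustration, not anything specified by SparkR itself.

    import java.io.File;
    import java.io.IOException;

    public class RscriptLauncher {
        // Resolving an absolute path up front gives a clearer failure message
        // than letting ProcessBuilder.start() throw IOException error=2.
        static String resolveRscript() throws IOException {
            String configured = System.getenv("RSCRIPT_PATH"); // hypothetical override variable
            String candidate = (configured != null) ? configured : "/usr/bin/Rscript"; // assumed default location
            File exe = new File(candidate);
            if (!exe.isFile() || !exe.canExecute()) {
                throw new IOException("Rscript not found or not executable at " + candidate
                        + "; install R on this node or set RSCRIPT_PATH");
            }
            return exe.getAbsolutePath();
        }

        public static void main(String[] args) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(resolveRscript(), "--version")
                    .inheritIO()   // forward the child's stdout/stderr
                    .start();
            System.out.println("Exit code: " + p.waitFor());
        }
    }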