Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via Google Groups by Cristian COLA, 2 years ago
groovy.lang.MetaClassImpl.hasCustomStaticInvokeMethod()Z
via Google Groups by Jen, 1 year ago
groovy.lang.MetaClassImpl.hasCustomStaticInvokeMethod()Z
via Stack Overflow by Chris Webster, 2 years ago
groovy.lang.MetaClassImpl.hasCustomStaticInvokeMethod()Z
java.lang.NoSuchMethodError: groovy.lang.MetaClassImpl.hasCustomStaticInvokeMethod()Z
    at org.codehaus.groovy.vmplugin.v7.Selector$MethodSelector.chooseMeta(Selector.java:553)
    at org.codehaus.groovy.vmplugin.v7.Selector$MethodSelector.setCallSiteTarget(Selector.java:954)
    at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:214)
    at org.apache.tinkerpop.gremlin.groovy.loaders.GremlinLoader.load(GremlinLoader.groovy:27)
    at org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine.<init>(GremlinGroovyScriptEngine.java:189)
    at org.apache.tinkerpop.gremlin.hadoop.structure.io.script.ScriptRecordReader.initialize(ScriptRecordReader.java:68)
    at org.apache.tinkerpop.gremlin.hadoop.structure.io.script.ScriptInputFormat.createRecordReader(ScriptInputFormat.java:45)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:151)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:124)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:65)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
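A `NoSuchMethodError` like the one above means the class the JVM actually loaded at runtime does not declare the method the calling code was compiled against — here, typically a Groovy jar on the Spark executor classpath that is older than the one TinkerPop was built with (`MetaClassImpl.hasCustomStaticInvokeMethod()` only exists in newer Groovy releases). A minimal diagnostic sketch, using plain reflection, can confirm whether the loaded class has the method and which jar it came from. The class name `DiagnoseMethod` is hypothetical, and `java.lang.String` is used as the default target only so the sketch runs without Groovy on the classpath; in the failing job you would pass `groovy.lang.MetaClassImpl` and `hasCustomStaticInvokeMethod` instead.

```java
import java.security.CodeSource;
import java.util.Arrays;

public class DiagnoseMethod {

    /** True if clazz exposes a public no-arg method with the given name. */
    static boolean hasMethod(Class<?> clazz, String name) {
        return Arrays.stream(clazz.getMethods())
                     .anyMatch(m -> m.getName().equals(name)
                                    && m.getParameterCount() == 0);
    }

    /** Jar or classes directory the class was loaded from, or null for the bootstrap loader. */
    static String loadedFrom(Class<?> clazz) {
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        return src == null ? null : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical defaults; on the cluster you would run e.g.:
        //   java DiagnoseMethod groovy.lang.MetaClassImpl hasCustomStaticInvokeMethod
        String className  = args.length > 0 ? args[0] : "java.lang.String";
        String methodName = args.length > 1 ? args[1] : "isEmpty";

        Class<?> clazz = Class.forName(className);
        System.out.println(clazz.getName() + " has " + methodName + "(): "
                           + hasMethod(clazz, methodName));
        System.out.println("loaded from: " + loadedFrom(clazz));
    }
}
```

If the method is missing, the usual fix is to align the Groovy version on the executors with the one TinkerPop expects (or remove the stale duplicate jar), since the JVM links against whichever copy the class loader finds first.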