
Solutions on the web

via Stack Overflow by user6742737, 1 year ago
  --arg ARG                Argument to be passed to your application's main class.
                           Multiple invocations are possible, each will be passed in order.
  --num-executors NUM      Number of executors to start (Default: 2)
  --executor-cores NUM     Number of cores per executor (Default: 1).
via Stack Overflow by facha, 2 years ago
Explicitly setting the number of executors is not compatible with spark.dynamicAllocation.enabled!
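
This check fires when a job pins an explicit executor count (--num-executors on the command line, or spark.executor.instances in the config) while spark.dynamicAllocation.enabled is true; older Spark releases reject the combination outright. A minimal PySpark sketch of the two mutually exclusive setups (the app name and numbers are placeholders):

    from pyspark import SparkConf, SparkContext

    # Option A: let dynamic allocation size the job.
    conf = (SparkConf()
            .setAppName("PythonPi")
            .set("spark.dynamicAllocation.enabled", "true")
            .set("spark.shuffle.service.enabled", "true"))  # dynamic allocation needs the external shuffle service

    # Option B: pin a fixed executor count instead; never combine it with Option A.
    # conf = (SparkConf()
    #         .setAppName("PythonPi")
    #         .set("spark.executor.instances", "2"))

    sc = SparkContext(conf=conf)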
java.lang.IllegalArgumentException: Unknown/unsupported param List(--executor-cores, , --files, s3://pythonpicode/PythonPi.py, --primary-py-file, PythonPi.py, --class, org.apache.spark.deploy.PythonRunner)
Usage: org.apache.spark.deploy.yarn.Client [options]
Options:
  --jar JAR_PATH           Path to your application's JAR file (required in yarn-cluster
                           mode)
  --class CLASS_NAME       Name of your application's main class (required)
  --primary-py-file        A main Python file
  --arg ARG                Argument to be passed to your application's main class.
                           Multiple invocations are possible, each will be passed in order.
  --num-executors NUM      Number of executors to start (Default: 2)
  --executor-cores NUM     Number of cores per executor (Default: 1).
  --driver-memory MEM      Memory for driver (e.g. 1000M, 2G) (Default: 512 Mb)
  --driver-cores NUM       Number of cores used by the driver (Default: 1).
  --executor-memory MEM    Memory per executor (e.g. 1000M, 2G) (Default: 1G)
  --name NAME              The name of your application (Default: Spark)
  --queue QUEUE            The hadoop queue to use for allocation requests (Default:
                           'default')
  --addJars jars           Comma separated list of local jars that want SparkContext.addJar
                           to work with.
  --py-files PY_FILES      Comma-separated list of .zip, .egg, or .py files to
                           place on the PYTHONPATH for Python apps.
  --files files            Comma separated list of files to be distributed with the job.
  --archives archives      Comma separated list of archives to be distributed with the job.
	at org.apache.spark.deploy.yarn.ClientArguments.parseArgs(ClientArguments.scala:228)
	at org.apache.spark.deploy.yarn.ClientArguments.<init>(ClientArguments.scala:56)
	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:646)
	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
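
Note the empty element after --executor-cores in the exception message, List(--executor-cores, , ...): that empty string is the actual unknown/unsupported param, meaning the flag reached yarn.Client without a value. This usually happens when the value is filled in from an unset variable in the submitting script. A minimal sketch of a submission in which every flag carries a non-empty value (the S3 path comes from the trace above; the numbers are placeholders, and spark-submit is assumed to be on the PATH):

    import subprocess

    # Every option carries a real value; an empty string anywhere in this
    # list would reproduce the "Unknown/unsupported param" error above.
    cmd = [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", "cluster",
        "--num-executors", "2",
        "--executor-cores", "1",
        "--files", "s3://pythonpicode/PythonPi.py",
        "s3://pythonpicode/PythonPi.py",
    ]
    subprocess.run(cmd, check=True)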