java.lang.IllegalArgumentException: Unknown/unsupported param List(--executor-cores, , --files, s3://pythonpicode/PythonPi.py, --primary-py-file, PythonPi.py, --class, org.apache.spark.deploy.PythonRunner)
Usage: org.apache.spark.deploy.yarn.Client [options]
Options:
  --jar JAR_PATH           Path to your application's JAR file (required in yarn-cluster mode)
  --class CLASS_NAME       Name of your application's main class (required)
  --primary-py-file        A main Python file
  --arg ARG                Argument to be passed to your application's main class.
                           Multiple invocations are possible, each will be passed in order.
  --num-executors NUM      Number of executors to start (Default: 2)
  --executor-cores NUM     Number of cores per executor (Default: 1).
  --driver-memory MEM      Memory for driver (e.g. 1000M, 2G) (Default: 512 Mb)
  --driver-cores NUM       Number of cores used by the driver (Default: 1).
  --executor-memory MEM    Memory per executor (e.g. 1000M, 2G) (Default: 1G)
  --name NAME              The name of your application (Default: Spark)
  --queue QUEUE            The hadoop queue to use for allocation requests (Default: 'default')
  --addJars jars           Comma separated list of local jars that want SparkContext.addJar to work with.
  --py-files PY_FILES      Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
  --files files            Comma separated list of files to be distributed with the job.
  --archives archives      Comma separated list of archives to be distributed with the job.

Stack Overflow | user6742737 | 6 months ago
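The empty element after --executor-cores in the rejected param list is the actual problem: the option reached org.apache.spark.deploy.yarn.Client with no value, so argument parsing aborts before the application starts. Given the question title, this likely comes from an EMR setup that could not auto-size executors for m4 instances and forwarded an empty setting. A minimal sketch of a submission that supplies every resource option explicitly; the S3 path is reused from the error above, and all other values are illustrative:

    # Hedged sketch: pass explicit values so nothing empty reaches yarn.Client
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 2 \
      --executor-cores 1 \
      --executor-memory 1g \
      --files s3://pythonpicode/PythonPi.py \
      PythonPi.py

With a concrete value for each option, ClientArguments.parseArgs never sees a stray empty token, and the List(...) error above cannot occur.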
  1. Running Spark on m4 instead of m3 on AWS

     Stack Overflow | 6 months ago | user6742737
     java.lang.IllegalArgumentException: Unknown/unsupported param List(--executor-cores, , --files, s3://pythonpicode/PythonPi.py, --primary-py-file, PythonPi.py, --class, org.apache.spark.deploy.PythonRunner)
  2. output file not saved on my bucket, in AWS s3

     Stack Overflow | 1 month ago | Ray.R.Chua
     java.lang.IllegalArgumentException: Required executor memory (20480+2048 MB) is above the max threshold (11520 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
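This related failure is the opposite situation: the arguments parse, but the requested executor memory (20480 MB) plus YARN's overhead (2048 MB) exceeds the per-container maximum (11520 MB). Either raise yarn.scheduler.maximum-allocation-mb / yarn.nodemanager.resource.memory-mb on the cluster, or shrink the request so memory plus overhead fits under the threshold. A hedged sketch of the latter; job.py is a placeholder, and 9g is chosen so that 9216 MB plus the default ~10% overhead (~922 MB) stays below 11520 MB:

    # Hedged sketch: request less executor memory than the YARN container limit
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --executor-memory 9g \
      job.py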

    Root Cause Analysis

    java.lang.IllegalArgumentException

      Unknown/unsupported param List(--executor-cores, , --files, s3://pythonpicode/PythonPi.py, --primary-py-file, PythonPi.py, --class, org.apache.spark.deploy.PythonRunner)

      at org.apache.spark.deploy.yarn.ClientArguments.parseArgs(ClientArguments.scala:228)
      at org.apache.spark.deploy.yarn.ClientArguments.<init>(ClientArguments.scala:56)
      at org.apache.spark.deploy.yarn.Client$.main(Client.scala:646)
      at org.apache.spark.deploy.yarn.Client.main(Client.scala)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
      at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
      at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
      at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
      at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
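Reading the trace bottom-up: spark-submit (SparkSubmit.main) reflectively invokes yarn.Client.main, which hands the raw option list to ClientArguments.parseArgs; any token it cannot match to a known option, including an empty string, raises the IllegalArgumentException shown above. When the job is submitted as an EMR step, the step's Args list is what eventually reaches parseArgs, so an empty element there reproduces the failure exactly. A hedged sketch of an EMR step with fully populated arguments; the cluster id is a placeholder and, apart from the S3 path from the error, the values are illustrative:

    # Hedged sketch: every option in Args carries an explicit value
    aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
      --steps Type=Spark,Name=PythonPi,ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--num-executors,2,--executor-cores,1,--executor-memory,1g,s3://pythonpicode/PythonPi.py]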