java.lang.IllegalArgumentException: Unknown/unsupported param List(--executor-cores, , --files, s3://pythonpicode/PythonPi.py, --primary-py-file, PythonPi.py, --class, org.apache.spark.deploy.PythonRunner)
Usage: org.apache.spark.deploy.yarn.Client [options]
Options:
  --jar JAR_PATH           Path to your application's JAR file (required in yarn-cluster mode)
  --class CLASS_NAME       Name of your application's main class (required)
  --primary-py-file        A main Python file
  --arg ARG                Argument to be passed to your application's main class.
                           Multiple invocations are possible, each will be passed in order.
  --num-executors NUM      Number of executors to start (Default: 2)
  --executor-cores NUM     Number of cores per executor (Default: 1).
  --driver-memory MEM      Memory for driver (e.g. 1000M, 2G) (Default: 512 Mb)
  --driver-cores NUM       Number of cores used by the driver (Default: 1).
  --executor-memory MEM    Memory per executor (e.g. 1000M, 2G) (Default: 1G)
  --name NAME              The name of your application (Default: Spark)
  --queue QUEUE            The hadoop queue to use for allocation requests (Default: 'default')
  --addJars jars           Comma separated list of local jars that want SparkContext.addJar to work with.
  --py-files PY_FILES      Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
  --files files            Comma separated list of files to be distributed with the job.
  --archives archives      Comma separated list of archives to be distributed with the job.

Stack Overflow | user6742737 | 4 months ago
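
The param list in the message shows --executor-cores followed by an empty value, and that blank argument is what org.apache.spark.deploy.yarn.Client rejects. One commonly reported cause on EMR is maximizeResourceAllocation failing to compute defaults for a newer instance type (m4 rather than m3), which leaves spark.executor.cores empty in spark-defaults. A minimal sketch of a submission wrapper that always passes explicit values, so an empty generated default never reaches the YARN client (the resource sizes are assumptions, not values from the report):

    import subprocess

    # Submit the PySpark job with explicit resource flags; a blank
    # --executor-cores value is what produces the exception above.
    cmd = [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", "cluster",
        "--executor-cores", "2",    # assumed sizing; never leave this empty
        "--executor-memory", "4g",  # assumed sizing
        "s3://pythonpicode/PythonPi.py",
    ]
    subprocess.check_call(cmd)

Pinning spark.executor.cores in spark-defaults.conf has the same effect; a cluster-creation sketch follows the root cause analysis below. The similar reports below show the same YARN client validation failing for related reasons.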
  1. Running Spark on m4 instead of m3 on AWS

    Stack Overflow | 4 months ago | user6742737
    java.lang.IllegalArgumentException: Unknown/unsupported param List(--executor-cores, , --files, s3://pythonpicode/PythonPi.py, --primary-py-file, PythonPi.py, --class, org.apache.spark.deploy.PythonRunner) . . .
  2. GitHub comment 209#98161267

    GitHub | 2 years ago | dsdinter
    java.lang.IllegalArgumentException: You must specify at least 1 executor! Usage: org.apache.spark.deploy.yarn.Client [options] . . .
  3. amazon emr spark submission from S3 not working

    Stack Overflow | 5 months ago | Gopala
    java.lang.IllegalArgumentException: Unknown/unsupported param List(--executor-memory, 0.5g, --executor-cores, 2, --primary-py-file, s3://<mybucketname>/mypythonfile.py, --class, org.apache.spark.deploy.PythonRunner) Usage: org.apache.spark.deploy.yarn.Client [options] Options: --jar JAR_PATH Path to your application's JAR file (required in yarn-cluster mode) . . .
  4. Problem running SparkBWA

    GitHub | 5 months ago | Asmaa-Ali
    java.lang.IllegalArgumentException: Required executor memory (1500+8704 MB) is above the max threshold (8192 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
    (An arithmetic check of these limits follows the list.)
  5. Solr 4: disable compression on stored fields: how to actually configure custom codec?

    Stack Overflow | 3 years ago | Shivan Dragon
    java.lang.IllegalArgumentException: A SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'UncompressedStorageCodec' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: [Pulsing41, SimpleText, Memory, BloomFilter, Direct, Lucene40, Lucene41]
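
For the memory-threshold variant in item 4 (Problem running SparkBWA), YARN validates executor memory plus spark.yarn.executor.memoryOverhead against yarn.scheduler.maximum-allocation-mb before it will grant a container. A quick check with the numbers from that report:

    # Numbers taken from the SparkBWA report above.
    executor_memory_mb = 1500   # spark.executor.memory
    overhead_mb = 8704          # spark.yarn.executor.memoryOverhead
    yarn_max_mb = 8192          # yarn.scheduler.maximum-allocation-mb

    requested_mb = executor_memory_mb + overhead_mb  # 10204 MB
    if requested_mb > yarn_max_mb:
        print(f"Required executor memory ({executor_memory_mb}+{overhead_mb} MB) "
              f"is above the max threshold ({yarn_max_mb} MB)")

The remedy is either to shrink the overhead or to raise yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb on the cluster, exactly as the message suggests.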


    Root Cause Analysis

    1. java.lang.IllegalArgumentException

      Unknown/unsupported param List(--executor-cores, , --files, s3://pythonpicode/PythonPi.py, --primary-py-file, PythonPi.py, --class, org.apache.spark.deploy.PythonRunner) Usage: org.apache.spark.deploy.yarn.Client [options] . . .

      at org.apache.spark.deploy.yarn.ClientArguments.parseArgs()
    2. Spark Project YARN Stable API
      Client.main
      1. org.apache.spark.deploy.yarn.ClientArguments.parseArgs(ClientArguments.scala:228)
      2. org.apache.spark.deploy.yarn.ClientArguments.<init>(ClientArguments.scala:56)
      3. org.apache.spark.deploy.yarn.Client$.main(Client.scala:646)
      4. org.apache.spark.deploy.yarn.Client.main(Client.scala)
    3. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:606)
    4. Spark
      SparkSubmit.main
      1. org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
      2. org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
      3. org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
      4. org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
      5. org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
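
If, as in the m4-versus-m3 report above, the blank --executor-cores comes from EMR's maximizeResourceAllocation not recognizing the instance type, one workaround is to pin the Spark defaults when the cluster is created. A hedged sketch of such a configuration, written as the Python structure handed to the EMR API (the "spark-defaults" classification is EMR's; the sizing values are assumptions):

    # Pin spark-defaults at cluster creation so maximizeResourceAllocation
    # cannot emit an empty spark.executor.cores for an unrecognized type.
    SPARK_DEFAULTS = [
        {
            "Classification": "spark-defaults",
            "Properties": {
                "spark.executor.cores": "2",    # assumed: cores per executor
                "spark.executor.memory": "4g",  # assumed: memory per executor
            },
        }
    ]
    # Pass as Configurations=SPARK_DEFAULTS to boto3's emr run_job_flow,
    # or as the equivalent --configurations JSON to `aws emr create-cluster`.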