java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).

GitHub | Gauravshah | 4 months ago
Similar reports:

  1. `aws_iam_role` not being used (a sketch of this option follows the list)

     GitHub | 4 months ago | Gauravshah
     java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
  2. IAM Role not taken into account

     GitHub | 2 months ago | borisclemencon
     java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties
  3. GitHub comment 33#70954139

     GitHub | 2 years ago | croblee
     com.facebook.presto.spi.PrestoException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
  4. Spark submit cluster mode from s3

     Stack Overflow | 5 months ago | ashic
     java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
  5. Apache Spark User List - Problem reading from S3 in standalone application

     nabble.com | 1 year ago
     java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
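The first two reports above describe spark-redshift ignoring an IAM role. For reference, a minimal sketch of how the `aws_iam_role` option is passed to the spark-redshift writer; the option names come from the spark-redshift README, and every value below (JDBC URL, table, tempdir, role ARN) is a placeholder, not something taken from these reports:

    // Minimal sketch, assuming spark-redshift 2.x and an existing DataFrame `df`.
    // All option values are placeholders.
    df.write
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://example-host:5439/dev?user=alice&password=secret")
      .option("dbtable", "my_table")
      .option("tempdir", "s3n://my-bucket/redshift-temp/")
      // Authorize Redshift's COPY/UNLOAD through an IAM role instead of explicit keys.
      .option("aws_iam_role", "arn:aws:iam::123456789012:role/redshift-copy-role")
      .mode("error")
      .save()

Both IAM-role reports say that, in the versions in question, this option was not honored and the writer still went looking for explicit keys, which is consistent with the trace under Root Cause Analysis below.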


    Root Cause Analysis

    java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
        at com.databricks.spark.redshift.S3Credentials.initialize(S3Credentials.java:67)
        at com.databricks.spark.redshift.AWSCredentialsUtils$.com$databricks$spark$redshift$AWSCredentialsUtils$$loadFromURI(AWSCredentialsUtils.scala:60)
        at com.databricks.spark.redshift.AWSCredentialsUtils$$anonfun$load$1.apply(AWSCredentialsUtils.scala:48)
        at com.databricks.spark.redshift.AWSCredentialsUtils$$anonfun$load$1.apply(AWSCredentialsUtils.scala:48)
        at scala.Option.getOrElse(Option.scala:121)
        at com.databricks.spark.redshift.AWSCredentialsUtils$.load(AWSCredentialsUtils.scala:48)
        at com.databricks.spark.redshift.RedshiftWriter.saveToRedshift(RedshiftWriter.scala:338)
        at com.databricks.spark.redshift.DefaultSource.createRelation(DefaultSource.scala:106)
        at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:429)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
        at com.poshmark.spark.helpers.Redshift$.writeDF(Redshift.scala:74)
        at com.poshmark.spark.streaming.RedshiftBasin$$anonfun$kinesisBasinFunction$1.apply(RedshiftBasin.scala:43)
        at com.poshmark.spark.streaming.RedshiftBasin$$anonfun$kinesisBasinFunction$1.apply(RedshiftBasin.scala:15)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:245)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:245)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:245)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:244)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
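For completeness, a minimal sketch of the two mechanisms the exception message itself names, assuming a live SparkContext `sc`. The keys are placeholders; real credentials belong in a credentials provider or secrets store, not in source code:

    // Option 1: set the Hadoop properties named in the message.
    // (For s3n:// URLs use fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey.)
    sc.hadoopConfiguration.set("fs.s3.awsAccessKeyId", "AKIAXXXXXXXXXXXXXXXX")
    sc.hadoopConfiguration.set("fs.s3.awsSecretAccessKey", "placeholder-secret-key")

    // Option 2: embed the keys as the username/password of the s3 URL itself.
    val lines = sc.textFile("s3://AKIAXXXXXXXXXXXXXXXX:placeholder-secret-key@my-bucket/path")

A secret key containing a slash can break the URL form, which is one more reason to prefer the configuration properties or an IAM role.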