java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).

crunch-user | Yan Yang | 1 year ago
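The exception message itself names the fix: the Hadoop configuration used by the job must carry the two fs.s3n credential properties (or the key and secret must be embedded in the s3n URL itself, which leaks them into logs and is best avoided). Below is a minimal sketch of setting those properties on a plain Hadoop Configuration before touching the filesystem; the key values and bucket path are placeholders, and real credentials should come from a secure source rather than source code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3nCredentialsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // The two properties named in the exception message.
            // Placeholder values: load real keys from a secure source.
            conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY_ID");
            conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY");

            // With the credentials present, Path.getFileSystem() (the call
            // that fails in the trace below) can initialize NativeS3FileSystem
            // instead of throwing IllegalArgumentException from S3Credentials.
            Path out = new Path("s3n://your-bucket/output");  // placeholder path
            FileSystem fs = out.getFileSystem(conf);
            System.out.println("Filesystem initialized: " + fs.getUri());
        }
    }

With a Crunch pipeline, the same two properties can likely be set on the Configuration returned by pipeline.getConfiguration() before the job is run.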
  1. Sparkpipeline hit credentials issue when trying to write to S3

     crunch-user | 1 year ago | Yan Yang
     java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).
  2. Re: Sparkpipeline hit credentials issue when trying to write to S3

     crunch-user | 1 year ago | Jeff Quinn
     java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).
  3. Re: Sparkpipeline hit credentials issue when trying to write to S3

     crunch-user | 1 year ago | Yan Yang
     java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).
Root Cause Analysis

  1. java.lang.IllegalArgumentException

    AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).

    at org.apache.hadoop.fs.s3.S3Credentials.initialize()
  2. Hadoop
    Jets3tNativeFileSystemStore.initialize
    1. org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:70)
    2. org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.initialize(Jets3tNativeFileSystemStore.java:80)
    2 frames
  3. Java RT
    Method.invoke
    1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    4. java.lang.reflect.Method.invoke(Method.java:606)
    4 frames
  4. Hadoop
    Path.getFileSystem
    1. org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    2. org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    3. org.apache.hadoop.fs.s3native.$Proxy9.initialize(Unknown Source)
    4. org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:326)
    5. org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2644)
    6. org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
    7. org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2678)
    8. org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2660)
    9. org.apache.hadoop.fs.FileSystem.get(FileSystem.java:374)
    10. org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    10 frames
  5. Apache Avro Mapred API
    FsInput.<init>
    1. org.apache.avro.mapred.FsInput.<init>(FsInput.java:37)
    1 frame
  6. org.apache.crunch
    CrunchRecordReader.initialize
    1. org.apache.crunch.types.avro.AvroRecordReader.initialize(AvroRecordReader.java:54)
    2. org.apache.crunch.impl.mr.run.CrunchRecordReader.initialize(CrunchRecordReader.java:150)
    2 frames
  7. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:153)
    2. org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:124)
    3. org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:65)
    4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    5. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    6. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    8. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    9. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    10. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    11. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    12. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    13. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    14. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    15. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    16. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    17. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    18. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    19. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    20. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    21. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    22. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    23. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    24. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    25. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    26. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    27. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    28. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    29. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    30. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    31. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    32. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    33. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    34. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    35. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    36. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    37. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    38. org.apache.spark.scheduler.Task.run(Task.scala:88)
    39. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    39 frames
  8. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
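Groups 7 and 8 above show the exception being raised inside Executor$TaskRunner, i.e. on a Spark executor rather than on the driver, so the credentials have to reach the Hadoop configuration that Spark hands to each task. A sketch of one way to do that through the SparkContext; the app name and key values are placeholders:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkS3nCredentialsSketch {
        public static void main(String[] args) {
            JavaSparkContext jsc = new JavaSparkContext(
                    new SparkConf().setAppName("s3n-credentials-sketch"));  // placeholder name

            // These settings land in the job's Hadoop configuration, which
            // Spark serializes into each task, so executor-side code such as
            // CrunchRecordReader sees the credentials as well.
            jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY_ID");
            jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY");

            // ... build and run the SparkPipeline / RDD job here ...

            jsc.stop();
        }
    }

Alternatively, prefixing the property names with spark.hadoop. (e.g. spark.hadoop.fs.s3n.awsAccessKeyId) in the SparkConf or in spark-defaults should have the same effect, since Spark copies spark.hadoop.* entries into the Hadoop configuration it distributes.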