org.apache.hadoop.fs.s3a.AWSS3IOException: purging multipart uploads on landsat-pds: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B), S3 Extended Request ID: uE4pbbmpxi8Nh7rycS6GfIEi9UH/SWmJfGtM9IeKvRyBPZp/hN7DbPyz272eynz3PEMM2azlhjE=: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B)

Apache's JIRA Issue Tracker | Steve Loughran | 10 months ago
Here are the best solutions we found on the Internet.

  1.

    S3A doesn't authenticate with S3 Frankfurt (eu-central-1), which only supports the V4 signing API. There are JVM options which should enable V4 signing, but even they don't appear to be enough: the S3A client also has to change the endpoint it authenticates against, from the generic "AWS S3" endpoint to the Frankfurt-specific one, as sketched below.
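
    A minimal sketch of that endpoint change, assuming a Hadoop release (2.7+) whose S3A connector supports the fs.s3a.endpoint property; the bucket name my-frankfurt-bucket is a placeholder, and the "JVM options" alluded to above are presumably the AWS SDK's -Dcom.amazonaws.services.s3.enableV4=true system property, which turns on V4 signing but does not by itself repoint the client:

        import java.net.URI;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class S3AFrankfurtEndpoint {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Point S3A at the region-specific endpoint instead of the generic
                // s3.amazonaws.com one; eu-central-1 only accepts V4-signed requests,
                // and V4 signing needs the region-qualified endpoint.
                conf.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com");
                // Placeholder bucket; substitute one that actually lives in eu-central-1.
                FileSystem fs = FileSystem.newInstance(
                        URI.create("s3a://my-frankfurt-bucket/"), conf);
                try {
                    System.out.println("Root status: " + fs.getFileStatus(new Path("/")));
                } finally {
                    fs.close();
                }
            }
        }

    Setting the same fs.s3a.endpoint key in core-site.xml achieves the same effect; the key point is that the endpoint itself, not just the signing algorithm, has to change.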


    Root Cause Analysis

    org.apache.hadoop.fs.s3a.AWSS3IOException: purging multipart uploads on landsat-pds: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B), S3 Extended Request ID: uE4pbbmpxi8Nh7rycS6GfIEi9UH/SWmJfGtM9IeKvRyBPZp/hN7DbPyz272eynz3PEMM2azlhjE=: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B)
        at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
        at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3738)
        at com.amazonaws.services.s3.AmazonS3Client.listMultipartUploads(AmazonS3Client.java:2796)
        at com.amazonaws.services.s3.transfer.TransferManager.abortMultipartUploads(TransferManager.java:1217)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initMultipartUploads(S3AFileSystem.java:454)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:289)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2715)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:96)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2749)
        at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2737)
        at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:430)
        at org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance.bindS3aFS(TestS3AInputStreamPerformance.java:93)
        at org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance.openFS(TestS3AInputStreamPerformance.java:81)