org.apache.hadoop.fs.s3a.AWSS3IOException: purging multipart uploads on landsat-pds: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B), S3 Extended Request ID: uE4pbbmpxi8Nh7rycS6GfIEi9UH/SWmJfGtM9IeKvRyBPZp/hN7DbPyz272eynz3PEMM2azlhjE=: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B)

Apache's JIRA Issue Tracker | Steve Loughran | 5 months ago
  1.

    S3A doesn't authenticate with S3 Frankfurt, an installation that only supports the V4 signing API. There are JVM options which should enable V4 signing, but even they don't appear to be enough. It appears that we have to allow the s3a client to change the endpoint with which it authenticates from the generic "AWS S3" endpoint to a Frankfurt-specific one (a configuration sketch follows the results below).

    Apache's JIRA Issue Tracker | 5 months ago | Steve Loughran
    org.apache.hadoop.fs.s3a.AWSS3IOException: purging multipart uploads on landsat-pds: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B), S3 Extended Request ID: uE4pbbmpxi8Nh7rycS6GfIEi9UH/SWmJfGtM9IeKvRyBPZp/hN7DbPyz272eynz3PEMM2azlhjE=: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B)
  2.

    It'd be easier to work out what's happening when s3a calls against different endpoints fail if the 301 exceptions forwarded up included the URL from the response header which (presumably) says where things moved to (a sketch of pulling that detail out of the SDK exception follows the stack trace below).

    Apache's JIRA Issue Tracker | 5 months ago | Steve Loughran
    org.apache.hadoop.fs.s3a.AWSS3IOException: getFileStatus on s3a://landsat-pds/scene_list.gz: com.amazonaws.services.s3.model.AmazonS3Exception: Moved Permanently (Service: Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: 68FA0CDF257E8E88), S3 Extended Request ID: ZrrcD+fiQ/H3B945SNuLm69beK1rfa/CcL0Zwg6QN+837odqsH067OM2ctBZ8qRlaUqnQ3brl7U=: Moved Permanently (Service: Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: 68FA0CDF257E8E88)
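
A minimal sketch of the endpoint change described in the first result: pointing S3A at the Frankfurt regional endpoint via the fs.s3a.endpoint property, so that requests are signed for eu-central-1 rather than the generic endpoint that answers with the 301 above. This assumes the Hadoop S3A connector shown in the stack trace below; the bucket name is a placeholder.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FrankfurtEndpointSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Sign requests against the Frankfurt regional endpoint (eu-central-1,
        // a V4-only region) instead of the generic s3.amazonaws.com endpoint
        // that answers with 301 PermanentRedirect.
        conf.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com");

        // "my-frankfurt-bucket" is a placeholder; use a bucket that actually
        // lives in eu-central-1.
        FileSystem fs = FileSystem.newInstance(URI.create("s3a://my-frankfurt-bucket/"), conf);
        try {
          System.out.println(fs.getFileStatus(new Path("/")));
        } finally {
          fs.close();
        }
      }
    }
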

    Root Cause Analysis

    1. org.apache.hadoop.fs.s3a.AWSS3IOException

      purging multipart uploads on landsat-pds: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B), S3 Extended Request ID: uE4pbbmpxi8Nh7rycS6GfIEi9UH/SWmJfGtM9IeKvRyBPZp/hN7DbPyz272eynz3PEMM2azlhjE=: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B)

      at com.amazonaws.http.AmazonHttpClient.handleErrorResponse()
    2. AWS SDK for Java - Core
      AmazonHttpClient.execute
      1. com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
      2. com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
      3. com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
      4. com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
    3. AWS Java SDK for Amazon S3
      TransferManager.abortMultipartUploads
      1. com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
      2. com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3738)
      3. com.amazonaws.services.s3.AmazonS3Client.listMultipartUploads(AmazonS3Client.java:2796)
      4. com.amazonaws.services.s3.transfer.TransferManager.abortMultipartUploads(TransferManager.java:1217)
    4. Apache Hadoop Amazon Web Services support
      S3AFileSystem.initialize
      1. org.apache.hadoop.fs.s3a.S3AFileSystem.initMultipartUploads(S3AFileSystem.java:454)
      2. org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:289)
    5. Hadoop
      FileSystem.newInstance
      1. org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2715)
      2. org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:96)
      3. org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2749)
      4. org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2737)
      5. org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:430)
    6. org.apache.hadoop
      TestS3AInputStreamPerformance.openFS
      1. org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance.bindS3aFS(TestS3AInputStreamPerformance.java:93)
      2. org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance.openFS(TestS3AInputStreamPerformance.java:81)
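
The second result asks for the redirect target to be surfaced from the 301. A hedged sketch of pulling that detail out of the SDK exception itself, assuming AWS SDK for Java 1.11+ (the client builder may not exist in the older release shown in the trace), credentials already configured, and landsat-pds used only as an illustrative bucket that lives in another region:

    import java.util.Map;

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.AmazonS3Exception;

    public class RedirectDiagnosticsSketch {
      public static void main(String[] args) {
        // Deliberately talk to a bucket through the "wrong" regional endpoint
        // so that S3 may answer with a 301 PermanentRedirect.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
            .withRegion("us-east-1")
            .build();
        try {
          s3.listObjects("landsat-pds");
        } catch (AmazonS3Exception e) {
          // Status, error code and request IDs are always on the exception.
          System.err.printf("status=%d code=%s requestId=%s%n",
              e.getStatusCode(), e.getErrorCode(), e.getRequestId());
          // A PermanentRedirect error body normally carries an <Endpoint> element;
          // when the SDK has parsed it, it appears in the additional-details map.
          Map<String, String> details = e.getAdditionalDetails();
          if (details != null && details.containsKey("Endpoint")) {
            System.err.println("redirect endpoint: " + details.get("Endpoint"));
          }
        }
      }
    }

In the trace above the 301 is raised while S3AFileSystem.initialize() purges old multipart uploads (TransferManager.abortMultipartUploads), so fixing the endpoint configuration (as in the earlier sketch) is what unblocks filesystem creation; the fs.s3a.multipart.purge option only controls whether that purge step runs at startup.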