org.apache.hadoop.fs.s3a.AwsS3IOException: purging multipart uploads on cnauroth-test-aws-s3a: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 8FE330E9D3BFA908), S3 Extended Request ID: i38YD4/pNstx3Wjddju8/+fTKwFuHSBDIh5fHxn9HKtye2Lr1USYeHALVbvJoEa1EtMP4xz3wHA=: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 8FE330E9D3BFA908)

Apache's JIRA Issue Tracker | Steve Loughran | 7 months ago
    Issues - [jira] [Commented] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

    nabble.com | 6 months ago
    S3A failures that originate inside the AWS library surface as AmazonClientException derivatives rather than IOExceptions. Because the Amazon exceptions are runtime exceptions, any code that catches IOEs for error handling breaks. The fix is to catch and wrap them; the hard part is wrapping them in meaningful exceptions rather than a generic IOException. Furthermore, anyone who has been catching the AWS exceptions directly is going to be disappointed, which means fixing this situation could be considered "incompatible", but only for code that makes assumptions about the underlying FS and the exceptions it raises.
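    A minimal sketch of the catch-and-wrap translation described above, assuming a stand-in AmazonClientException class so the snippet compiles without the AWS SDK on the classpath; the method name translate and the use of a plain IOException are illustrative, not the actual HADOOP-13130 patch:

    ```java
    import java.io.IOException;

    // Stand-in for com.amazonaws.AmazonClientException (a RuntimeException
    // in the AWS SDK), so this sketch is self-contained.
    class AmazonClientException extends RuntimeException {
        AmazonClientException(String message) { super(message); }
    }

    public final class S3AExceptionSketch {

        // Wrap an AWS SDK runtime exception into a checked IOException so
        // callers catching IOEs see the failure. A real translation layer
        // would map specific SDK failures (auth errors, 301 redirects,
        // throttling) to more meaningful IOException subclasses instead of
        // a generic one.
        static IOException translate(String operation, AmazonClientException e) {
            return new IOException(operation + ": " + e.getMessage(), e);
        }

        // Example: run an S3 operation, converting runtime failures to IOEs.
        static void purgeMultipartUploads() throws IOException {
            try {
                // ... call into the AWS SDK here; this sketch just simulates
                // the 301 failure from the report ...
                throw new AmazonClientException(
                        "Status Code: 301; Error Code: PermanentRedirect");
            } catch (AmazonClientException e) {
                throw translate("purging multipart uploads", e);
            }
        }
    }
    ```

    With this pattern, callers keep a single catch (IOException e) path and can still reach the original SDK exception via getCause() when they need the AWS-specific detail.
    
    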


    Root Cause Analysis

    1. org.apache.hadoop.fs.s3a.AwsS3IOException

      purging multipart uploads on cnauroth-test-aws-s3a: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 8FE330E9D3BFA908), S3 Extended Request ID: i38YD4/pNstx3Wjddju8/+fTKwFuHSBDIh5fHxn9HKtye2Lr1USYeHALVbvJoEa1EtMP4xz3wHA=: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 8FE330E9D3BFA908)

      at com.amazonaws.http.AmazonHttpClient.handleErrorResponse()
    2. AWS SDK for Java - Core
      AmazonHttpClient.execute
      1. com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
      2. com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
      3. com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
      4. com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
    3. AWS Java SDK for Amazon S3
      TransferManager.abortMultipartUploads
      1. com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
      2. com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3738)
      3. com.amazonaws.services.s3.AmazonS3Client.listMultipartUploads(AmazonS3Client.java:2796)
      4. com.amazonaws.services.s3.transfer.TransferManager.abortMultipartUploads(TransferManager.java:1217)
    4. Apache Hadoop Amazon Web Services support
      TestS3AConfiguration.shouldBeAbleToSwitchOnS3PathStyleAccessViaConfigProperty
      1. org.apache.hadoop.fs.s3a.S3AFileSystem.initMultipartUploads(S3AFileSystem.java:417)
      2. org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:274)
      3. org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileSystem(S3ATestUtils.java:53)
      4. org.apache.hadoop.fs.s3a.TestS3AConfiguration.shouldBeAbleToSwitchOnS3PathStyleAccessViaConfigProperty(TestS3AConfiguration.java:375)
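    The 301 PermanentRedirect in the trace above typically means the bucket lives in a region other than the one the client's default endpoint serves. A hedged fix, using eu-west-1 purely as an example region, is to point fs.s3a.endpoint at the bucket's regional endpoint in core-site.xml; and, as the failing test's name suggests, fs.s3a.path.style.access can additionally force path-style addressing:

    ```xml
    <property>
      <name>fs.s3a.endpoint</name>
      <!-- Regional S3 endpoint for the bucket; eu-west-1 is only an example. -->
      <value>s3.eu-west-1.amazonaws.com</value>
    </property>
    <property>
      <name>fs.s3a.path.style.access</name>
      <!-- Use path-style URLs (endpoint/bucket/key) instead of
           virtual-hosted-style (bucket.endpoint/key). -->
      <value>true</value>
    </property>
    ```

    Whether the endpoint alone resolves the redirect depends on where the test bucket actually resides, which the report does not state.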