[ https://issues.apache.org/jira/browse/HADOOP-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847848#comment-17847848 ]
Steve Loughran commented on HADOOP-18990:
-----------------------------------------

Noticed that an AWS product (Greengrass) has implemented its recovery for this through special handling of 400 status codes plus a scan of the error text. Ugly, but clearly something we will have to consider too.

> S3A: retry on credential expiry
> -------------------------------
>
>                 Key: HADOOP-18990
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18990
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Priority: Major
>
> Reported in AWS SDK https://github.com/aws/aws-sdk-java-v2/issues/3408
> bq. In the RetryableStage execute method, the "AwsCredentials" does not attempt
> to renew if it has expired. Therefore, if a method is called with an existing
> credential that is expiring soon, the number of retries is less than intended
> due to the expiration of the credential.
> The stack from this report doesn't show any error detail we can use to
> identify the 400 exception as something we should be retrying on. This could
> be due to the logging, or it could actually hold. We'd have to generate some
> session credentials, let them expire, and then see how hadoop fs commands
> fail. Something to do by hand, as an STS test to do this is probably slow.
> *unless we expire all session credentials of a given role?* Could be good,
> but would be traumatic for other test runs.
> {code}
> software.amazon.awssdk.services.s3.model.S3Exception: The provided token has
> expired. (Service: S3, Status Code: 400, Request ID: 3YWKVBNJPNTXPJX2,
> Extended Request ID:
> GkR56xA0r/Ek7zqQdB2ZdP3wqMMhf49HH7hc5N2TAIu47J3HEk6yvSgVNbX7ADuHDy/Irhr2rPQ=)
> {code}

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
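As a minimal sketch of the "400 + error text scan" classification discussed above: the class and method names here (`ExpiredTokenClassifier`, `isExpiredTokenError`) are hypothetical, not S3A or Greengrass code, and the matched strings are taken only from the stack trace quoted in this issue.

```java
import java.util.Locale;

// Hypothetical sketch, not the S3A retry policy: decide whether a
// service error looks like a credential-expiry 400 that is worth
// retrying after refreshing credentials, rather than failing fast.
public class ExpiredTokenClassifier {

    /**
     * @param statusCode HTTP status code from the service exception
     * @param message    error message text from the service exception
     * @return true if this looks like an expired-credential 400
     */
    public static boolean isExpiredTokenError(int statusCode, String message) {
        if (statusCode != 400 || message == null) {
            return false;
        }
        // A 400 is normally treated as non-retryable, so the error text
        // must be scanned. S3 reports "The provided token has expired."
        String lower = message.toLowerCase(Locale.ROOT);
        return lower.contains("token has expired")
            || lower.contains("expiredtoken");
    }

    public static void main(String[] args) {
        String s3Message = "The provided token has expired. "
            + "(Service: S3, Status Code: 400)";
        System.out.println(isExpiredTokenError(400, s3Message));   // true
        System.out.println(isExpiredTokenError(400, "Bad Request")); // false
    }
}
```

A real implementation would sit inside the retry policy's exception translation, refresh the credential provider before the retry, and cap the number of refresh attempts.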