steveloughran commented on code in PR #7802:
URL: https://github.com/apache/hadoop/pull/7802#discussion_r2207506503


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AWSCredentialProviderList.java:
##########
@@ -197,7 +197,17 @@ public AwsCredentials resolveCredentials() {
       } catch (SdkException e) {
         lastException = e;
         LOG.debug("No credentials provided by {}: {}",
-            provider, e.toString(), e);
+            provider, e);
+      } catch (Exception e) {
+        // convert any other exception into SDKException.
+        // This is required because some credential provider like
+        // WebIdentityTokenFileCredentialsProvider might throw
+        // exceptions other than SdkException.
+        if (e.getMessage() != null) {
+          lastException = SdkException.create(e.getMessage(), e);
+        }
+        LOG.debug("No credentials provided by {}: {}",

Review Comment:
   slf4j always converts the last argument to a stack trace if it's a throwable, which is why there's that
   
   ```
           LOG.debug("No credentials provided by {}: {}",
               provider, e.toString(), e);
   ```
   
   sequence. 
   
   If you do want to drop the toString value, losing it from the log message, then you should cut the no-longer-needed {}
   ```
           LOG.debug("No credentials provided by {}",
               provider, e);
   ```
   But since I do want that string, it's best to leave it as it was when I wrote it. Nested traces get so long that the lower levels can get lost.
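   
   To spell that out, here's a minimal, self-contained sketch of the behaviour (assuming an slf4j 1.6+ binding such as logback and DEBUG enabled for this logger; the provider string and the IllegalStateException below are just stand-ins for the real provider object and SdkException):
   
   ```
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class Slf4jThrowableDemo {
     private static final Logger LOG = LoggerFactory.getLogger(Slf4jThrowableDemo.class);
   
     public static void main(String[] args) {
       // stand-ins for the credential provider and the SdkException
       String provider = "WebIdentityTokenFileCredentialsProvider";
       Exception e = new IllegalStateException("web identity token file not found");
   
       // trailing throwable is taken as the exception, not as a placeholder argument:
       // the second {} is left unfilled in the message and the full stack trace follows.
       LOG.debug("No credentials provided by {}: {}", provider, e);
   
       // original form: e.toString() fills the second {}, keeping the exception summary
       // on the message line; the trailing e still produces the stack trace.
       LOG.debug("No credentials provided by {}: {}", provider, e.toString(), e);
   
       // if the toString value really isn't wanted, drop the second {} as well.
       LOG.debug("No credentials provided by {}", provider, e);
     }
   }
   ```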
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
