bilaharith commented on a change in pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
URL: https://github.com/apache/hadoop/pull/1872#discussion_r389902365
 
 

 ##########
 File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
 ##########
 @@ -408,17 +409,29 @@ private static AzureADToken parseTokenFromStream(InputStream httpResponseStream)
           if (fieldName.equals("access_token")) {
             token.setAccessToken(fieldValue);
           }
+
           if (fieldName.equals("expires_in")) {
-            expiryPeriod = Integer.parseInt(fieldValue);
+            expiryPeriodInSecs = Integer.parseInt(fieldValue);
+          }
+
+          if (fieldName.equals("expires_on")) {
+            expiresOnInSecs = Long.parseLong(fieldValue);
           }
+
         }
         jp.nextToken();
       }
       jp.close();
-      long expiry = System.currentTimeMillis();
-      expiry = expiry + expiryPeriod * 1000L; // convert expiryPeriod to milliseconds and add
-      token.setExpiry(new Date(expiry));
-      LOG.debug("AADToken: fetched token with expiry " + token.getExpiry().toString());
+      if (expiresOnInSecs > -1) {
+        token.setExpiry(new Date(expiresOnInSecs * 1000));
+      } else {
+        long expiry = System.currentTimeMillis();
 
 Review comment:
   Though the MSI team confirmed that the expires_in value is faulty, it is still valid for other flows, and the MSI response will definitely contain the expires_in field.
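
   For readers without the surrounding diff context, here is a minimal, self-contained sketch of the fallback being discussed. The class and method names below are illustrative only, not the actual Hadoop AzureADAuthenticator code: the idea is to prefer the absolute expires_on epoch timestamp when the token response carries one, and otherwise keep the original expires_in-based calculation.

       import java.util.Date;

       // Illustrative sketch of the expiry selection; names are hypothetical.
       public class TokenExpirySketch {

         static Date computeExpiry(long expiresOnInSecs, int expiryPeriodInSecs) {
           if (expiresOnInSecs > -1) {
             // "expires_on" is seconds since the Unix epoch; convert to millis.
             return new Date(expiresOnInSecs * 1000L);
           }
           // "expires_in" is a duration in seconds relative to the current time.
           return new Date(System.currentTimeMillis() + expiryPeriodInSecs * 1000L);
         }

         public static void main(String[] args) {
           // MSI-style response that carries expires_on (epoch seconds).
           System.out.println(computeExpiry(1700000000L, -1));
           // Response that only carries expires_in (e.g. 3600 seconds from now).
           System.out.println(computeExpiry(-1L, 3600));
         }
       }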
