nastra commented on code in PR #14767:
URL: https://github.com/apache/iceberg/pull/14767#discussion_r2681206702


##########
core/src/main/java/org/apache/iceberg/rest/RESTTableScan.java:
##########
@@ -151,6 +175,21 @@ private CloseableIterable<FileScanTask> planTableScan(PlanTableScanRequest planT
 
     this.planId = response.planId();
     PlanStatus planStatus = response.planStatus();
+    List<Credential> storageCredentials = response.credentials();

Review Comment:
   > One concrete use case I can think of: by default the catalog vends creds for 2 hours, so if the server wants to clean up the plan's resources (say it cached stuff) after 2 hours, it can do that. But if the client comes back to the catalog after 2 hours for the credential call and doesn't present the context it has, i.e. the plan-id, the server has no clue of when to release the resources.
   
   In that use case I'm assuming we're using the table's FileIO and not e.g. sub-scoped credentials. That would be a similar flow to using normal table credentials, which are then asked to be refreshed after X hours. The server wouldn't have to keep any additional state for such "normal" table credentials if they aren't sub-scoped/tied to a particular planId. That's why I'm not seeing an issue with that use case, and why I'm proposing not to send the planId for "normal" table credentials. Tying those table credentials to a planId when it isn't necessary signals to the server that it should keep them around, which I don't see any reason to do atm as described above, but maybe I'm missing something?
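   To make the proposal concrete, here is a minimal sketch of the client-side rule being suggested. All names here are illustrative, not the actual Iceberg REST API: the point is only that the planId context would accompany a credentials refresh exclusively when the credentials are sub-scoped to a specific plan.
   
   ```java
   // Hypothetical sketch (illustrative names, not Iceberg's real API):
   // attach the planId to a credential refresh only for sub-scoped creds.
   public class CredentialRefreshSketch {
   
     // Returns the planId to send with the refresh request, or null when
     // the credentials are plain table credentials carrying no per-plan
     // server state. Omitting the planId avoids signaling the server to
     // keep plan resources alive just to serve the refresh.
     static String planIdForRefresh(boolean credentialsSubScoped, String planId) {
       return credentialsSubScoped ? planId : null;
     }
   
     public static void main(String[] args) {
       // Sub-scoped credentials: the server needs the plan context.
       System.out.println(planIdForRefresh(true, "plan-123"));
       // "Normal" table credentials: no planId, no server-side state implied.
       System.out.println(planIdForRefresh(false, "plan-123"));
     }
   }
   ```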
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

