mackrorysd commented on a change in pull request #1009: HADOOP-16383. Pass ITtlTimeProvider instance in initialize method in …
URL: https://github.com/apache/hadoop/pull/1009#discussion_r299563868
 
 

 ##########
 File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
 ##########
 @@ -377,11 +377,12 @@ private DynamoDB createDynamoDB(
    * FS via {@link S3AFileSystem#shareCredentials(String)}; this will
    * increment the reference counter of these credentials.
    * @param fs {@code S3AFileSystem} associated with the MetadataStore
+   * @param ttlTimeProvider provider of the current time and the TTL for metadata entries
    * @throws IOException on a failure
    */
   @Override
   @Retries.OnceRaw
-  public void initialize(FileSystem fs) throws IOException {
+  public void initialize(FileSystem fs, ITtlTimeProvider ttlTimeProvider) throws IOException {
 
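For context, a minimal caller-side sketch of the signature under review. This is illustrative only: the TtlTimeProvider construction, the configuration key, the default value, and the wrapper class are assumptions for the example, not code from this PR.

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore;
import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
import org.apache.hadoop.fs.s3a.s3guard.S3Guard;

/** Illustrative only: one way a caller might satisfy the new signature. */
class TtlWiringSketch {
  static MetadataStore initStore(S3AFileSystem fs) throws IOException {
    Configuration conf = fs.getConf();
    // Config key and default below are assumptions, not taken from this diff.
    long ttlMillis = conf.getTimeDuration(
        "fs.s3a.metadatastore.metadata.ttl",
        15 * 60_000L, TimeUnit.MILLISECONDS);
    // TtlTimeProvider/ITtlTimeProvider are assumed to be nested in S3Guard.
    S3Guard.ITtlTimeProvider ttlTimeProvider =
        new S3Guard.TtlTimeProvider(ttlMillis);
    MetadataStore ms = new DynamoDBMetadataStore();
    ms.initialize(fs, ttlTimeProvider);  // the new two-argument signature
    return ms;
  }
}
```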
 Review comment:
   Discussed offline with Gabor. Outcome of that conversation: bindToOwnerFileSystem doesn't exist everywhere, and a context isn't already created outside the context (ha!) of certain operations. But we should create a context earlier, since it holds no state that changes between operations (I actually wonder why we create a new instance for every operation instead of giving the metadata store a permanent context). We need to check that the context is complete enough, because this is called during FS initialization, precisely when the createStoreContext() javadoc warns you to be careful :)
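To make that concrete, here is a rough sketch of the shape being suggested, with the filesystem creating its StoreContext once and handing it to the metadata store at initialization. The initialize(StoreContext) overload and the interface name are hypothetical; StoreContext and createStoreContext() are the existing S3A names the comment refers to (package location assumed).

```java
import java.io.IOException;

import org.apache.hadoop.fs.s3a.impl.StoreContext;

/**
 * Hypothetical shape sketched for discussion only, not the PR's code:
 * the store binds once to a long-lived StoreContext instead of taking a
 * FileSystem plus individual collaborators such as ITtlTimeProvider.
 */
interface ContextBoundMetadataStore {

  /**
   * Bind to a context created once by the owning filesystem. The context
   * holds no per-operation state, so one instance can be retained for
   * the lifetime of the store.
   *
   * @param context context from S3AFileSystem#createStoreContext(); must
   *     already be complete enough at FS-initialization time
   * @throws IOException on a failure
   */
  void initialize(StoreContext context) throws IOException;
}
```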
