[ https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16838753#comment-16838753 ]

Gabor Bota edited comment on HADOOP-16279 at 5/13/19 5:56 PM:
--------------------------------------------------------------

Thank you for reviewing this, [~fabbri]!

With HADOOP-15999 we changed how authoritative mode works a bit. In 
{{S3AFileSystem#innerGetFileStatus}}, if {{allowAuthoritative}} is false, it 
will send a HEAD request - i.e. query S3 for the {{S3AFileStatus}} - and 
compare the mod_time. If the S3 copy is newer, we update the MS and return the 
newer metadata.

With the current issue, I want to add behavior that queries S3 when a defined 
TTL for the metadata has expired in the MS. In my PR I use the same TTL for 
authoritative directories and all other entries. This is a simplification: you 
don't have to deal with two different parameters for expiry.
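
A minimal sketch of the check I have in mind, assuming it lives in 
{{S3AFileSystem}} (the helpers {{ttlTimeProvider}}, {{isExpired}} and 
{{s3GetFileStatus(path)}} are illustrative names here, not the final PR code):

{code:java}
// Sketch only: the entry is served from the MS while fresh, otherwise we go
// back to S3 with a HEAD request and write the result back to the MS.
S3AFileStatus getFileStatusWithTtl(Path path) throws IOException {
  PathMetadata pm = metadataStore.get(path);
  long now = ttlTimeProvider.getNow();
  long ttl = ttlTimeProvider.getMetadataTtl();
  if (pm != null && !isExpired(pm, ttl, now)) {
    return (S3AFileStatus) pm.getFileStatus();  // MS entry is still fresh
  }
  // Entry missing or expired: query S3 and refresh the MS.
  S3AFileStatus s3Status = s3GetFileStatus(path);
  metadataStore.put(new PathMetadata(s3Status));
  return s3Status;
}
{code}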

----

_| why we need more prune() functions added to the MS interface_
That prune is for removing expired entries from the DynamoDB metadata store. 
It uses {{last_updated}} for expiry rather than {{mod_time}}.
The main reason to add this is that for some operations (e.g. directory rename 
with lots of files ([~ste...@apache.org])) performance can improve 
significantly if we remove all expired tombstone entries before the operation. 
Maybe a better solution would be to add an enum parameter to the current 
prune() so each MS can select the internal prune query? Something like the 
sketch below.
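
A sketch of that idea ({{PruneMode}} and the constant names are made up for 
illustration; nothing here is final):

{code:java}
import java.io.Closeable;
import java.io.IOException;

public interface MetadataStore extends Closeable {

  /** Selects which internal prune query the implementation should run. */
  enum PruneMode {
    ALL_BY_MODTIME,            // existing behavior: prune entries on mod_time
    TOMBSTONES_BY_LASTUPDATED  // new: prune expired tombstones on last_updated
  }

  /** Prune entries older than the cutoff, selected by the given mode. */
  void prune(PruneMode pruneMode, long cutoffMillis) throws IOException;

  /** Same, restricted to paths under the given key prefix. */
  void prune(PruneMode pruneMode, long cutoffMillis, String keyPrefix)
      throws IOException;
}
{code}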


_| So the patch made sense except the naming and description of the 
configuration parameter_
The sentence {{MetadataStore should be authoritative to use this configuration 
knob.}} should be removed from the configuration description. It was my 
mistake to leave it there; it was not intended.


_| LocalMetadataStore use of guava Cache meant the work was already done there?_
This is expiry on a different level. I don't use the DB's built-in expiry in 
dynamo, so I think we should do the same for {{LocalMetadataStore}} and not 
rely on the guava cache's expiry for this feature.
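
To illustrate the two levels (sketch only; {{ttlMillis}} stands in for the 
value of {{fs.s3a.s3guard.local.ttl}}):

{code:java}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.fs.Path;

// Guava-level expiry, roughly as LocalMetadataStore already has it:
// entries silently vanish from the local cache after the configured TTL.
Cache<Path, PathMetadata> cache = CacheBuilder.newBuilder()
    .expireAfterAccess(ttlMillis, TimeUnit.MILLISECONDS)
    .build();

// This issue instead checks last_updated at the S3Guard layer, so the same
// TTL logic applies uniformly to every MetadataStore implementation.
{code}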


_| smarter logic that allows you set a policy for handling S3 versus MS 
conflicts_
So basically what you mean is to add a conflict resolution algorithm for when 
an entry is expired? That would be interesting. I can think of three solutions 
(see the sketch after this list):
* skip checking the expiry, so TTL metadata expiry is turned off
* return null on expiry from the MS, so we get the fresh metadata from S3 and 
update the MS
* check whether mod_time is higher in the MS or S3, and return the newer one
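
Roughly, the policy could look like this (a sketch with made-up names; 
{{TtlConflictPolicy}} does not exist anywhere yet):

{code:java}
import org.apache.hadoop.fs.FileStatus;

/** Made-up names, illustration only. */
enum TtlConflictPolicy { NO_EXPIRY, S3_WINS, NEWER_MOD_TIME }

final class TtlConflictResolver {
  /** Pick which metadata to return when the MS entry's TTL has expired. */
  static FileStatus resolve(TtlConflictPolicy policy, boolean expired,
      FileStatus msStatus, FileStatus s3Status) {
    switch (policy) {
    case NO_EXPIRY:
      return msStatus;                       // TTL check skipped entirely
    case S3_WINS:
      return expired ? s3Status : msStatus;  // expired -> refresh from S3
    case NEWER_MOD_TIME:                     // newer mod_time wins
      return s3Status.getModificationTime() > msStatus.getModificationTime()
          ? s3Status : msStatus;
    default:
      throw new IllegalArgumentException("Unknown policy: " + policy);
    }
  }
}
{code}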


----

I think this expiry and authoritative directory listing could still coexist 
(a sketch of the listing TTL check follows this list):
* The dir listing can be authoritative if we get the full listing, but not 
otherwise. This is good: we don't have to query S3 for the dir listing, so the 
performance improvement is still there.
* The dir can lose its authoritativeness in certain circumstances (like in 
{{org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore#removeAuthoritativeFromParent}}).
* {{org.apache.hadoop.fs.s3a.s3guard.S3Guard#listChildrenWithTtl}} will only 
mean anything if a dir listing's auth expires *during* an operation (and not 
at the start of a request), which is very unlikely.
* But if we add policies like "{{TTL metadata expiry is turned off}}", then 
the expiry of auth dirs will still have a meaning, and we don't have to add 
another config parameter.
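
A minimal sketch of how {{listChildrenWithTtl}} can drop the auth bit on 
expiry (assuming an {{ITtlTimeProvider}}-style clock abstraction and an 
{{isExpired}} helper on the listing; the final signatures may differ):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

/** Sketch: drop the authoritative bit once the listing's TTL has expired. */
static DirListingMetadata listChildrenWithTtl(MetadataStore ms, Path path,
    ITtlTimeProvider timeProvider) throws IOException {
  DirListingMetadata dlm = ms.listChildren(path);
  if (dlm != null && dlm.isAuthoritative()
      && dlm.isExpired(timeProvider.getMetadataTtl(), timeProvider.getNow())) {
    dlm.setAuthoritative(false);  // expired: the caller must go back to S3
  }
  return dlm;
}
{code}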




> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-16279
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16279
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Gabor Bota
>            Assignee: Gabor Bota
>            Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}}, 
> which extends {{ExpirableMetadata}}, so all metadata entries in DDB can 
> expire, but the implementation is not done yet.
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 is for not using separate settings - 
> so only one config name and value.
> ----
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL are different. That TTL uses the 
> guava cache's internal solution for expiring entries. This one is an 
> S3AFileSystem-level solution in S3Guard, a layer above all metadata stores.
> * This is not the same as, and does not use, [DDB's TTL feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html]. 
> We need different behavior from what DDB promises: [cleaning once a day with a background job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html] 
> is not usable for this feature - although it can be used as a general 
> cleanup solution, separately and independently from S3Guard.
> * Use the same TTL for entries and authoritative directory listings.
> * All entries can expire. When one does, the metadata returned from the MS 
> will be null.
> * Add two new methods pruneExpiredTtl() and pruneExpiredTtl(String keyPrefix) 
> to the MetadataStore interface. These methods will delete all expired 
> metadata from the MS.
> * Use the last_updated field in the MS for both file metadata and 
> authoritative directory expiry.


