[ https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841112#comment-16841112 ]

Gabor Bota edited comment on HADOOP-16279 at 5/16/19 12:20 PM:
---------------------------------------------------------------

{quote}You might be able to resolve HADOOP-14468 then.{quote}
I just commented on the issue. I will resolve it soon if no one has any further 
comments on it.

{quote}
AF> why we need more prune() functions added to the MS interface
GB> That prune is for removing expired entries from the ddbms. It uses 
last_updated for expiry rather than mod_time.
AF>  It seems like an internal implementation detail that doesn't need to be 
exposed.
{quote}
True, this is an internal implementation question. I think we could even merge 
the two lists internally, so we would prune with {{last_updated}}.
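
A minimal sketch of that {{last_updated}}-based expiry (the {{Entry}} stand-in 
type and its fields are assumptions for illustration, not the real 
{{DDBPathMetadata}}/{{MetadataStore}} API):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

/**
 * Sketch only - not the real Hadoop classes. Illustrates pruning on
 * last_updated (metastore write time) rather than mod_time (S3 object
 * modification time).
 */
public class LastUpdatedPruneSketch {

  /** Stand-in for a metastore entry; the real entry carries much more state. */
  static class Entry {
    final String path;
    final long modTime;      // S3 object modification time
    final long lastUpdated;  // when the metastore record was last written

    Entry(String path, long modTime, long lastUpdated) {
      this.path = path;
      this.modTime = modTime;
      this.lastUpdated = lastUpdated;
    }
  }

  /** Returns the entries that survive the prune. */
  static List<Entry> pruneExpired(List<Entry> entries, long ttlMillis, long now) {
    final long cutoff = now - ttlMillis;
    List<Entry> kept = new ArrayList<>();
    for (Entry e : entries) {
      // Expire on metastore write time, not S3 mod_time: a freshly
      // refreshed entry for an old file must not be thrown away.
      if (e.lastUpdated >= cutoff) {
        kept.add(e);
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    long now = System.currentTimeMillis();
    long ttl = TimeUnit.MINUTES.toMillis(15);
    List<Entry> entries = List.of(
        new Entry("s3a://bucket/old-but-fresh", now - TimeUnit.DAYS.toMillis(30), now),
        new Entry("s3a://bucket/stale",
            now - TimeUnit.HOURS.toMillis(1), now - TimeUnit.HOURS.toMillis(1)));
    // Only the entry refreshed within the TTL window survives.
    System.out.println(pruneExpired(entries, ttl, now).size()); // prints 1
  }
}
{code}
The only point of the example is that the cutoff is computed from the metastore 
write time, not from the object's modification time.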

{quote}
AF> Can we claim that last_updated (metastore write time) >= mod_time?
{quote}
Sure we can. Whenever we access a file's metadata (e.g. do a HEAD or a GET) and 
the file already exists on S3, the {{last_updated}} field will be updated to 
the current time, but the {{mod_time}} will be whatever is stored for the file 
in S3. This is a very important detail, and it is the reason why we use a 
different field for the TTL in auth dirs in the first place: {{last_updated}} 
tells how fresh the metadata is.
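
To illustrate the check (a sketch with assumed names; the real accessors and 
TTL plumbing may differ):
{code:java}
/** Sketch with assumed names: freshness is judged by last_updated only. */
final class TtlCheck {
  static boolean isExpired(long lastUpdated, long ttlMillis, long now) {
    // last_updated >= mod_time always holds: last_updated is set to the
    // current wall-clock time whenever the entry is written or refreshed,
    // while mod_time is copied from the (older or equal) S3 object.
    return lastUpdated + ttlMillis < now;
  }
}
{code}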

{quote}
AF> smarter logic that allows you set a policy for handling S3 versus MS 
conflicts
GB> So basically what you mean is to add a conflict resolution algorithm when 
an entry is expired?
AF> Not so much when entry is expired, but when data from S3 conflicts with 
data from MS. For example, MS has tombstone but S3 says file exists.
{quote}
I would say this is out of scope for this issue. With this change we only want 
to solve the metadata expiry, not add policies for conflict resolution.



> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-16279
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16279
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Gabor Bota
>            Assignee: Gabor Bota
>            Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> ----
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL are different. That TTL uses the 
> guava cache's internal solution for expiring these entries. This one is an 
> S3AFileSystem-level solution in S3Guard, a layer above all metadata stores.
> * This is not the same as, and does not use, [DDB's TTL feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html]. 
> We need different behavior than what ddb promises: [cleaning once a day with a background job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html] 
> is not usable for this feature, although it can be used as a general cleanup 
> solution separately and independently from S3Guard.
> * Use the same ttl for entries and authoritative directory listing
> * All entries can be expired. Then the returned metadata from the MS will be 
> null.
> * Add two new methods pruneExpiredTtl() and pruneExpiredTtl(String keyPrefix) 
> to the MetadataStore interface. These methods will delete all expired 
> metadata from the ms (a rough interface sketch follows these notes).
> * Use last_updated field in ms for both file metadata and authoritative 
> directory expiry.
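
For illustration, a rough sketch of what those two additions to the 
{{MetadataStore}} interface could look like (signatures, {{throws IOException}}, 
and javadoc wording are assumptions based on the description above, not the 
final API):
{code:java}
import java.io.IOException;

/** Sketch of the proposed additions; existing MetadataStore methods omitted. */
public interface MetadataStore {

  /**
   * Delete every metadata entry (including tombstones) whose last_updated
   * field is older than the configured TTL.
   */
  void pruneExpiredTtl() throws IOException;

  /**
   * Same as {@link #pruneExpiredTtl()}, but restricted to entries whose
   * key starts with the given prefix.
   */
  void pruneExpiredTtl(String keyPrefix) throws IOException;
}
{code}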


