[ https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939462#comment-15939462 ]

Mingliang Liu edited comment on HADOOP-14226 at 3/24/17 5:45 PM:
-----------------------------------------------------------------

Tested: us-west-1.

I manually checked the test and the data was cleaned up successfully w/ this 
patch.


was (Author: liuml07):
I manually checked the test and the data was cleaned up successfully w/ this 
patch.

> S3Guard: ITestDynamoDBMetadataStoreScale is not cleaning up test data
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-14226
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14226
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: HADOOP-13345
>            Reporter: Mingliang Liu
>            Assignee: Mingliang Liu
>         Attachments: HADOOP-14226-HADOOP-13345.000.patch
>
>
> After running {{ITestDynamoDBMetadataStoreScale}}, the test data is not 
> cleaned up, even though there is a call to {{clearMetadataStore(ms, count);}} 
> in the finally clause. The reason is that the internally called method 
> {{DynamoDBMetadataStore::deleteSubtree()}} assumes there is an item for the 
> parent dest path:
> {code}
> parent=/fake-bucket, child=moved-here, is_dir=true
> {code}
> In the DynamoDBMetadataStore implementation, we assume that _if a path 
> exists, all its ancestors will also exist in the table_. We need to 
> pre-create the dest path to maintain this invariant so that the test data 
> can be cleaned up successfully (see the sketch below).
> I think there may be other tests with the same problem. Let's 
> identify/address them separately.
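
For reference, a minimal sketch of the pre-create idea (not the attached patch itself), assuming the {{MetadataStore#put(PathMetadata)}} and {{MetadataStore#deleteSubtree(Path)}} calls from the S3Guard branch; {{basicDirStatus()}} is a hypothetical helper that builds a directory {{FileStatus}} for the given path:

{code}
// Sketch only: pre-create the destination directory so that every ancestor of
// the test data has an item in the table, preserving the DynamoDBMetadataStore
// invariant that "if a path exists, all its ancestors also exist".
Path dest = new Path("/fake-bucket/moved-here");
// basicDirStatus() is a hypothetical helper returning a directory FileStatus
ms.put(new PathMetadata(basicDirStatus(dest)));
try {
  // ... run the scale workload that moves entries under dest ...
} finally {
  // deleteSubtree() can now find dest's item and remove the whole subtree
  clearMetadataStore(ms, count);
}
{code}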


