[ https://issues.apache.org/jira/browse/HDFS-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16048583#comment-16048583 ]

Manoj Govindassamy commented on HDFS-11912:
-------------------------------------------

[~ghuangups], Thanks for working on this. Looks good overall. A few comments 
and questions below.
1. The operation type, just like the weight, can be part of the enum 
constructor, so we can avoid checking whether the enum value's name starts 
with the type name in order to identify the operation and pick random 
operations. The current code:
{noformat}
 private enum Operations {
    FileSystem_CreateFile(2 /*operation weight*/),
    FileSystem_DeleteFile(2),
    FileSystem_RenameFile(2),
    ... ..
    private static int sumWeights(OperationType type) {
       ..
        if (value.name().startsWith(type.name())) {
{noformat}
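
A rough sketch of the alternative (the {{OperationType}} enum and the constant 
names below are illustrative, not taken from the patch):
{noformat}
  // Sketch only: carry the operation type explicitly in the enum constructor
  // instead of deriving it from the enum constant's name prefix.
  private enum OperationType { FILE_SYSTEM, SNAPSHOT }

  private enum Operations {
    FileSystem_CreateFile(OperationType.FILE_SYSTEM, 2 /* operation weight */),
    FileSystem_DeleteFile(OperationType.FILE_SYSTEM, 2),
    FileSystem_RenameFile(OperationType.FILE_SYSTEM, 2),
    Snapshot_CreateSnapshot(OperationType.SNAPSHOT, 5);

    private final OperationType type;
    private final int weight;

    Operations(OperationType type, int weight) {
      this.type = type;
      this.weight = weight;
    }

    private static int sumWeights(OperationType type) {
      int sum = 0;
      for (Operations op : values()) {
        if (op.type == type) {  // no name().startsWith(...) needed
          sum += op.weight;
        }
      }
      return sum;
    }
  }
{noformat}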

2. The random operation initializations at lines 153 and 157 are not needed.
3. In {{createFile}}, line 616, the loop limit says TOTAL_BLOCKS, but it is in 
fact treated as a max file count.
4. In {{createFile}}, line 607, some top-level directory might have been made 
snapshottable earlier. If you later enable snapshots on a nested directory 
under it, the call can fail with the exception below. Are you ignoring that 
exception?
{noformat}
      if (s.isAncestorDirectory(dir)) {
        throw new SnapshotException(
            "Nested snapshottable directories not allowed: path=" + path
            + ", the subdirectory " + s.getFullPathName()
            + " is already a snapshottable directory.");
      }
      if (dir.isAncestorDirectory(s)) {
        throw new SnapshotException(
            "Nested snapshottable directories not allowed: path=" + path
            + ", the ancestor " + s.getFullPathName()
            + " is already a snapshottable directory.");
      }
{noformat}
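If the intent is to tolerate that case in the randomized test, a minimal 
sketch (assuming the test uses {{DistributedFileSystem#allowSnapshot}}; the 
helper name and the message matching are illustrative):
{noformat}
  // Sketch only: try to make a random directory snapshottable and skip the
  // operation when it would nest snapshottable directories.
  private boolean tryAllowSnapshot(DistributedFileSystem dfs, Path dir)
      throws IOException {
    try {
      dfs.allowSnapshot(dir);
      return true;
    } catch (IOException e) {
      if (e.getMessage() != null
          && e.getMessage().contains("Nested snapshottable directories")) {
        // An ancestor or descendant is already snapshottable; skip this op.
        return false;
      }
      throw e;
    }
  }
{noformat}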
5. Line 280, we can assert here; otherwise there is an issue with creating the 
random operation.
6. Line 304, same here. A possible shape for the assert is sketched below.
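For example, something along these lines in the dispatch of the generated 
operation (the switch shape and method names are illustrative, not from the 
patch):
{noformat}
    switch (op) {
      case FileSystem_CreateFile:
        createFile();
        break;
      case Snapshot_CreateSnapshot:
        createSnapshot();
        break;
      default:
        // The generator should never hand us an operation we cannot dispatch;
        // fail the test loudly instead of silently skipping the iteration.
        Assert.fail("Unexpected random operation: " + op);
    }
{noformat}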
7. Line 318, we can add a unique number suffix to the created directory name 
so we can tell where a failure happened.
8. Line 366, the rename dir suffix can also take a unique number for the same 
reason; see the sketch below.
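One way to get the unique suffix for both cases, sketched with an illustrative 
helper (names are not from the patch):
{noformat}
  // Sketch only: a monotonically increasing id so every created/renamed
  // directory name is unique, which makes it easy to tell from an assertion
  // message which iteration of the random run failed.
  private static final AtomicInteger DIR_ID = new AtomicInteger(0);

  private Path uniqueDir(Path parent, String prefix) {
    return new Path(parent, prefix + "_" + DIR_ID.incrementAndGet());
  }
{noformat}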
9. Lines 551, 552: Isn't the FileStatus[] returned by listStatus already 
sorted? Can you please check?
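If it turns out the ordering is not guaranteed, a quick check in the test 
could look like this (illustrative; {{FileStatus}} implements {{Comparable}} 
and compares by path):
{noformat}
  // Illustrative check: compare the listing against an explicitly sorted copy.
  FileStatus[] statuses = dfs.listStatus(dir);
  FileStatus[] sorted = statuses.clone();
  Arrays.sort(sorted);  // FileStatus compares by path
  Assert.assertArrayEquals(sorted, statuses);
{noformat}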



> Add a snapshot unit test with randomized file IO operations
> -----------------------------------------------------------
>
>                 Key: HDFS-11912
>                 URL: https://issues.apache.org/jira/browse/HDFS-11912
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: hdfs
>            Reporter: George Huang
>            Assignee: George Huang
>            Priority: Minor
>         Attachments: HDFS-11912.001.patch, HDFS-11912.002.patch
>
>
> Adding a snapshot unit test with randomized file IO operations.


