[jira] [Updated] (HDFS-16059) dfsadmin -listOpenFiles -blockingDecommission can miss some files

2021-11-09 Thread Ayush Saxena (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated HDFS-16059:

Attachment: HDFS-16059-WIP-01-1.patch

> dfsadmin -listOpenFiles -blockingDecommission can miss some files
> ------------------------------------------------------------------
>
> Key: HDFS-16059
> URL: https://issues.apache.org/jira/browse/HDFS-16059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsadmin
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-16059-WIP-01-1.patch, 
> HDFS-16059-regression-test.patch
>
>
> While reviewing HDFS-13671, I found "dfsadmin -listOpenFiles 
> -blockingDecommission" can drop some files.
> [https://github.com/apache/hadoop/pull/3065#discussion_r647396463]
> {quote}If the DataNodes have the following open files and we want to list all 
> the open files:
> DN1: [1001, 1002, 1003, ... , 2000]
> DN2: [1, 2, 3, ... , 1000]
> At first, getFilesBlockingDecom(0, "/") is called and it returns [1001, 1002, 
> ... , 2000] because it reached the max size (=1000). Next, 
> getFilesBlockingDecom(2000, "/") is called because the last inode ID of the 
> previous result is 2000. That way, the open files of DN2 are missed.
> {quote}
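
A minimal, illustrative Java sketch of the paging behaviour described in the quote above. This is not the real NameNode code: the batch size of 1000, the two DataNode lists, and the simplified getFilesBlockingDecom signature are assumptions taken from the example. It shows why paging by the last returned inode ID drops DN2's files once DN1's larger IDs fill the first batch.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

/**
 * Illustrative sketch only (not the actual HDFS implementation): simulates
 * prevId-based paging over per-DataNode open-file lists as described in the
 * issue. All names and sizes here are stand-ins taken from the example.
 */
public class OpenFilePagingSketch {

  private static final int MAX_BATCH = 1000;

  // DN1 holds open files with inode IDs 1001..2000, DN2 holds 1..1000.
  private static final List<List<Long>> OPEN_FILES_PER_DN = Arrays.asList(
      LongStream.rangeClosed(1001, 2000).boxed().collect(Collectors.toList()),
      LongStream.rangeClosed(1, 1000).boxed().collect(Collectors.toList()));

  /** Collects up to MAX_BATCH inode IDs greater than prevId, scanning DN by DN. */
  static List<Long> getFilesBlockingDecom(long prevId) {
    List<Long> batch = new ArrayList<>();
    for (List<Long> dnOpenFiles : OPEN_FILES_PER_DN) {
      for (long inodeId : dnOpenFiles) {
        if (inodeId > prevId) {
          batch.add(inodeId);
          if (batch.size() >= MAX_BATCH) {
            // DN1 alone fills the first batch, so DN2 is never reached here.
            return batch;
          }
        }
      }
    }
    return batch;
  }

  public static void main(String[] args) {
    long prevId = 0;
    List<Long> listed = new ArrayList<>();
    while (true) {
      List<Long> batch = getFilesBlockingDecom(prevId);
      if (batch.isEmpty()) {
        break;
      }
      listed.addAll(batch);
      // Paging by the last inode ID of the previous batch (2000) means DN2's
      // IDs 1..1000 can never satisfy "inodeId > prevId" on the next call.
      prevId = batch.get(batch.size() - 1);
    }
    System.out.println("Listed " + listed.size() + " open files (expected 2000)");
  }
}
{code}

Running it prints "Listed 1000 open files (expected 2000)": the 1000 open files on DN2 are silently dropped, which is presumably the behaviour the attached regression-test patch is meant to reproduce.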






[jira] [Updated] (HDFS-16059) dfsadmin -listOpenFiles -blockingDecommission can miss some files

2021-11-09 Thread Ayush Saxena (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated HDFS-16059:

Attachment: (was: HDFS-16059-WIP-01.patch)

> dfsadmin -listOpenFiles -blockingDecommission can miss some files
> ------------------------------------------------------------------
>
> Key: HDFS-16059
> URL: https://issues.apache.org/jira/browse/HDFS-16059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsadmin
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-16059-WIP-01-1.patch, 
> HDFS-16059-regression-test.patch
>
>
> While reviewing HDFS-13671, I found "dfsadmin -listOpenFiles 
> -blockingDecommission" can drop some files.
> [https://github.com/apache/hadoop/pull/3065#discussion_r647396463]
> {quote}If the DataNodes have the following open files and we want to list all 
> the open files:
> DN1: [1001, 1002, 1003, ... , 2000]
> DN2: [1, 2, 3, ... , 1000]
> At first, getFilesBlockingDecom(0, "/") is called and it returns [1001, 1002, 
> ... , 2000] because it reached the max size (=1000). Next, 
> getFilesBlockingDecom(2000, "/") is called because the last inode ID of the 
> previous result is 2000. That way, the open files of DN2 are missed.
> {quote}






[jira] [Updated] (HDFS-16059) dfsadmin -listOpenFiles -blockingDecommission can miss some files

2021-10-31 Thread Ayush Saxena (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated HDFS-16059:

Attachment: HDFS-16059-WIP-01.patch

> dfsadmin -listOpenFiles -blockingDecommission can miss some files
> ------------------------------------------------------------------
>
> Key: HDFS-16059
> URL: https://issues.apache.org/jira/browse/HDFS-16059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsadmin
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-16059-WIP-01.patch, HDFS-16059-regression-test.patch
>
>
> While reviewing HDFS-13671, I found "dfsadmin -listOpenFiles 
> -blockingDecommission" can drop some files.
> [https://github.com/apache/hadoop/pull/3065#discussion_r647396463]
> {quote}If the DataNodes have the following open files and we want to list all 
> the open files:
> DN1: [1001, 1002, 1003, ... , 2000]
> DN2: [1, 2, 3, ... , 1000]
> At first, getFilesBlockingDecom(0, "/") is called and it returns [1001, 1002, 
> ... , 2000] because it reached the max size (=1000). Next, 
> getFilesBlockingDecom(2000, "/") is called because the last inode ID of the 
> previous result is 2000. That way, the open files of DN2 are missed.
> {quote}






[jira] [Updated] (HDFS-16059) dfsadmin -listOpenFiles -blockingDecommission can miss some files

2021-06-09 Thread Akira Ajisaka (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-16059:
Attachment: HDFS-16059-regression-test.patch

> dfsadmin -listOpenFiles -blockingDecommission can miss some files
> ------------------------------------------------------------------
>
> Key: HDFS-16059
> URL: https://issues.apache.org/jira/browse/HDFS-16059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsadmin
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-16059-regression-test.patch
>
>
> While reviewing HDFS-13671, I found "dfsadmin -listOpenFiles 
> -blockingDecommission" can drop some files.
> [https://github.com/apache/hadoop/pull/3065#discussion_r647396463]
> {quote}If the DataNodes have the following open files and we want to list all 
> the open files:
> DN1: [1001, 1002, 1003, ... , 2000]
> DN2: [1, 2, 3, ... , 1000]
> At first, getFilesBlockingDecom(0, "/") is called and it returns [1001, 1002, 
> ... , 2000] because it reached the max size (=1000). Next, 
> getFilesBlockingDecom(2000, "/") is called because the last inode ID of the 
> previous result is 2000. That way, the open files of DN2 are missed.
> {quote}


