[jira] [Commented] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16017668#comment-16017668 ]

Xiaobing Zhou commented on HDFS-11803:
--------------------------------------

v002 fixed the test failure and addressed your review, thanks [~vagarychen].

> Add -v option for du command to show header line
> ------------------------------------------------
>
>                 Key: HDFS-11803
>                 URL: https://issues.apache.org/jira/browse/HDFS-11803
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Xiaobing Zhou
>            Assignee: Xiaobing Zhou
>         Attachments: HDFS-11803.000.patch, HDFS-11803.001.patch, HDFS-11803.002.patch
>
> Like the hdfs dfs -count command, it would be better for the du command to also accept -v to show a header line.
> Without -v:
> $ hdfs dfs -du -h -s /tmp/parent
> {noformat}
> 1 G  1 G  /tmp/parent
> {noformat}
> With -v:
> $ hdfs dfs -du -h -s -v /tmp/parent
> {noformat}
> SIZE  DISK_SPACE_CONSUMED_WITH_ALL_REPLICAS  FULL_PATH_NAME
> 1 G   1 G                                     /tmp/parent
> {noformat}
> For comparison, count already supports -v:
> $ hdfs dfs -count -q -v -h -x /tmp/parent
> {noformat}
>        QUOTA  REM_QUOTA  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
>           10          7         50 G             49 G          2           1           1 G  /tmp/parent
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
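The attached patches themselves are not quoted in this thread. As a rough, hypothetical sketch of the idea (class and method names below are illustrative and not taken from the HDFS-11803 patch), the -v flag would simply gate printing a fixed-width header line whose column widths match the result rows:

```java
// Hypothetical sketch of the header logic proposed for "du -v".
// Names and column widths are illustrative, not from the actual patch.
public class DuHeaderSketch {
    // One shared format string so the header and the rows stay aligned.
    private static final String FORMAT = "%-11s %-38s %s%n";

    // Header line, printed only when -v is given.
    static String header() {
        return String.format(FORMAT,
            "SIZE", "DISK_SPACE_CONSUMED_WITH_ALL_REPLICAS", "FULL_PATH_NAME");
    }

    // One result row, using the same column widths as the header.
    static String row(String size, String diskSpace, String path) {
        return String.format(FORMAT, size, diskSpace, path);
    }

    public static void main(String[] args) {
        boolean showHeader = true; // would come from parsing the -v option
        if (showHeader) {
            System.out.print(header());
        }
        System.out.print(row("1 G", "1 G", "/tmp/parent"));
    }
}
```

This mirrors how -count -v already behaves: the header is opt-in, so existing scripts that parse du output are unaffected unless they pass -v.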
[jira] [Updated] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11803:
---------------------------------
    Attachment: HDFS-11803.002.patch
[jira] [Commented] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011525#comment-16011525 ]

Xiaobing Zhou commented on HDFS-11803:
--------------------------------------

v1 fixes the documentation.
[jira] [Updated] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11803:
---------------------------------
    Attachment: HDFS-11803.001.patch
[jira] [Commented] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16011465#comment-16011465 ]

Xiaobing Zhou commented on HDFS-11803:
--------------------------------------

Posted a patch for review.
[jira] [Updated] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11803:
---------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11803:
---------------------------------
    Attachment: HDFS-11803.000.patch
[jira] [Updated] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11803:
---------------------------------
    Component/s: hdfs
[jira] [Updated] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11803:
---------------------------------
    Description: (edited)
[jira] [Updated] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11803:
---------------------------------
    Description: (edited)
[jira] [Updated] (HDFS-11803) Add -v option for du command to show header line
[ https://issues.apache.org/jira/browse/HDFS-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11803:
---------------------------------
    Description: (edited)
[jira] [Created] (HDFS-11803) Add -v option for du command to show header line
Xiaobing Zhou created HDFS-11803:
------------------------------------

             Summary: Add -v option for du command to show header line
                 Key: HDFS-11803
                 URL: https://issues.apache.org/jira/browse/HDFS-11803
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Xiaobing Zhou
            Assignee: Xiaobing Zhou
[jira] [Commented] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots
[ https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005390#comment-16005390 ]

Xiaobing Zhou commented on HDFS-8986:
-------------------------------------

[~xiaochen] could you explain why the -x option is ignored if the -u or -q option is given? Thanks.

> Add option to -du to calculate directory space usage excluding snapshots
> ------------------------------------------------------------------------
>
>                 Key: HDFS-8986
>                 URL: https://issues.apache.org/jira/browse/HDFS-8986
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: snapshots
>            Reporter: Gautam Gopalakrishnan
>            Assignee: Xiao Chen
>              Labels: supportability
>             Fix For: 2.8.0, 3.0.0-alpha1
>         Attachments: HDFS-8986.01.patch, HDFS-8986.02.patch, HDFS-8986.03.patch, HDFS-8986.04.patch, HDFS-8986.05.patch, HDFS-8986.06.patch, HDFS-8986.07.patch, HDFS-8986.branch-2.patch
>
> When running {{hadoop fs -du}} on a snapshotted directory (or one of its children), the report includes space consumed by blocks that are only present in the snapshots. This is confusing for end users.
> {noformat}
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -createSnapshot /tmp/parent snap1
> Created snapshot /tmp/parent/.snapshot/snap1
> $ hadoop fs -rm -skipTrash /tmp/parent/sub1/*
> ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -deleteSnapshot /tmp/parent snap1
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 0  0  /tmp/parent
> 0  0  /tmp/parent/sub1
> {noformat}
> It would be helpful if we had a flag, say -X, to exclude any snapshot related disk usage in the output.
[jira] [Updated] (HDFS-11800) Document output of 'hdfs count -u' should contain PATHNAME
[ https://issues.apache.org/jira/browse/HDFS-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11800:
---------------------------------
    Priority: Minor  (was: Major)

> Document output of 'hdfs count -u' should contain PATHNAME
> ----------------------------------------------------------
>
>                 Key: HDFS-11800
>                 URL: https://issues.apache.org/jira/browse/HDFS-11800
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Xiaobing Zhou
>            Assignee: Xiaobing Zhou
>            Priority: Minor
>              Labels: docuentation
>         Attachments: HDFS-11800.000.patch
>
> The [hdfs count|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#count] documentation currently says:
> The output columns with -count -u are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA
> It should instead be documented as:
> The output columns with -count -u are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, PATHNAME
[jira] [Updated] (HDFS-11800) Document output of 'hdfs count -u' should contain PATHNAME
[ https://issues.apache.org/jira/browse/HDFS-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11800:
---------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HDFS-11800) Document output of 'hdfs count -u' should contain PATHNAME
[ https://issues.apache.org/jira/browse/HDFS-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11800:
---------------------------------
    Attachment: HDFS-11800.000.patch

Posted a patch with the fix.
[jira] [Updated] (HDFS-11800) Document output of 'hdfs count -u' should contain PATHNAME
[ https://issues.apache.org/jira/browse/HDFS-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11800:
---------------------------------
    Labels: docuentation  (was: )
[jira] [Created] (HDFS-11800) Document output of 'hdfs count -u' should contain PATHNAME
Xiaobing Zhou created HDFS-11800:
------------------------------------

             Summary: Document output of 'hdfs count -u' should contain PATHNAME
                 Key: HDFS-11800
                 URL: https://issues.apache.org/jira/browse/HDFS-11800
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs
            Reporter: Xiaobing Zhou
            Assignee: Xiaobing Zhou
[jira] [Updated] (HDFS-11593) Change SimpleHttpProxyHandler#exceptionCaught log level from info to debug
[ https://issues.apache.org/jira/browse/HDFS-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-11593:
---------------------------------
    Status: Patch Available  (was: Open)

> Change SimpleHttpProxyHandler#exceptionCaught log level from info to debug
> --------------------------------------------------------------------------
>
>                 Key: HDFS-11593
>                 URL: https://issues.apache.org/jira/browse/HDFS-11593
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaobing Zhou
>            Priority: Minor
>              Labels: newbie
>         Attachments: HDFS-11593.000.patch
>
> A busy datanode may have many client disconnect exceptions logged with stacks like the one below, which do not provide much useful information. Propose to reduce the log level from info to debug.
> {code}
> 2017-03-29 20:28:55,225 INFO web.DatanodeHttpServer (SimpleHttpProxyHandler.java:exceptionCaught(147)) - Proxy for / failed. cause:
> java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:192)
>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
>     at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:446)
>     at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
>     at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:225)
>     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>     at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
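The patch itself is only attached, not quoted here. As a generic sketch of the kind of change proposed, demoting a noisy per-connection log from info to debug (using java.util.logging for a self-contained example, where FINE plays the role of debug; Hadoop's own code uses a different logging API):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Generic sketch: demote a noisy per-connection log from INFO to debug.
// java.util.logging's FINE level stands in for "debug" in this example.
public class LogLevelSketch {
    private static final Logger LOG =
        Logger.getLogger(LogLevelSketch.class.getName());

    static void onClientDisconnect(Exception cause) {
        // Before: LOG.log(Level.INFO, "Proxy failed", cause);
        //   -> floods logs on a busy datanode with routine disconnects.
        // After: only emitted when debug-level logging is enabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.log(Level.FINE, "Proxy failed", cause);
        }
    }

    public static void main(String[] args) {
        // With the default INFO level, the message is suppressed entirely.
        onClientDisconnect(new java.io.IOException("Connection reset by peer"));
        System.out.println("debug enabled: " + LOG.isLoggable(Level.FINE));
    }
}
```

The level guard also avoids building the log message (and capturing the stack trace formatting cost) when debug logging is off.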
[jira] [Commented] (HDFS-11593) Change SimpleHttpProxyHandler#exceptionCaught log level from info to debug
[ https://issues.apache.org/jira/browse/HDFS-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15991573#comment-15991573 ]

Xiaobing Zhou commented on HDFS-11593:
--------------------------------------

Posted initial patch for review.
[jira] [Updated] (HDFS-11593) Change SimpleHttpProxyHandler#exceptionCaught log level from info to debug
[ https://issues.apache.org/jira/browse/HDFS-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11593: - Attachment: HDFS-11593.000.patch > Change SimpleHttpProxyHandler#exceptionCaught log level from info to debug > -- > > Key: HDFS-11593 > URL: https://issues.apache.org/jira/browse/HDFS-11593 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Xiaoyu Yao >Assignee: Xiaobing Zhou >Priority: Minor > Labels: newbie > Attachments: HDFS-11593.000.patch > > > A busy datanode may have many client disconnect exception logged with stack > like below, which does not provide much useful information. Propose to reduce > the log level from info to debug. > {code} > 2017-03-29 20:28:55,225 INFO web.DatanodeHttpServer > (SimpleHttpProxyHandler.java:exceptionCaught(147)) - Proxy for / failed. > cause: > java.io.IOException: Connection reset by peer > at sun.nio.ch.FileDispatcherImpl.read0(Native Method) > at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) > at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) > at sun.nio.ch.IOUtil.read(IOUtil.java:192) > at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) > at > io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:446) > at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881) > at > io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:225) > at > io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119) > at > io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) > at > io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) > at > 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) > at > io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
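The change proposed in HDFS-11593, logging expected client disconnects at debug level instead of info, can be sketched as follows using java.util.logging (FINE plays the role of debug). The class and method names here are hypothetical illustrations, not the actual SimpleHttpProxyHandler patch.

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ProxyExceptionLogging {
    private static final Logger LOG =
        Logger.getLogger(ProxyExceptionLogging.class.getName());

    // Expected client disconnects are routine on a busy datanode, so they
    // are mapped to FINE (debug) rather than INFO.
    static Level levelFor(Throwable cause) {
        return Level.FINE;
    }

    static void logProxyFailure(String uri, Throwable cause) {
        Level level = levelFor(cause);
        // Guarding with isLoggable avoids formatting the message and the
        // stack trace at all when debug logging is disabled.
        if (LOG.isLoggable(level)) {
            LOG.log(level, "Proxy for " + uri + " failed. cause: ", cause);
        }
    }

    public static void main(String[] args) {
        // With the default logger level (INFO), this FINE record is dropped,
        // which is exactly the quieting effect the issue asks for.
        logProxyFailure("/", new IOException("Connection reset by peer"));
        System.out.println(levelFor(new IOException()).getName()); // prints FINE
    }
}
```

The isLoggable guard is the standard pattern for cheap debug logging; the stack trace is still available when an operator turns the level up.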
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558-branch-2.8.006.patch Posted branch-2.8 patch. > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch, > HDFS-11558-branch-2.8.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
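One way to shorten the thread name described in HDFS-11558 is to keep only the final path component of each storage directory instead of the full [DISK]file:/... URI. The sketch below is illustrative only; the method names are hypothetical and the actual patch may format the name differently.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ShortThreadName {
    // Keep only the final path component, e.g.
    // "[DISK]file:/long/test/path/dn1_data0" -> "dn1_data0".
    static String lastComponent(String storageDir) {
        int slash = storageDir.lastIndexOf('/');
        return slash >= 0 ? storageDir.substring(slash + 1) : storageDir;
    }

    // Hypothetical short form of the BPServiceActor thread name: the
    // heartbeat target is kept, the storage dirs are abbreviated.
    static String threadName(List<String> storageDirs, String nnAddr) {
        String dirs = storageDirs.stream()
            .map(ShortThreadName::lastComponent)
            .collect(Collectors.joining(", "));
        return "DataNode: [" + dirs + "] heartbeating to " + nnAddr;
    }

    public static void main(String[] args) {
        System.out.println(threadName(
            Arrays.asList("[DISK]file:/long/test/path/dn1_data0",
                          "[DISK]file:/long/test/path/dn1_data1"),
            "localhost/127.0.0.1:51772"));
        // prints: DataNode: [dn1_data0, dn1_data1] heartbeating to localhost/127.0.0.1:51772
    }
}
```

The last component is usually unique within one datanode's configuration, so the abbreviated name still identifies the volumes while keeping log lines readable.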
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965089#comment-15965089 ] Xiaobing Zhou commented on HDFS-11558: -- Posted branch-2 patch. Thanks all. > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558-branch-2.006.patch > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11608) HDFS write crashed with block size greater than 2 GB
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15964731#comment-15964731 ] Xiaobing Zhou commented on HDFS-11608: -- Posted the 2.7 patch. Thanks [~xyao] for committing it and all for the reviews. > HDFS write crashed with block size greater than 2 GB > > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, > HDFS-11608.002.patch, HDFS-11608.003.patch, HDFS-11608-branch-2.7.003.patch > > > We've seen HDFS write crashes in the case of huge block sizes. For example, > when writing a 3 GB file using a block size > 2 GB (e.g., 3 GB), the HDFS client throws > an out-of-memory exception and the DataNode throws an IOException. After raising the heap > size limit, a DFSOutputStream ResponseProcessor exception is seen, followed by > a broken pipe and pipeline recovery. 
> Given below is the DN exception: > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
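The "Incorrect value for packet payload size" error in the HDFS-11608 report is characteristic of 32-bit overflow once block sizes exceed 2 GB. As a hedged illustration (the names below are hypothetical, not the actual DFSOutputStream code), this sketch computes the next packet payload in long arithmetic and clamps it to the configured write-packet size, and shows how a naive int cast of the remaining block bytes silently truncates.

```java
public class PacketSizing {
    // Typical default of dfs.client-write-packet-size (64 KB).
    static final int WRITE_PACKET_SIZE = 64 * 1024;

    // Packet payload size computed safely: the subtraction stays in long,
    // and the result is clamped to the configured packet size before the
    // narrowing cast, so the int can never hold a corrupted value.
    static int nextPacketSize(long blockSize, long bytesCurBlock) {
        long remaining = blockSize - bytesCurBlock;
        return (int) Math.min((long) WRITE_PACKET_SIZE, remaining);
    }

    public static void main(String[] args) {
        long threeGB = 3L * 1024 * 1024 * 1024;
        System.out.println(nextPacketSize(threeGB, 0L));          // 65536
        System.out.println(nextPacketSize(threeGB, threeGB - 8)); // 8
        // A naive int cast of the full remainder truncates; corrupted
        // sizes of this kind are what the DN rejects in the log above.
        System.out.println((int) threeGB); // negative after truncation
    }
}
```

Clamping before the cast, rather than after, is the key point: once the value has been through a narrowing int conversion, the damage is unrecoverable.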
[jira] [Updated] (HDFS-11608) HDFS write crashed with block size greater than 2 GB
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Attachment: HDFS-11608-branch-2.7.003.patch > HDFS write crashed with block size greater than 2 GB > > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Fix For: 2.9.0, 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, > HDFS-11608.002.patch, HDFS-11608.003.patch, HDFS-11608-branch-2.7.003.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3 GB file using block size > 2 GB (e.g., 3 GB), HDFS client throws > out of memory exception. DataNode gives out IOException. After changing heap > size limit, DFSOutputStream ResponseProcessor exception is seen followed by > Broken pipe and pipeline recovery. 
> Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558.006.patch > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959699#comment-15959699 ] Xiaobing Zhou commented on HDFS-11558: -- Cleared, thanks. Posted v6. > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15959663#comment-15959663 ] Xiaobing Zhou commented on HDFS-11608: -- Posted v3 with fix setting base dir for newly created cluster to avoid conflicts of shared root dir. This resolved the failure. Thanks [~vagarychen] for the check. {code} dfsConf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath()); {code} > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, > HDFS-11608.002.patch, HDFS-11608.003.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. 
> Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Attachment: HDFS-11608.003.patch > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, > HDFS-11608.002.patch, HDFS-11608.003.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11362) Storage#shouldReturnNextDir should check for null dirType
[ https://issues.apache.org/jira/browse/HDFS-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958060#comment-15958060 ] Xiaobing Zhou commented on HDFS-11362: -- The patch looks good, thanks [~hkoneru]. > Storage#shouldReturnNextDir should check for null dirType > - > > Key: HDFS-11362 > URL: https://issues.apache.org/jira/browse/HDFS-11362 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HDFS-11362.000.patch > > > _Storage#shouldReturnNextDir_ method checks if the next Storage directory is > of the same type us dirType. > {noformat} > private boolean shouldReturnNextDir() { > StorageDirectory sd = getStorageDir(nextIndex); > return (dirType == null || sd.getStorageDirType().isOfType(dirType)) && > (includeShared || !sd.isShared()); > } > {noformat} > There is a possibility that sd.getStorageDirType() returns null (default > dirType is null). Hence, before checking for type match, we should make sure > that the value returned by sd.getStorageDirType() is not null. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
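The null-safe predicate described in HDFS-11362 can be sketched as follows, with the storage types stubbed out for illustration (the real Storage and StorageDirectory classes carry much more state, so treat this as a minimal model of the check only).

```java
public class NullSafeDirCheck {
    // Stub standing in for Hadoop's Storage.StorageDirType.
    interface StorageDirType {
        boolean isOfType(StorageDirType t);
    }

    // Stub standing in for Storage.StorageDirectory.
    static class StorageDirectory {
        private final StorageDirType type;
        private final boolean shared;

        StorageDirectory(StorageDirType type, boolean shared) {
            this.type = type;
            this.shared = shared;
        }

        StorageDirType getStorageDirType() { return type; }
        boolean isShared() { return shared; }
    }

    // Null-safe version of the shouldReturnNextDir predicate: isOfType is
    // consulted only when both the filter and the directory's own type
    // are non-null (the default dirType is null).
    static boolean matches(StorageDirType dirType, StorageDirectory sd,
                           boolean includeShared) {
        return (dirType == null
                || (sd.getStorageDirType() != null
                    && sd.getStorageDirType().isOfType(dirType)))
            && (includeShared || !sd.isShared());
    }

    public static void main(String[] args) {
        StorageDirType any = t -> true;
        StorageDirectory untyped = new StorageDirectory(null, false);
        System.out.println(matches(any, untyped, true));  // false, no NPE
        System.out.println(matches(null, untyped, true)); // true
    }
}
```

A directory with a null type simply fails a typed filter instead of throwing a NullPointerException, while a null filter still matches everything.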
[jira] [Commented] (HDFS-11302) Improve Logging for SSLHostnameVerifier
[ https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958026#comment-15958026 ] Xiaobing Zhou commented on HDFS-11302: -- Thanks for the patch [~vagarychen]. LGTM, +1 non-binding. > Improve Logging for SSLHostnameVerifier > --- > > Key: HDFS-11302 > URL: https://issues.apache.org/jira/browse/HDFS-11302 > Project: Hadoop HDFS > Issue Type: Improvement > Components: security >Reporter: Xiaoyu Yao >Assignee: Chen Liang > Attachments: HDFS-11302.001.patch > > > SSLHostnameVerifier interface/class was copied from other projects without > any logging to help troubleshooting SSL certificate related issues. For a > misconfigured SSL truststore, we may get some very confusing error message > like > {code} > >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt > ... > cause:java.io.IOException: DN2:50475: HTTPS hostname wrong: should be > cat: DN2:50475: HTTPS hostname wrong: should be > {code} > This ticket is opened to add tracing to give more useful context information > around SSL certificate verification failures inside the following code. > {code}AbstractVerifier#check(String[] host, X509Certificate cert) {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
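The kind of context logging proposed in HDFS-11302 can be illustrated with a stubbed verifier: on failure it reports both the requested host and the names found in the certificate, rather than the bare "HTTPS hostname wrong" message. The signature below is hypothetical and does not match the real AbstractVerifier#check.

```java
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLException;

public class VerboseHostnameCheck {
    // Builds a failure message that names both sides of the mismatch,
    // which is enough to debug a misconfigured truststore or certificate.
    static String describeFailure(String host, List<String> certNames) {
        return "Hostname verification failed: requested host '" + host
            + "' is not among certificate names " + certNames;
    }

    // Hypothetical check: throws with the detailed message instead of the
    // terse "HTTPS hostname wrong" seen in the report above.
    static void check(String host, List<String> certNames) throws SSLException {
        if (!certNames.contains(host)) {
            throw new SSLException(describeFailure(host, certNames));
        }
    }

    public static void main(String[] args) {
        try {
            check("DN2", Arrays.asList("nn1.example.com", "dn1.example.com"));
        } catch (SSLException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Carrying the certificate's actual names in the exception costs nothing at the failure site and saves a round of manual certificate inspection later.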
[jira] [Commented] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957824#comment-15957824 ] Xiaobing Zhou commented on HDFS-11628: -- v1 fixed that. Thanks [~liuml07] and [~arpitagarwal]. > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify the behavior of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11628: - Attachment: HDFS-11628.001.patch > Clarify the behavior of HDFS Mover in documentation > --- > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch, HDFS-11628.001.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11628: - Status: Patch Available (was: Open) > Clarify behaviors of HDFS Mover in documentation > > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
[ https://issues.apache.org/jira/browse/HDFS-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11628: - Attachment: HDFS-11628.000.patch Posted a patch. Will have a dry run to visualize the changes for sanity check. > Clarify behaviors of HDFS Mover in documentation > > > Key: HDFS-11628 > URL: https://issues.apache.org/jira/browse/HDFS-11628 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: docuentation > Attachments: HDFS-11628.000.patch > > > It's helpful to state that Mover always tries to move block replicas within > the same node whenever possible. If that is not possible (e.g. when a node > doesn’t have the target storage type) then it will copy the block replica to > another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11628) Clarify behaviors of HDFS Mover in documentation
Xiaobing Zhou created HDFS-11628: Summary: Clarify behaviors of HDFS Mover in documentation Key: HDFS-11628 URL: https://issues.apache.org/jira/browse/HDFS-11628 Project: Hadoop HDFS Issue Type: Improvement Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou It's helpful to state that Mover always tries to move block replicas within the same node whenever possible. If that is not possible (e.g. when a node doesn’t have the target storage type) then it will copy the block replica to another node over the network. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956089#comment-15956089 ] Xiaobing Zhou commented on HDFS-11608: -- Posted v2. Thanks. Apologize for the formatting changes in a rush patch. Removed the check to constructor. > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, > HDFS-11608.002.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. 
> Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Attachment: HDFS-11608.002.patch > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch, > HDFS-11608.002.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > 
at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955693#comment-15955693 ] Xiaobing Zhou edited comment on HDFS-11608 at 4/4/17 7:44 PM: -- Posted v1 to add some tests. Thanks [~xyao] was (Author: xiaobingo): Posted v1 to add some tests. > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
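The IOException in the trace above is the DataNode rejecting a corrupted payload length read from the packet header. A minimal sketch of that kind of guard, for readers following along; the constant and cap here are illustrative, not the exact PacketReceiver code:

```java
import java.io.IOException;

public class PayloadSizeCheckDemo {
    // Illustrative cap; the real PacketReceiver bounds packets to a few MB.
    static final int MAX_PACKET_SIZE = 16 * 1024 * 1024;

    /** Mimics the guard that produced the IOException in the trace above:
     *  reject payload lengths that are negative or beyond the cap. */
    static void checkPayloadSize(int payloadLen) throws IOException {
        if (payloadLen < 0 || payloadLen > MAX_PACKET_SIZE) {
            throw new IOException(
                "Incorrect value for packet payload size: " + payloadLen);
        }
    }

    public static void main(String[] args) throws IOException {
        checkPayloadSize(64 * 1024);       // a sane 64K payload passes
        try {
            checkPayloadSize(2147483128);  // the value from the DN log above
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```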
[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Attachment: HDFS-11608.001.patch Posted v1 to add some tests. > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch, HDFS-11608.001.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953963#comment-15953963 ] Xiaobing Zhou edited comment on HDFS-11558 at 4/3/17 6:32 PM: -- v5 removed the constructor BPOfferService. Thanks [~szetszwo]. nameserviceId can be null for unit tests when cluster is booting when ns is not configured, in other words, nsToAdd will be null in BlockPoolManager#doRefreshNamenodes. The stack trace where getNameserviceId is called. {noformat} at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getNameserviceId(BPOfferService.java:183) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.formatThreadName(BPServiceActor.java:557) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.start(BPServiceActor.java:544) at org.apache.hadoop.hdfs.server.datanode.BPOfferService.start(BPOfferService.java:301) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager$1.run(BlockPoolManager.java:129) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.startAll(BlockPoolManager.java:124) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.doRefreshNamenodes(BlockPoolManager.java:219) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:158) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1423) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2735) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2638) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1621) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:868) at 
org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) {noformat} was (Author: xiaobingo): v5 removed the constructor BPOfferService. nameserviceId can be null for unit tests when cluster is booting when ns is not configured, in other words, nsToAdd will be null in BlockPoolManager#doRefreshNamenodes. The stack trace where getNameserviceId is called. {noformat} at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getNameserviceId(BPOfferService.java:183) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.formatThreadName(BPServiceActor.java:557) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.start(BPServiceActor.java:544) at org.apache.hadoop.hdfs.server.datanode.BPOfferService.start(BPOfferService.java:301) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager$1.run(BlockPoolManager.java:129) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.startAll(BlockPoolManager.java:124) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.doRefreshNamenodes(BlockPoolManager.java:219) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:158) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1423) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2735) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2638) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1621) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:868) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:491) at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) {noformat} > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch,
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953963#comment-15953963 ] Xiaobing Zhou commented on HDFS-11558: -- v5 removed the constructor BPOfferService. nameserviceId can be null for unit tests when cluster is booting when ns is not configured, in other words, nsToAdd will be null in BlockPoolManager#doRefreshNamenodes. The stack trace where getNameserviceId is called. {noformat} at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getNameserviceId(BPOfferService.java:183) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.formatThreadName(BPServiceActor.java:557) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.start(BPServiceActor.java:544) at org.apache.hadoop.hdfs.server.datanode.BPOfferService.start(BPOfferService.java:301) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager$1.run(BlockPoolManager.java:129) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.startAll(BlockPoolManager.java:124) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.doRefreshNamenodes(BlockPoolManager.java:219) at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:158) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1423) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2735) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2638) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1621) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:868) at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450) {noformat} > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558.005.patch > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
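The long name quoted in the issue bundles every storage directory path into the thread name. A hypothetical shortened formatter in the direction the patches take, keying the name on the NameNode address plus the optional nameservice id; the method name and output format are illustrative, not the committed HDFS-11558 code:

```java
public class ThreadNameDemo {
    /** Hypothetical short BPServiceActor thread name: identify the actor by
     *  its NameNode address, and by nameservice id when one is configured,
     *  instead of listing every storage directory. */
    static String formatThreadName(String action, String nsId, String nnAddr) {
        StringBuilder sb = new StringBuilder("BP-actor ").append(action)
            .append(" to ").append(nnAddr);
        if (nsId != null && !nsId.isEmpty()) {
            sb.append(" for nameservice ").append(nsId);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // nsId may be null when MiniDFSCluster boots without a configured
        // nameservice (see the stack trace in the comments above), so the
        // formatter must tolerate a null id.
        System.out.println(formatThreadName("heartbeating", null,
            "localhost/127.0.0.1:51772"));
    }
}
```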
[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951793#comment-15951793 ] Xiaobing Zhou commented on HDFS-11608: -- You are right [~xyao], thanks, I've made the change in place. > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951715#comment-15951715 ] Xiaobing Zhou edited comment on HDFS-11608 at 3/31/17 11:17 PM: After some debugging, it turns out it's related to Integer overflow. adjustChunkBoundary casts long to int in Math.min, resulting in one overflow (i.e. psize == -2147483648). Moreover, with the changes of computePacketChunkSize in HDFS-7308, (psize - PacketHeader.PKT_MAX_HEADER_LEN) leads to another overflow (i.e. bodySize is 2147483615 as a result of (-2147483648 - 33)), so chunksPerPacket == 4161789, packetSize == 516 * 4161789 == 2147483124, finally causing out-of-mem and invalid payload issues. Note that without HDFS-7308, Math.max(psize/chunkSize, 1) won't have another overflow, it gives out 1 which is good. the code in HDFS-7308 {code} private void computePacketChunkSize(int psize, int csize) { +final int bodySize = psize - PacketHeader.PKT_MAX_HEADER_LEN; final int chunkSize = csize + getChecksumSize(); -chunksPerPacket = Math.max(psize/chunkSize, 1); +chunksPerPacket = Math.max(bodySize/chunkSize, 1); {code} DFSOutputStream#adjustChunkBoundary {code} if (!getStreamer().getAppendChunk()) { int psize = Math.min((int)(blockSize- getStreamer().getBytesCurBlock()), dfsClient.getConf().getWritePacketSize()); computePacketChunkSize(psize, bytesPerChecksum); } {code} was (Author: xiaobingo): After some debugging, it turns out it's related to Integer overflow. adjustChunkBoundary casts long to int in Math.min, resulting in one overflow (i.e. psize == -2147483648). Moreover, with the changes of computePacketChunkSize in HDFS-7308, (psize - PacketHeader.PKT_MAX_HEADER_LEN) leads to another overflow (i.e. bodySize is 2147483615 as a result of (2147483648 - 33)), so chunksPerPacket == 4161789, packetSize == 516 * 4161789 == 2147483124, finally causing out-of-mem and invalid payload issues. 
Note that without HDFS-7308, Math.max(psize/chunkSize, 1) won't have another overflow, it gives out 1 which is good. the code in HDFS-7308 {code} private void computePacketChunkSize(int psize, int csize) { +final int bodySize = psize - PacketHeader.PKT_MAX_HEADER_LEN; final int chunkSize = csize + getChecksumSize(); -chunksPerPacket = Math.max(psize/chunkSize, 1); +chunksPerPacket = Math.max(bodySize/chunkSize, 1); {code} DFSOutputStream#adjustChunkBoundary {code} if (!getStreamer().getAppendChunk()) { int psize = Math.min((int)(blockSize- getStreamer().getBytesCurBlock()), dfsClient.getConf().getWritePacketSize()); computePacketChunkSize(psize, bytesPerChecksum); } {code} > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. 
> Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) -
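The two overflows described in the comment above can be reproduced outside HDFS with a few lines of plain Java. The constants (33-byte max packet header, 516-byte chunk-plus-checksum) mirror the 2.8.0 client and are used here only for illustration; the numbers reproduce those in the analysis (psize == -2147483648, chunksPerPacket == 4161789, packetSize == 2147483124):

```java
public class PacketSizeOverflowDemo {
    // Illustrative values mirroring the 2.8.0 client: 33-byte max packet
    // header, 512-byte chunks each followed by a 4-byte CRC32 checksum.
    static final int PKT_MAX_HEADER_LEN = 33;
    static final int CHUNK_SIZE = 512 + 4;

    /** First overflow: DFSOutputStream#adjustChunkBoundary narrows the
     *  remaining-bytes long to int before taking the min. When exactly
     *  2^31 bytes remain, the cast wraps to Integer.MIN_VALUE. */
    static int psize(long blockSize, long bytesCurBlock, int writePacketSize) {
        return Math.min((int) (blockSize - bytesCurBlock), writePacketSize);
    }

    /** Second overflow: computePacketChunkSize after HDFS-7308 subtracts the
     *  header length, wrapping Integer.MIN_VALUE - 33 to 2147483615. */
    static int packetSize(int psize) {
        int bodySize = psize - PKT_MAX_HEADER_LEN;
        int chunksPerPacket = Math.max(bodySize / CHUNK_SIZE, 1);
        return CHUNK_SIZE * chunksPerPacket;
    }

    public static void main(String[] args) {
        long blockSize = 3L << 30;      // 3 GB block, as in the bug report
        long bytesCurBlock = 1L << 30;  // after 1 GB written, 2^31 bytes remain
        int p = psize(blockSize, bytesCurBlock, 64 * 1024);
        System.out.println("psize = " + p);                   // -2147483648
        System.out.println("packetSize = " + packetSize(p));  // 2147483124
    }
}
```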
[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Status: Patch Available (was: Open) > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > 
{noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Attachment: HDFS-11608.000.patch Posted initial patch, will try to add some tests in the next. > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > Attachments: HDFS-11608.000.patch > > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951715#comment-15951715 ] Xiaobing Zhou commented on HDFS-11608: -- After some debugging, it turns out it's related to Integer overflow. adjustChunkBoundary casts long to int in Math.min, resulting in one overflow (i.e. psize == -2147483648). Moreover, with the changes of computePacketChunkSize in HDFS-7308, (psize - PacketHeader.PKT_MAX_HEADER_LEN) leads to another overflow (i.e. bodySize is 2147483615 as a result of (2147483648 - 33)), so chunksPerPacket == 4161789, packetSize == 516 * 4161789 == 2147483124, finally causing out-of-mem and invalid payload issues. Note that without HDFS-7308, Math.max(psize/chunkSize, 1) won't have another overflow, it gives out 1 which is good. the code in HDFS-7308 {code} private void computePacketChunkSize(int psize, int csize) { +final int bodySize = psize - PacketHeader.PKT_MAX_HEADER_LEN; final int chunkSize = csize + getChecksumSize(); -chunksPerPacket = Math.max(psize/chunkSize, 1); +chunksPerPacket = Math.max(bodySize/chunkSize, 1); {code} DFSOutputStream#adjustChunkBoundary {code} if (!getStreamer().getAppendChunk()) { int psize = Math.min((int)(blockSize- getStreamer().getBytesCurBlock()), dfsClient.getConf().getWritePacketSize()); computePacketChunkSize(psize, bytesPerChecksum); } {code} > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. 
After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
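For comparison with the overflow analysis above, an overflow-safe variant does the comparison in long arithmetic and narrows only after the min, which writePacketSize already bounds to an int-sized value. This is a sketch in the spirit of the fix, not the committed patch:

```java
public class SafePacketSizeDemo {
    /** Overflow-safe variant of the psize computation: compare as longs,
     *  then narrow. Because the result is capped by writePacketSize, the
     *  cast can no longer wrap. Illustrative, not the committed code. */
    static int safePsize(long blockSize, long bytesCurBlock, int writePacketSize) {
        long remaining = blockSize - bytesCurBlock;
        return (int) Math.min(remaining, (long) writePacketSize);
    }

    public static void main(String[] args) {
        // Exactly 2^31 bytes remaining no longer wraps to Integer.MIN_VALUE;
        // the 64K write packet size wins, as intended.
        System.out.println(safePsize(3L << 30, 1L << 30, 64 * 1024)); // 65536
    }
}
```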
[jira] [Commented] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951470#comment-15951470 ] Xiaobing Zhou commented on HDFS-11608: -- Posted it [~jojochuang], thanks. > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > > We've seen HDFS write crashes in the case of huge block size. For example, > writing a 3G file using 3G block size, HDFS client throws out of memory > exception. DataNode gives out IOException. After changing heap size limit, > DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe > and pipeline recovery. > Give below: > DN exception, > {noformat} > 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - > c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK > operation src: /192.168.64.101:47167 dst: /192.168.64.101:50010 > java.io.IOException: Incorrect value for packet payload size: 2147483128 > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:159) > at > org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502) > at > org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251) > at java.lang.Thread.run(Thread.java:745) > 
{noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Description: We've seen HDFS write crashes in the case of huge block size. For example, writing a 3G file using 3G block size, HDFS client throws out of memory exception. DataNode gives out IOException. After changing heap size limit, DFSOutputStream ResponseProcessor exception is seen followed by Broken pipe and pipeline recovery. Give below: Client out-of-mem exception, {noformat} 17/03/30 07:13:50 WARN hdfs.DFSClient: Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1245) at java.lang.Thread.join(Thread.java:1319) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:624) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:592) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) Exception in thread "main" java.lang.OutOfMemoryError: Java heap space at org.apache.hadoop.hdfs.util.ByteArrayManager$NewByteArrayWithoutLimit.newByteArray(ByteArrayManager.java:308) at org.apache.hadoop.hdfs.DFSOutputStream.createPacket(DFSOutputStream.java:197) at org.apache.hadoop.hdfs.DFSOutputStream.writeChunkImpl(DFSOutputStream.java:1906) at org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:1884) at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:206) at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163) at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144) at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2321) at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2303) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106) at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244) at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261) at HdfsWriterOutputStream.run(HdfsWriterOutputStream.java:57) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at HdfsWriterOutputStream.main(HdfsWriterOutputStream.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) {noformat} Client ResponseProcessor exception, {noformat} 17/03/30 18:20:12 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-1828245847-192.168.64.101-1490851685890:blk_1073741859_1040 java.io.EOFException: Premature EOF: no length prefix available at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:748) 17/03/30 18:22:32 WARN hdfs.DFSClient: DataStreamer Exception java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122) at java.io.DataOutputStream.write(DataOutputStream.java:107) at org.apache.hadoop.hdfs.DFSPacket.writeTo(DFSPacket.java:176) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:522) {noformat} DN exception, {noformat} 2017-03-30 16:34:33,828 ERROR datanode.DataNode (DataXceiver.java:run(278)) - c6401.ambari.apache.org:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.64.101:47167 dst:
[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Affects Version/s: 2.8.0 > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
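The out-of-memory stack above bottoms out in packet allocation (ByteArrayManager.newByteArray via DFSOutputStream.createPacket). A back-of-the-envelope sketch in plain Java shows why a 3 GiB block can demand roughly a block's worth of client heap; the 64 KiB packet size is an assumption here, and the helper names are illustrative, not the actual DFSOutputStream code:

```java
// Sketch of the failure mode: DFSOutputStream splits a block into packets, and
// packets that have not yet been streamed out and acked sit on the heap. With a
// 3 GiB block, the worst-case buffered data approaches the whole block.
class PacketMath {
    // Packets needed to cover one block, rounded up.
    static long packetsPerBlock(long blockSize, long packetSize) {
        return (blockSize + packetSize - 1) / packetSize;
    }

    // Worst-case bytes held in client memory if no packet has been acked yet.
    static long worstCaseBufferedBytes(long blockSize, long packetSize) {
        return packetsPerBlock(blockSize, packetSize) * packetSize;
    }
}
```

For a 3 GiB block and a 64 KiB packet, this works out to 49,152 packets, i.e. up to 3 GiB of buffered packet data, which matches a client heap smaller than the block size blowing up.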
[jira] [Updated] (HDFS-11608) HDFS write crashed in the case of huge block size
[ https://issues.apache.org/jira/browse/HDFS-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11608: - Component/s: hdfs-client > HDFS write crashed in the case of huge block size > - > > Key: HDFS-11608 > URL: https://issues.apache.org/jira/browse/HDFS-11608 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Critical > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11608) HDFS write crashed in the case of huge block size
Xiaobing Zhou created HDFS-11608: Summary: HDFS write crashed in the case of huge block size Key: HDFS-11608 URL: https://issues.apache.org/jira/browse/HDFS-11608 Project: Hadoop HDFS Issue Type: Bug Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou Priority: Critical -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-11595) Clean up all the create() overloads in DFSClient
[ https://issues.apache.org/jira/browse/HDFS-11595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou reassigned HDFS-11595: Assignee: Xiaobing Zhou > Clean up all the create() overloads in DFSClient > > > Key: HDFS-11595 > URL: https://issues.apache.org/jira/browse/HDFS-11595 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: Xiaobing Zhou > > A follow-on to the HDFS-10996 discussion. Clean up all the create() overloads in > DFSClient; they seem unnecessary, since DistributedFileSystem also has very > similar overloads for filling in defaults. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
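The cleanup idea behind HDFS-11595 can be sketched in plain Java: keep one canonical create() that takes every parameter, and let the remaining overloads be thin wrappers that only fill in defaults, so defaulting logic lives in exactly one place. The names and default values below (replication 3, 128 MiB block size) are illustrative, not the real DFSClient signatures:

```java
// Sketch: collapse many create() overloads onto one canonical method.
class CreateOverloads {
    static final int DEFAULT_REPLICATION = 3;        // assumed default
    static final long DEFAULT_BLOCK_SIZE = 128L << 20; // assumed 128 MiB default

    String create(String path) {
        return create(path, DEFAULT_REPLICATION, DEFAULT_BLOCK_SIZE);
    }

    String create(String path, int replication) {
        return create(path, replication, DEFAULT_BLOCK_SIZE);
    }

    // Canonical variant: the only method that does real work.
    String create(String path, int replication, long blockSize) {
        return path + " repl=" + replication + " blockSize=" + blockSize;
    }
}
```

With this shape, removing an overload never changes behavior of the rest, which is what makes near-duplicate overloads in two classes (DFSClient and DistributedFileSystem) look redundant.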
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558.004.patch v4 fixed some compile issues. > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15944595#comment-15944595 ] Xiaobing Zhou commented on HDFS-11558: -- v3 still keeps NN IP address and port as part of thread name. > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558.003.patch > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943889#comment-15943889 ] Xiaobing Zhou edited comment on HDFS-11558 at 3/27/17 7:20 PM: --- Posted v2. Thanks all for the reviews. Since the actor is instantiated per active or standby namenode, whose address is always available through the conf, do we need to assemble it into the actor thread name? [~arpitagarwal] Thanks. With the v2 patch, the thread name looks like: {noformat} 2017-03-27 12:11:12,548 [ heartbeating] INFO {noformat} {noformat} 2017-03-27 12:11:12,584 [BP-2084616792-10.22.6.77-1490641870531 heartbeating] {noformat} was (Author: xiaobingo): Posted v2. Thanks all for reviews. Since actor is instantiated per active or standby namenode, address of which is always available through conf. Do we need assemble it into actor thread name? [~arpitagarwal] Thanks. With v2 patch, thread name looks like: {noformat} 2017-03-27 12:11:12,548 [ heartbeating] INFO {noformat} 2017-03-27 12:11:12,584 [BP-2084616792-10.22.6.77-1490641870531 heartbeating] {noformat} {noformat} > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943889#comment-15943889 ] Xiaobing Zhou commented on HDFS-11558: -- Posted v2. Thanks all for the reviews. Since the actor is instantiated per active or standby namenode, whose address is always available through the conf, do we need to assemble it into the actor thread name? [~arpitagarwal] Thanks. With the v2 patch, the thread name looks like: {noformat} 2017-03-27 12:11:12,548 [ heartbeating] INFO {noformat} {noformat} 2017-03-27 12:11:12,584 [BP-2084616792-10.22.6.77-1490641870531 heartbeating] {noformat} > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558.002.patch > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940958#comment-15940958 ] Xiaobing Zhou commented on HDFS-11534: -- Moved it out. Good catch, thanks. > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch, > HDFS-11534.002.patch, HDFS-11534.003.patch, HDFS-11534.004.patch > > > IBRs can be sent in batches. For the sake of diagnosis, it's helpful to > understand the workload of the pending queue, e.g. how many blocks in total, and how > many blocks with status RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. The metrics can also be exposed via JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
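The counters described in HDFS-11534 amount to a per-status tally over the pending IBR queue. A minimal sketch in plain Java (the type and method names are illustrative, not the actual patch):

```java
import java.util.EnumMap;
import java.util.Map;

// Block statuses named in the issue description.
enum BlockStatus { RECEIVING_BLOCK, RECEIVED_BLOCK, DELETED_BLOCK }

// Sketch of per-status counters for blocks sitting in the pending IBR queue.
class PendingIbrCounters {
    private final Map<BlockStatus, Long> counts = new EnumMap<>(BlockStatus.class);

    void increment(BlockStatus status) {
        counts.merge(status, 1L, Long::sum);
    }

    long get(BlockStatus status) {
        return counts.getOrDefault(status, 0L);
    }

    // Total across all statuses; this is the kind of value a JMX gauge could expose.
    long total() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }
}
```

Incrementing on enqueue and resetting after each report sent would give the per-batch workload view the description asks for.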
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Attachment: HDFS-11534.004.patch > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch, > HDFS-11534.002.patch, HDFS-11534.003.patch, HDFS-11534.004.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15939784#comment-15939784 ] Xiaobing Zhou commented on HDFS-11534: -- Posted v3 that addressed your comments, thanks [~arpiagariu] > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch, > HDFS-11534.002.patch, HDFS-11534.003.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Attachment: HDFS-11534.003.patch > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch, > HDFS-11534.002.patch, HDFS-11534.003.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558.001.patch > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937306#comment-15937306 ] Xiaobing Zhou commented on HDFS-11558: -- Posted v1. Thanks [~hanishakoneru]. > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937255#comment-15937255 ] Xiaobing Zhou commented on HDFS-11558: -- Posted v0 patch. The actor thread name will be {noformat} [Block pool (Datanode Uuid unassigned) heartbeating to localhost/127.0.0.1:53006] {noformat} or {noformat} [Block pool BP-948066398-10.22.8.246-1490219911913 (Datanode Uuid 751c2964-af32-412d-8681-c3e4a2040804) heartbeating to localhost/127.0.0.1:53014] {noformat} > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
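The shortened thread names quoted above boil down to a small formatting rule: keep the block pool id (or a placeholder before registration) and the namenode address, and drop the storage-directory list. One plausible shape of that rule, as a hypothetical helper in plain Java (not the actual patch):

```java
// Sketch of a compact BPServiceActor thread name.
class ActorThreadName {
    static String of(String blockPoolId, String nnAddress) {
        // Before registration the block pool id is unknown; use a placeholder
        // (the placeholder text is an assumption, not Hadoop's wording).
        String bp = (blockPoolId == null || blockPoolId.isEmpty())
            ? "<registering>" : blockPoolId;
        return bp + " heartbeating to " + nnAddress;
    }
}
```

This keeps the name unique per (block pool, namenode) pair while staying a single short line in the logs.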
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Status: Patch Available (was: Open) > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558.000.patch > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-11558.000.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Attachment: HDFS-11534.002.patch > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch, > HDFS-11534.002.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Attachment: (was: HDFS-11534.002.patch) > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch, > HDFS-11534.002.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Attachment: HDFS-11534.002.patch v2 added JMX and test. > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch, > HDFS-11534.002.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936959#comment-15936959 ] Xiaobing Zhou commented on HDFS-11534: -- v1 adds some tests. > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Attachment: HDFS-11534.001.patch > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch, HDFS-11534.001.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11547) Add logs for slow BlockReceiver while writing data to disk
[ https://issues.apache.org/jira/browse/HDFS-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11547: - Attachment: HDFS-11547.000.patch > Add logs for slow BlockReceiver while writing data to disk > -- > > Key: HDFS-11547 > URL: https://issues.apache.org/jira/browse/HDFS-11547 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: 2.9.0 > Attachments: HDFS-11547.000.patch > > > The logs for slow BlockReceiver while writing data to disk have been removed > accidentally. They should be added back. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11547) Add logs for slow BlockReceiver while writing data to disk
[ https://issues.apache.org/jira/browse/HDFS-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15930908#comment-15930908 ] Xiaobing Zhou commented on HDFS-11547: -- Posted v0 patch. > Add logs for slow BlockReceiver while writing data to disk > -- > > Key: HDFS-11547 > URL: https://issues.apache.org/jira/browse/HDFS-11547 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: 2.9.0 > Attachments: HDFS-11547.000.patch > > > The logs for slow BlockReceiver while writing data to disk have been removed > accidentally. They should be added back. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11547) Add logs for slow BlockReceiver while writing data to disk
[ https://issues.apache.org/jira/browse/HDFS-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11547: - Status: Patch Available (was: Open) > Add logs for slow BlockReceiver while writing data to disk > -- > > Key: HDFS-11547 > URL: https://issues.apache.org/jira/browse/HDFS-11547 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: 2.9.0 > Attachments: HDFS-11547.000.patch > > > The logs for slow BlockReceiver while writing data to disk have been removed > accidentally. They should be added back. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11547) Add logs for slow BlockReceiver while writing data to disk
[ https://issues.apache.org/jira/browse/HDFS-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11547: - Summary: Add logs for slow BlockReceiver while writing data to disk (was: Restore logs for slow BlockReceiver while writing data to disk) > Add logs for slow BlockReceiver while writing data to disk > -- > > Key: HDFS-11547 > URL: https://issues.apache.org/jira/browse/HDFS-11547 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: 2.9.0 > > The logs for slow BlockReceiver while writing data to disk have been removed > accidentally. They should be added back. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11547) Restore logs for slow BlockReceiver while writing data to disk
[ https://issues.apache.org/jira/browse/HDFS-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11547: - Component/s: datanode > Restore logs for slow BlockReceiver while writing data to disk > -- > > Key: HDFS-11547 > URL: https://issues.apache.org/jira/browse/HDFS-11547 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: 2.9.0 > > The logs for slow BlockReceiver while writing data to disk have been removed > accidentally. They should be added back. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11547) Restore logs for slow BlockReceiver while writing data to disk
[ https://issues.apache.org/jira/browse/HDFS-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11547: - Labels: 2.9.0 (was: ) > Restore logs for slow BlockReceiver while writing data to disk > -- > > Key: HDFS-11547 > URL: https://issues.apache.org/jira/browse/HDFS-11547 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Labels: 2.9.0 > > The logs for slow BlockReceiver while writing data to disk have been removed > accidentally. They should be added back. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11547) Restore logs for slow BlockReceiver while writing data to disk
Xiaobing Zhou created HDFS-11547: Summary: Restore logs for slow BlockReceiver while writing data to disk Key: HDFS-11547 URL: https://issues.apache.org/jira/browse/HDFS-11547 Project: Hadoop HDFS Issue Type: Improvement Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou The logs for slow BlockReceiver while writing data to disk have been removed accidentally. They should be added back. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929082#comment-15929082 ] Xiaobing Zhou commented on HDFS-10394: -- v2 addressed the comments, thanks [~arpitagarwal] > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch, > HDFS-10394.002.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version inline in that POM. > The root declaration, including the version, should go into hadoop-project/pom.xml so that usage is easy to track and there is only one place to change if the version is ever incremented. As it stands, if any other module picked up the library, it could adopt a different version.
[jira] [Updated] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10394: - Attachment: HDFS-10394.002.patch > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch, > HDFS-10394.002.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version in > that POM instead. > the root declaration, including version, must go into the > hadoop-project/pom.xml so that its easy to track use and have only one place > if this version were ever to be incremented. As it stands, if any other > module picked up the library, they could adopt a different version. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-2921) HA: HA docs need to cover decomissioning
[ https://issues.apache.org/jira/browse/HDFS-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou reassigned HDFS-2921: --- Assignee: Xiaobing Zhou > HA: HA docs need to cover decomissioning > > > Key: HDFS-2921 > URL: https://issues.apache.org/jira/browse/HDFS-2921 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation, ha >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Xiaobing Zhou > > We need to cover decomissioning in the HA docs as is done in the [federation > decomissioning > docs|http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html#Decommissioning]. > The same process should apply, we need to refresh all the namenodes (same > commands should work). -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928851#comment-15928851 ] Xiaobing Zhou commented on HDFS-10394: -- Not sure what happened to the previous build. I posted the v1 patch to trigger a new build. > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version inline in that POM. > The root declaration, including the version, should go into hadoop-project/pom.xml so that usage is easy to track and there is only one place to change if the version is ever incremented. As it stands, if any other module picked up the library, it could adopt a different version.
[jira] [Updated] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM
[ https://issues.apache.org/jira/browse/HDFS-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10394: - Attachment: HDFS-10394.001.patch > move declaration of okhttp version from hdfs-client to hadoop-project POM > - > > Key: HDFS-10394 > URL: https://issues.apache.org/jira/browse/HDFS-10394 > Project: Hadoop HDFS > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10394.000.patch, HDFS-10394.001.patch > > > The POM dependency on okhttp in hadoop-hdfs-client declares its version in > that POM instead. > the root declaration, including version, must go into the > hadoop-project/pom.xml so that its easy to track use and have only one place > if this version were ever to be incremented. As it stands, if any other > module picked up the library, they could adopt a different version. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
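As a sketch of the change this issue describes, the okhttp version would be declared once in the `dependencyManagement` section of hadoop-project/pom.xml, so child modules such as hadoop-hdfs-client reference the dependency without a version. The coordinates and version number below are illustrative, not taken from the actual patch:

```xml
<!-- hadoop-project/pom.xml (sketch): single point of truth for the version -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.squareup.okhttp</groupId>
      <artifactId>okhttp</artifactId>
      <version>2.4.0</version> <!-- illustrative version -->
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- hadoop-hdfs-client/pom.xml (sketch): no version element; it is inherited -->
<dependencies>
  <dependency>
    <groupId>com.squareup.okhttp</groupId>
    <artifactId>okhttp</artifactId>
  </dependency>
</dependencies>
```

Any other module that later picks up the library would then inherit the same version automatically.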
[jira] [Commented] (HDFS-11170) Add create API in filesystem public class to support assign parameter through builder
[ https://issues.apache.org/jira/browse/HDFS-11170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15927221#comment-15927221 ] Xiaobing Zhou commented on HDFS-11170: -- Thanks [~zhouwei] for the work. Some comments: # Can we change the name (i.e. CreateBuilder) to something more self-descriptive, e.g. PathCreateBuilder? # An XXXBuilder#build usually returns the thing being built (an XXX), but CreateBuilder#build returns the CreateBuilder itself, which is a conceptual mismatch. # Can you add comments to the public method FileSystem#create? > Add create API in filesystem public class to support assign parameter through > builder > - > > Key: HDFS-11170 > URL: https://issues.apache.org/jira/browse/HDFS-11170 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: Wei Zhou > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-11170-00.patch, HDFS-11170-01.patch, > HDFS-11170-02.patch, HDFS-11170-03.patch > > > The FileSystem class supports multiple create functions to help users create files. > Some create functions have many parameters; it's hard for users to remember these parameters and their order. This task adds builder-based create functions to help users create files more easily.
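To illustrate the review point that build() should return the constructed object rather than the builder, here is a minimal, hypothetical builder sketch. PathCreateBuilder, its parameters, and the String stand-in for the created result are all illustrative; this is not the API proposed in the patch:

```java
// Hypothetical builder sketch: fluent setters return the builder,
// while build() hands back the created result, not the builder itself.
public class PathCreateBuilder {
    private String path;
    private short replication = 3;                 // illustrative default
    private long blockSize = 128L * 1024 * 1024;   // illustrative default

    public PathCreateBuilder path(String path) { this.path = path; return this; }
    public PathCreateBuilder replication(short r) { this.replication = r; return this; }
    public PathCreateBuilder blockSize(long b) { this.blockSize = b; return this; }

    // build() returns the thing being built (a String stands in here for
    // whatever the real API would create, e.g. an output stream).
    public String build() {
        return "created " + path + " repl=" + replication + " blk=" + blockSize;
    }
}
```

Usage would read as one chained expression, so callers no longer need to remember positional parameter order: `new PathCreateBuilder().path("/tmp/f").replication((short) 2).build()`.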
[jira] [Updated] (HDFS-11536) Throttle DataNode slow BlockReceiver warnings
[ https://issues.apache.org/jira/browse/HDFS-11536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11536: - Description: There are a couple of warnings for every packet when the peer is slow. See also BlockReceiver#datanodeSlowLogThresholdMs. In order to reduce verbosity, this proposes logging the warning once per BlockReceiver the first time it occurs, incrementing a counter for subsequent occurrences, and logging a summary when the BlockReceiver reaches the end of its life cycle. was: There are too many warnings when peer is slow. See also BlockReceiver#datanodeSlowLogThresholdMs. This proposes logging the warning per BlockReceiver in the first time, increasing counter, logging it again when BlockReceiver runs out of life cycle. > Throttle DataNode slow BlockReceiver warnings > - > > Key: HDFS-11536 > URL: https://issues.apache.org/jira/browse/HDFS-11536 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > There are a couple of warnings for every packet when the peer is slow. See also > BlockReceiver#datanodeSlowLogThresholdMs. In order to reduce verbosity, > this proposes logging the warning once per BlockReceiver the first time it occurs, > incrementing a counter for subsequent occurrences, and logging a summary when the > BlockReceiver reaches the end of its life cycle.
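The proposed throttling scheme (warn on the first slow packet, count the rest, summarize when the receiver closes) can be sketched roughly as follows. The class and method names are hypothetical and this is not the actual patch:

```java
// Hedged sketch of per-receiver warning throttling: each BlockReceiver
// would own one instance; only the first slow packet produces a warning,
// and the suppressed count is reported once at close time.
public class SlowWarningThrottle {
    private long slowPacketCount = 0;

    // Called for each packet whose write exceeded the slow-log threshold.
    // Returns a warning message for the first occurrence, null otherwise.
    public String recordSlowPacket(long elapsedMs, long thresholdMs) {
        slowPacketCount++;
        if (slowPacketCount == 1) {
            return "Slow BlockReceiver write: " + elapsedMs
                + "ms (threshold=" + thresholdMs + "ms)";
        }
        return null; // suppressed; counted for the closing summary
    }

    // Called once when the BlockReceiver's life cycle ends.
    public String summary() {
        return slowPacketCount > 1
            ? "Suppressed " + (slowPacketCount - 1) + " further slow-write warnings"
            : null;
    }
}
```

This keeps the log volume at two lines per slow receiver instead of one or more lines per slow packet.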
[jira] [Updated] (HDFS-11536) Throttle DataNode slow BlockReceiver warnings
[ https://issues.apache.org/jira/browse/HDFS-11536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11536: - Description: There are too many warnings when peer is slow. See also BlockReceiver#datanodeSlowLogThresholdMs. This proposes logging the warning per BlockReceiver in the first time, increasing counter, logging it again when BlockReceiver runs out of life cycle. > Throttle DataNode slow BlockReceiver warnings > - > > Key: HDFS-11536 > URL: https://issues.apache.org/jira/browse/HDFS-11536 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > There are too many warnings when peer is slow. See also > BlockReceiver#datanodeSlowLogThresholdMs. > This proposes logging the warning per BlockReceiver in the first time, > increasing counter, logging it again when BlockReceiver runs out of life > cycle. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11536) Throttle DataNode slow BlockReceiver warnings
[ https://issues.apache.org/jira/browse/HDFS-11536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11536: - Component/s: hdfs > Throttle DataNode slow BlockReceiver warnings > - > > Key: HDFS-11536 > URL: https://issues.apache.org/jira/browse/HDFS-11536 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11536) Throttle DataNode slow BlockReceiver warnings
[ https://issues.apache.org/jira/browse/HDFS-11536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11536: - Summary: Throttle DataNode slow BlockReceiver warnings (was: Suppress logs in BlockReceiver when peer is slow) > Throttle DataNode slow BlockReceiver warnings > - > > Key: HDFS-11536 > URL: https://issues.apache.org/jira/browse/HDFS-11536 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11536) Suppress logs in BlockReceiver when peer is slow
Xiaobing Zhou created HDFS-11536: Summary: Suppress logs in BlockReceiver when peer is slow Key: HDFS-11536 URL: https://issues.apache.org/jira/browse/HDFS-11536 Project: Hadoop HDFS Issue Type: Improvement Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Status: Patch Available (was: Open) > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Attachment: HDFS-11534.000.patch > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11534.000.patch > > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-11534) Add counters for number of blocks in pending IBR
Xiaobing Zhou created HDFS-11534: Summary: Add counters for number of blocks in pending IBR Key: HDFS-11534 URL: https://issues.apache.org/jira/browse/HDFS-11534 Project: Hadoop HDFS Issue Type: Improvement Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou IBRs can be sent in batches. For diagnosis, it's helpful to understand the workload of the pending queue, e.g. how many blocks in total, and how many blocks in RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK status, respectively. The metrics can also be exposed via JMX.
[jira] [Updated] (HDFS-11534) Add counters for number of blocks in pending IBR
[ https://issues.apache.org/jira/browse/HDFS-11534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11534: - Component/s: hdfs > Add counters for number of blocks in pending IBR > > > Key: HDFS-11534 > URL: https://issues.apache.org/jira/browse/HDFS-11534 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > IBR can be sent in batch. For the sake of diagnosis, it's helpful to > understand work load of pending queue, e.g. how many blocks as total, how > many blocks in status of RECEIVING_BLOCK, RECEIVED_BLOCK, and DELETED_BLOCK, > respectively. Also the metrics can be exposed to JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
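A rough sketch of the per-status counters described above. The enum values mirror the statuses named in the issue; the class and method names are illustrative and not taken from the patch:

```java
import java.util.EnumMap;
import java.util.Map;

// Hedged sketch of pending-IBR counters: one counter per block status
// plus a derived total, in a shape suitable for export via JMX.
public class PendingIBRCounters {
    public enum BlockStatus { RECEIVING_BLOCK, RECEIVED_BLOCK, DELETED_BLOCK }

    private final Map<BlockStatus, Long> counts = new EnumMap<>(BlockStatus.class);

    // Invoked when a block is queued into the pending IBR.
    public void onBlockQueued(BlockStatus status) {
        counts.merge(status, 1L, Long::sum);
    }

    // Total number of blocks across all statuses.
    public long total() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    // Count for a single status.
    public long count(BlockStatus status) {
        return counts.getOrDefault(status, 0L);
    }
}
```

In the real implementation the counters would presumably need to be reset when a batched report is sent, and read under whatever synchronization the pending queue already uses.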
[jira] [Commented] (HDFS-11476) Fix NPE in FsDatasetImpl#checkAndUpdate
[ https://issues.apache.org/jira/browse/HDFS-11476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894735#comment-15894735 ] Xiaobing Zhou commented on HDFS-11476: -- The test failures are not related. > Fix NPE in FsDatasetImpl#checkAndUpdate > --- > > Key: HDFS-11476 > URL: https://issues.apache.org/jira/browse/HDFS-11476 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11476.000.patch, HDFS-11476.001.patch, > HDFS-11476.002.patch, HDFS-11476.003.patch > > > diskMetaFile can be null when passed to compareTo, which dereferences it, causing an NPE > {code} > // Compare generation stamp > if (memBlockInfo.getGenerationStamp() != diskGS) { > File memMetaFile = FsDatasetUtil.getMetaFile(diskFile, > memBlockInfo.getGenerationStamp()); > if (memMetaFile.exists()) { > if (memMetaFile.compareTo(diskMetaFile) != 0) { > LOG.warn("Metadata file in memory " > + memMetaFile.getAbsolutePath() > + " does not match file found by scan " > + (diskMetaFile == null? null: > diskMetaFile.getAbsolutePath())); > } > } else { > {code}
[jira] [Updated] (HDFS-11476) Fix NPE in FsDatasetImpl#checkAndUpdate
[ https://issues.apache.org/jira/browse/HDFS-11476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11476: - Attachment: (was: HDFS-11476.003.patch) > Fix NPE in FsDatasetImpl#checkAndUpdate > --- > > Key: HDFS-11476 > URL: https://issues.apache.org/jira/browse/HDFS-11476 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11476.000.patch, HDFS-11476.001.patch, > HDFS-11476.002.patch, HDFS-11476.003.patch > > > diskMetaFile can be null and passed to compareTo which dereferences it, > causing NPE > {code} > // Compare generation stamp > if (memBlockInfo.getGenerationStamp() != diskGS) { > File memMetaFile = FsDatasetUtil.getMetaFile(diskFile, > memBlockInfo.getGenerationStamp()); > if (memMetaFile.exists()) { > if (memMetaFile.compareTo(diskMetaFile) != 0) { > LOG.warn("Metadata file in memory " > + memMetaFile.getAbsolutePath() > + " does not match file found by scan " > + (diskMetaFile == null? null: > diskMetaFile.getAbsolutePath())); > } > } else { > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11476) Fix NPE in FsDatasetImpl#checkAndUpdate
[ https://issues.apache.org/jira/browse/HDFS-11476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11476: - Attachment: HDFS-11476.003.patch > Fix NPE in FsDatasetImpl#checkAndUpdate > --- > > Key: HDFS-11476 > URL: https://issues.apache.org/jira/browse/HDFS-11476 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-11476.000.patch, HDFS-11476.001.patch, > HDFS-11476.002.patch, HDFS-11476.003.patch > > > diskMetaFile can be null and passed to compareTo which dereferences it, > causing NPE > {code} > // Compare generation stamp > if (memBlockInfo.getGenerationStamp() != diskGS) { > File memMetaFile = FsDatasetUtil.getMetaFile(diskFile, > memBlockInfo.getGenerationStamp()); > if (memMetaFile.exists()) { > if (memMetaFile.compareTo(diskMetaFile) != 0) { > LOG.warn("Metadata file in memory " > + memMetaFile.getAbsolutePath() > + " does not match file found by scan " > + (diskMetaFile == null? null: > diskMetaFile.getAbsolutePath())); > } > } else { > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
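A minimal sketch of the null guard the issue calls for: test diskMetaFile for null before File#compareTo can dereference it. This reproduces only the comparison logic from the quoted snippet; the class and method wrapper are hypothetical, not the actual patch:

```java
import java.io.File;

// Hedged sketch of the fix: a null diskMetaFile is treated as a mismatch
// and reported, instead of crashing inside File#compareTo.
public class MetaFileCheck {
    // Returns a warning message when the in-memory meta file does not match
    // the one found by the scan (including the null case), or null if they match.
    public static String mismatchWarning(File memMetaFile, File diskMetaFile) {
        // Guard first: compareTo would throw an NPE on a null argument.
        if (diskMetaFile == null || memMetaFile.compareTo(diskMetaFile) != 0) {
            return "Metadata file in memory " + memMetaFile.getAbsolutePath()
                + " does not match file found by scan "
                + (diskMetaFile == null ? null : diskMetaFile.getAbsolutePath());
        }
        return null;
    }
}
```

Note that the original snippet already null-checks diskMetaFile inside the log message but not in the compareTo call that precedes it, which is exactly the path that triggers the NPE.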