[jira] [Resolved] (HDFS-17554) OIV: Print the storage policy name in OIV delimited output
[ https://issues.apache.org/jira/browse/HDFS-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang resolved HDFS-17554.
----------------------------------
    Resolution: Not A Problem

> OIV: Print the storage policy name in OIV delimited output
> ----------------------------------------------------------
>
>                 Key: HDFS-17554
>                 URL: https://issues.apache.org/jira/browse/HDFS-17554
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: tools
>    Affects Versions: 3.5.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>
> Refers to adding the storage policy name to the OIV delimited output instead of the
> erasure coding policy.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-17554) OIV: Print the storage policy name in OIV delimited output
Hualong Zhang created HDFS-17554:
------------------------------------

             Summary: OIV: Print the storage policy name in OIV delimited output
                 Key: HDFS-17554
                 URL: https://issues.apache.org/jira/browse/HDFS-17554
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: tools
    Affects Versions: 3.5.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang

Refers to adding the storage policy name to the OIV delimited output instead of the erasure coding policy.
[jira] [Created] (HDFS-17525) Router web interfaces missing X-FRAME-OPTIONS security configurations
Hualong Zhang created HDFS-17525:
------------------------------------

             Summary: Router web interfaces missing X-FRAME-OPTIONS security configurations
                 Key: HDFS-17525
                 URL: https://issues.apache.org/jira/browse/HDFS-17525
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: router
    Affects Versions: 3.4.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang

Router web interfaces are missing the X-FRAME-OPTIONS security configurations; we should add them.
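For context, the NameNode and DataNode HTTP servers already read their clickjacking protection from hdfs-site.xml via the two keys sketched below; whether the Router HTTP server honors the same settings is precisely what this issue raises, so treat the Router behavior as unconfirmed:

```xml
<!-- hdfs-site.xml: X-FRAME-OPTIONS handling for the HDFS web UIs -->
<property>
  <name>dfs.xframe.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.xframe.value</name>
  <!-- One of DENY, SAMEORIGIN or ALLOW-FROM -->
  <value>SAMEORIGIN</value>
</property>
```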
[jira] [Commented] (HDFS-17146) Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.
[ https://issues.apache.org/jira/browse/HDFS-17146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820292#comment-17820292 ]

Hualong Zhang commented on HDFS-17146:
--------------------------------------

[~ste...@apache.org] Apologies for the delayed response. Thank you very much for your suggestion! I will submit a new PR to improve this part.

> Use the dfsadmin -reconfig command to initiate reconfiguration on all
> decommissioning datanodes.
> ---------------------------------------------------------------------
>
>                 Key: HDFS-17146
>                 URL: https://issues.apache.org/jira/browse/HDFS-17146
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: dfsadmin, hdfs
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.1, 3.5.0
>
> If the *DFSAdmin* command could have the ability to perform bulk operations
> across all decommissioning datanodes, that would be highly advantageous.
[jira] [Updated] (HDFS-17367) Add PercentUsed for Different StorageTypes in JMX
[ https://issues.apache.org/jira/browse/HDFS-17367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17367:
---------------------------------
    Target Version/s: 3.5.0

> Add PercentUsed for Different StorageTypes in JMX
> -------------------------------------------------
>
>                 Key: HDFS-17367
>                 URL: https://issues.apache.org/jira/browse/HDFS-17367
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: metrics, namenode
>    Affects Versions: 3.5.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>
> Currently, the NameNode only displays PercentUsed for the entire cluster. We
> plan to add corresponding PercentUsed metrics for different StorageTypes.
[jira] [Updated] (HDFS-17367) Add PercentUsed for Different StorageTypes in JMX
[ https://issues.apache.org/jira/browse/HDFS-17367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17367:
---------------------------------
    Component/s: metrics

> Add PercentUsed for Different StorageTypes in JMX
> -------------------------------------------------
>
>                 Key: HDFS-17367
>                 URL: https://issues.apache.org/jira/browse/HDFS-17367
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: metrics, namenode
>    Affects Versions: 3.5.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>
> Currently, the NameNode only displays PercentUsed for the entire cluster. We
> plan to add corresponding PercentUsed metrics for different StorageTypes.
[jira] [Created] (HDFS-17367) Add PercentUsed for Different StorageTypes in JMX
Hualong Zhang created HDFS-17367:
------------------------------------

             Summary: Add PercentUsed for Different StorageTypes in JMX
                 Key: HDFS-17367
                 URL: https://issues.apache.org/jira/browse/HDFS-17367
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
    Affects Versions: 3.5.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang

Currently, the NameNode only displays PercentUsed for the entire cluster. We plan to add corresponding PercentUsed metrics for different StorageTypes.
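The metric proposed above amounts to computing used/capacity as a percentage per StorageType rather than only cluster-wide. A minimal, self-contained sketch of that aggregation follows; the class and method names are illustrative and are not the actual NameNode metrics code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative per-storage-type PercentUsed aggregation; not the real NameNode code. */
public class StorageTypePercentUsed {

    /** Returns used/capacity as a percentage, guarding against zero capacity. */
    static double percentUsed(long usedBytes, long capacityBytes) {
        return capacityBytes <= 0 ? 0.0 : usedBytes * 100.0 / capacityBytes;
    }

    /** Folds per-type {used, capacity} totals into one percentage per storage type. */
    static Map<String, Double> percentUsedByType(Map<String, long[]> typeToUsedCapacity) {
        Map<String, Double> result = new LinkedHashMap<>();
        for (Map.Entry<String, long[]> e : typeToUsedCapacity.entrySet()) {
            long[] uc = e.getValue();
            result.put(e.getKey(), percentUsed(uc[0], uc[1]));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, long[]> totals = new LinkedHashMap<>();
        totals.put("DISK", new long[] {50L << 30, 200L << 30}); // 50 GiB used of 200 GiB
        totals.put("SSD",  new long[] {30L << 30, 40L << 30});  // 30 GiB used of 40 GiB
        System.out.println(percentUsedByType(totals));          // {DISK=25.0, SSD=75.0}
    }
}
```

The cluster-wide PercentUsed the NameNode already exposes is the same formula applied to the summed totals; the change is only in keying the sums by StorageType.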
[jira] [Created] (HDFS-17284) Fix int overflow in calculating numEcReplicatedTasks and numReplicationTasks during block recovery
Hualong Zhang created HDFS-17284:
------------------------------------

             Summary: Fix int overflow in calculating numEcReplicatedTasks and numReplicationTasks during block recovery
                 Key: HDFS-17284
                 URL: https://issues.apache.org/jira/browse/HDFS-17284
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang

Fix int overflow in calculating numEcReplicatedTasks and numReplicationTasks during block recovery.
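The failure mode named in this issue is the standard Java pattern where two int operands are combined in 32-bit arithmetic before being assigned to a wider target, so the intermediate result wraps. A sketch of the bug shape and the usual fix; the names echo the issue title but the code is illustrative, not the actual BlockManager logic:

```java
/** Illustrative demo of the 32-bit overflow pattern named in HDFS-17284; not the actual HDFS code. */
public class ReplicationTaskOverflow {

    /** Buggy: the multiplication happens in int arithmetic and wraps before widening. */
    static long numTasksBuggy(int maxTransfers, int multiplier) {
        return maxTransfers * multiplier; // evaluated as int, then widened too late
    }

    /** Fixed: widen one operand first so the multiplication happens in 64-bit arithmetic. */
    static long numTasksFixed(int maxTransfers, int multiplier) {
        return (long) maxTransfers * multiplier;
    }

    public static void main(String[] args) {
        int big = 1 << 20;                            // 1,048,576
        System.out.println(numTasksBuggy(big, big));  // 2^40 wraps to 0 in int arithmetic
        System.out.println(numTasksFixed(big, big));  // 1099511627776
    }
}
```

`Math.multiplyExact(long, long)` is an alternative when overflow should fail loudly instead of being avoided by widening.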
[jira] [Created] (HDFS-17180) HttpFS Add Support getTrashRoots API
Hualong Zhang created HDFS-17180:
------------------------------------

             Summary: HttpFS Add Support getTrashRoots API
                 Key: HDFS-17180
                 URL: https://issues.apache.org/jira/browse/HDFS-17180
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: webhdfs
    Affects Versions: 3.4.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang

We should ensure that HttpFS remains synchronized with WebHDFS, as the latter has already implemented the *getTrashRoots* interface.
[jira] [Assigned] (HDFS-16799) The dn space size is not consistent, and Balancer can not work, resulting in a very unbalanced space
[ https://issues.apache.org/jira/browse/HDFS-16799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang reassigned HDFS-16799:
------------------------------------

    Assignee: (was: Hualong Zhang)

> The dn space size is not consistent, and Balancer can not work, resulting in
> a very unbalanced space
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-16799
>                 URL: https://issues.apache.org/jira/browse/HDFS-16799
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.1.0
>            Reporter: ruiliang
>            Priority: Blocker
>
> {code:java}
> echo 'A DFS Used 99.8% to ip' > sorucehost
> hdfs --debug balancer -fs hdfs://xxcluster06 -threshold 10 -source -f sorucehost
>
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.243:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.247:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-15-10/10.12.65.214:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-02-08/10.12.14.8:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-13/10.12.15.154:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-04/10.12.65.218:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.143:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-05/10.12.12.200:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.217:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.142:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.246:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.219:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.147:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-15-10/10.12.65.186:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-13/10.12.15.153:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-03-07/10.12.19.23:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-04-14/10.12.65.119:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.131:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-04/10.12.12.210:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-11/10.12.14.168:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.245:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-03-02/10.12.17.26:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.241:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-13/10.12.15.152:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.249:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-07-14/10.12.64.71:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-03-03/10.12.17.35:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.195:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.242:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.248:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.240:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-15-12/10.12.65.196:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-13/10.12.15.150:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.222:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.145:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.244:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-03-07/10.12.19.22:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.221:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.136:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.129:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-15/10.12.15.163:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node:
[jira] [Assigned] (HDFS-16799) The dn space size is not consistent, and Balancer can not work, resulting in a very unbalanced space
[ https://issues.apache.org/jira/browse/HDFS-16799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang reassigned HDFS-16799:
------------------------------------

    Assignee: Hualong Zhang

> The dn space size is not consistent, and Balancer can not work, resulting in
> a very unbalanced space
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-16799
>                 URL: https://issues.apache.org/jira/browse/HDFS-16799
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.1.0
>            Reporter: ruiliang
>            Assignee: Hualong Zhang
>            Priority: Blocker
>
> {code:java}
> echo 'A DFS Used 99.8% to ip' > sorucehost
> hdfs --debug balancer -fs hdfs://xxcluster06 -threshold 10 -source -f sorucehost
>
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.243:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.247:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-15-10/10.12.65.214:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-02-08/10.12.14.8:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-13/10.12.15.154:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-04/10.12.65.218:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.143:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-05/10.12.12.200:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.217:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.142:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.246:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.219:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.147:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-15-10/10.12.65.186:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-13/10.12.15.153:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-03-07/10.12.19.23:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-04-14/10.12.65.119:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.131:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-04/10.12.12.210:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-11/10.12.14.168:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.245:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-03-02/10.12.17.26:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.241:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-13/10.12.15.152:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.249:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-07-14/10.12.64.71:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-03-03/10.12.17.35:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.195:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.242:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.248:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.240:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-15-12/10.12.65.196:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-13/10.12.15.150:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.222:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.145:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-01-08/10.12.65.244:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-03-07/10.12.19.22:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.221:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.136:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-12-03/10.12.65.129:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding a new node: /4F08-05-15/10.12.15.163:1019
> 22/10/09 16:43:52 INFO net.NetworkTopology: Adding
[jira] [Commented] (HDFS-17168) Support getTrashRoots API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-17168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761521#comment-17761521 ]

Hualong Zhang commented on HDFS-17168:
--------------------------------------

[~slfan1989] Thank you for your assistance in reviewing the code!

> Support getTrashRoots API in WebHDFS
> ------------------------------------
>
>                 Key: HDFS-17168
>                 URL: https://issues.apache.org/jira/browse/HDFS-17168
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: webhdfs
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>         Attachments: image-2023-08-26-23-13-42-426.png
>
> WebHDFS should support getTrashRoots:
> !image-2023-08-26-23-13-42-426.png|width=686,height=204!
[jira] [Created] (HDFS-17168) Support getTrashRoots API in WebHDFS
Hualong Zhang created HDFS-17168:
------------------------------------

             Summary: Support getTrashRoots API in WebHDFS
                 Key: HDFS-17168
                 URL: https://issues.apache.org/jira/browse/HDFS-17168
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: webhdfs
    Affects Versions: 3.4.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang
         Attachments: image-2023-08-26-23-13-42-426.png

WebHDFS should support getTrashRoots:
!image-2023-08-26-23-13-42-426.png|width=686,height=204!
[jira] [Created] (HDFS-17146) Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.
Hualong Zhang created HDFS-17146:
------------------------------------

             Summary: Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.
                 Key: HDFS-17146
                 URL: https://issues.apache.org/jira/browse/HDFS-17146
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs
    Affects Versions: 3.4.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang

If the *DFSAdmin* command could have the ability to perform bulk operations across all decommissioning datanodes, that would be highly advantageous.
[jira] [Updated] (HDFS-17122) Rectify the table length discrepancy in the DataNode UI.
[ https://issues.apache.org/jira/browse/HDFS-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17122:
---------------------------------
    Description:
The hidden column settings in *table-datanodes.dataTable* have caused an error in the calculation of table length in {*}dataTable{*}.

!image-2023-07-25-18-12-10-582.png|width=798,height=318!

  was: !image-2023-07-25-18-12-10-582.png|width=798,height=318!

> Rectify the table length discrepancy in the DataNode UI.
> --------------------------------------------------------
>
>                 Key: HDFS-17122
>                 URL: https://issues.apache.org/jira/browse/HDFS-17122
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>         Attachments: image-2023-07-25-18-12-10-582.png
>
> The hidden column settings in *table-datanodes.dataTable* have caused an
> error in the calculation of table length in {*}dataTable{*}.
> !image-2023-07-25-18-12-10-582.png|width=798,height=318!
[jira] [Commented] (HDFS-17115) HttpFS Add Support getErasureCodeCodecs API
[ https://issues.apache.org/jira/browse/HDFS-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17747740#comment-17747740 ]

Hualong Zhang commented on HDFS-17115:
--------------------------------------

[~ayushtkn] [~slfan1989] Thank you for your assistance in reviewing the code!

> HttpFS Add Support getErasureCodeCodecs API
> -------------------------------------------
>
>                 Key: HDFS-17115
>                 URL: https://issues.apache.org/jira/browse/HDFS-17115
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: httpfs
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
> We should ensure that *HttpFS* remains synchronized with {*}WebHDFS{*}, as
> the latter has already implemented the *getErasureCodeCodecs* interface.
[jira] [Updated] (HDFS-17122) Rectify the table length discrepancy in the DataNode UI.
[ https://issues.apache.org/jira/browse/HDFS-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17122:
---------------------------------
    Description:
!image-2023-07-25-18-12-10-582.png|width=798,height=318!

  was: !image-2023-07-25-18-12-10-582.png|width=580,height=231!

> Rectify the table length discrepancy in the DataNode UI.
> --------------------------------------------------------
>
>                 Key: HDFS-17122
>                 URL: https://issues.apache.org/jira/browse/HDFS-17122
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>         Attachments: image-2023-07-25-18-12-10-582.png
>
> !image-2023-07-25-18-12-10-582.png|width=798,height=318!
[jira] [Created] (HDFS-17122) Rectify the table length discrepancy in the DataNode UI.
Hualong Zhang created HDFS-17122:
------------------------------------

             Summary: Rectify the table length discrepancy in the DataNode UI.
                 Key: HDFS-17122
                 URL: https://issues.apache.org/jira/browse/HDFS-17122
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.4.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang
         Attachments: image-2023-07-25-18-12-10-582.png

!image-2023-07-25-18-12-10-582.png|width=580,height=231!
[jira] [Created] (HDFS-17115) HttpFS Add Support getErasureCodeCodecs API
Hualong Zhang created HDFS-17115:
------------------------------------

             Summary: HttpFS Add Support getErasureCodeCodecs API
                 Key: HDFS-17115
                 URL: https://issues.apache.org/jira/browse/HDFS-17115
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: httpfs
    Affects Versions: 3.4.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang

We should ensure that *HttpFS* remains synchronized with {*}WebHDFS{*}, as the latter has already implemented the *getErasureCodeCodecs* interface.
[jira] [Updated] (HDFS-17083) Support getErasureCodeCodecs API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17083:
---------------------------------
    Description:
WebHDFS should support getErasureCodeCodecs:
!image-2023-07-12-22-52-15-954.png|width=799,height=210!

  was:
WebHDFS should support getErasureCodeCodecs:
!image-2023-07-12-22-52-15-954.png|width=643,height=169!

> Support getErasureCodeCodecs API in WebHDFS
> -------------------------------------------
>
>                 Key: HDFS-17083
>                 URL: https://issues.apache.org/jira/browse/HDFS-17083
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: webhdfs
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>         Attachments: image-2023-07-12-22-52-15-954.png
>
> WebHDFS should support getErasureCodeCodecs:
> !image-2023-07-12-22-52-15-954.png|width=799,height=210!
[jira] [Created] (HDFS-17083) Support getErasureCodeCodecs API in WebHDFS
Hualong Zhang created HDFS-17083:
------------------------------------

             Summary: Support getErasureCodeCodecs API in WebHDFS
                 Key: HDFS-17083
                 URL: https://issues.apache.org/jira/browse/HDFS-17083
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: webhdfs
    Affects Versions: 3.4.0
            Reporter: Hualong Zhang
            Assignee: Hualong Zhang
         Attachments: image-2023-07-12-22-52-15-954.png

WebHDFS should support getErasureCodeCodecs:
!image-2023-07-12-22-52-15-954.png|width=643,height=169!
[jira] [Updated] (HDFS-17052) Improve BlockPlacementPolicyRackFaultTolerant to avoid choose nodes failed when no enough Rack.
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17052:
---------------------------------
    Component/s: namanode
                     (was: ec)
        Summary: Improve BlockPlacementPolicyRackFaultTolerant to avoid choose nodes failed when no enough Rack.  (was: Erasure coding reconstruction failed when num of storageType rack NOT enough)

> Improve BlockPlacementPolicyRackFaultTolerant to avoid choose nodes failed
> when no enough Rack.
> --------------------------------------------------------------------------
>
>                 Key: HDFS-17052
>                 URL: https://issues.apache.org/jira/browse/HDFS-17052
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namanode
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: failed reconstruction ec in same rack-1.png, write ec in same rack.png
>
> When writing EC data, if the number of racks matching the storageType is
> insufficient, more than one block is allowed to be written to the same rack.
> !write ec in same rack.png|width=962,height=604!
>
> However, during EC block recovery, it is not possible to recover on the same
> rack, which deviates from the expected behavior.
> !failed reconstruction ec in same rack-1.png|width=946,height=413!
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17052:
---------------------------------
    Affects Version/s: 3.4.0
                           (was: 3.3.4)

> Erasure coding reconstruction failed when num of storageType rack NOT enough
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-17052
>                 URL: https://issues.apache.org/jira/browse/HDFS-17052
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec
>    Affects Versions: 3.4.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: failed reconstruction ec in same rack-1.png, write ec in same rack.png
>
> When writing EC data, if the number of racks matching the storageType is
> insufficient, more than one block is allowed to be written to the same rack.
> !write ec in same rack.png|width=962,height=604!
>
> However, during EC block recovery, it is not possible to recover on the same
> rack, which deviates from the expected behavior.
> !failed reconstruction ec in same rack-1.png|width=946,height=413!
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17052:
---------------------------------
    Attachment: (was: failed reconstruction ec in same rack.png)

> Erasure coding reconstruction failed when num of storageType rack NOT enough
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-17052
>                 URL: https://issues.apache.org/jira/browse/HDFS-17052
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec
>    Affects Versions: 3.3.4
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: failed reconstruction ec in same rack-1.png, write ec in same rack.png
>
> When writing EC data, if the number of racks matching the storageType is
> insufficient, more than one block is allowed to be written to the same rack.
> !write ec in same rack.png|width=962,height=604!
>
> However, during EC block recovery, it is not possible to recover on the same
> rack, which deviates from the expected behavior.
> !failed reconstruction ec in same rack.png|width=946,height=413!
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hualong Zhang updated HDFS-17052:
---------------------------------
    Description:
When writing EC data, if the number of racks matching the storageType is insufficient, more than one block is allowed to be written to the same rack.
!write ec in same rack.png|width=962,height=604!

However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior.
!failed reconstruction ec in same rack-1.png|width=946,height=413!

  was:
When writing EC data, if the number of racks matching the storageType is insufficient, more than one block is allowed to be written to the same rack.
!write ec in same rack.png|width=962,height=604!

However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior.
!failed reconstruction ec in same rack.png|width=946,height=413!

> Erasure coding reconstruction failed when num of storageType rack NOT enough
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-17052
>                 URL: https://issues.apache.org/jira/browse/HDFS-17052
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec
>    Affects Versions: 3.3.4
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: failed reconstruction ec in same rack-1.png, write ec in same rack.png
>
> When writing EC data, if the number of racks matching the storageType is
> insufficient, more than one block is allowed to be written to the same rack.
> !write ec in same rack.png|width=962,height=604!
>
> However, during EC block recovery, it is not possible to recover on the same
> rack, which deviates from the expected behavior.
> !failed reconstruction ec in same rack-1.png|width=946,height=413!
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Attachment: failed reconstruction ec in same rack-1.png > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Attachments: failed reconstruction ec in same rack-1.png, write ec in > same rack.png > > > When writing EC data, if the number of racks matching the storageType is > insufficient, more than one block are allowed to be written to the same rack > !write ec in same rack.png|width=962,height=604! > > > > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. > !failed reconstruction ec in same rack.png|width=946,height=413! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Description: When writing EC data, if the number of racks matching the storageType is insufficient, more than one block are allowed to be written to the same rack !write ec in same rack.png|width=962,height=604! However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior. !failed reconstruction ec in same rack.png|width=946,height=413! was: When writing EC data, if the number of racks matching the storageType is insufficient, more than one block are allowed to be written to the same rack !write ec in same rack.png! However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior. > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Attachments: failed reconstruction ec in same rack.png, write ec in > same rack.png > > > When writing EC data, if the number of racks matching the storageType is > insufficient, more than one block are allowed to be written to the same rack > !write ec in same rack.png|width=962,height=604! > > > > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. > !failed reconstruction ec in same rack.png|width=946,height=413! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Attachment: failed reconstruction ec in same rack.png > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Attachments: failed reconstruction ec in same rack.png, write ec in > same rack.png > > > When writing EC data, if the number of racks matching the storageType is > insufficient, more than one block are allowed to be written to the same rack > !write ec in same rack.png! > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Attachment: write ec in same rack.png > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Attachments: write ec in same rack.png > > > When writing EC data, if the number of racks matching the storageType is > insufficient, more than one block are allowed to be written to the same rack > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Description: When writing EC data, if the number of racks matching the storageType is insufficient, more than one block are allowed to be written to the same rack !write ec in same rack.png! However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior. was: When writing EC data, if the number of racks matching the storageType is insufficient, more than one block are allowed to be written to the same rack However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior. > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Attachments: write ec in same rack.png > > > When writing EC data, if the number of racks matching the storageType is > insufficient, more than one block are allowed to be written to the same rack > !write ec in same rack.png! > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Attachment: image-2023-06-19-23-30-26-931.png > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > > When writing EC data, if the number of racks matching the storageType is > insufficient, more than one block are allowed to be written to the same rack > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Attachment: (was: image-2023-06-19-23-30-26-931.png) > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > > When writing EC data, if the number of racks matching the storageType is > insufficient, more than one block are allowed to be written to the same rack > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Description: When writing EC data, if the number of racks matching the storageType is insufficient, more than one block are allowed to be written to the same rack However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior. was: When writing EC data, if the number of racks with DN that match the storageType of the file is insufficient, multiple data blocks are allowed to be written to the same rack. However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior. > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > > When writing EC data, if the number of racks matching the storageType is > insufficient, more than one block are allowed to be written to the same rack > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Description: When writing EC data, if the number of racks with DN that match the storageType of the file is insufficient, multiple data blocks are allowed to be written to the same rack. However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior. was: When writing EC data, if the number of racks with DN that match the storageType of the file is insufficient, multiple data blocks are allowed to be written to the same rack. However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior." > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > > When writing EC data, if the number of racks with DN that match the > storageType of the file is insufficient, multiple data blocks are allowed to > be written to the same rack. > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
[ https://issues.apache.org/jira/browse/HDFS-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17052: - Description: When writing EC data, if the number of racks with DN that match the storageType of the file is insufficient, multiple data blocks are allowed to be written to the same rack. However, during EC block recovery, it is not possible to recover on the same rack, which deviates from the expected behavior." > Erasure coding reconstruction failed when num of storageType rack NOT enough > > > Key: HDFS-17052 > URL: https://issues.apache.org/jira/browse/HDFS-17052 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec >Affects Versions: 3.3.4 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > > When writing EC data, if the number of racks with DN that match the > storageType of the file is insufficient, multiple data blocks are allowed to > be written to the same rack. > However, during EC block recovery, it is not possible to recover on the same > rack, which deviates from the expected behavior." -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-17052) Erasure coding reconstruction failed when num of storageType rack NOT enough
Hualong Zhang created HDFS-17052: Summary: Erasure coding reconstruction failed when num of storageType rack NOT enough Key: HDFS-17052 URL: https://issues.apache.org/jira/browse/HDFS-17052 Project: Hadoop HDFS Issue Type: Bug Components: ec Affects Versions: 3.3.4 Reporter: Hualong Zhang Assignee: Hualong Zhang
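The constraint behind this report can be sketched in a few lines. This is illustrative only (class and method names are hypothetical, not HDFS BlockPlacementPolicy code): for an RS(dataUnits, parityUnits) policy, placing every block of a group on a distinct rack needs dataUnits + parityUnits racks with DataNodes of the matching storage type; with fewer racks the writer doubles blocks up on a rack, and the reconstruction path then rejects a same-rack target.

```java
// Illustrative sketch of the rack-count condition described above; not
// the actual HDFS placement logic.
class EcRackCheck {
    // Placing each block of an RS(dataUnits, parityUnits) group on a
    // distinct rack requires dataUnits + parityUnits matching racks.
    static boolean hasEnoughRacks(int dataUnits, int parityUnits, int racks) {
        return racks >= dataUnits + parityUnits;
    }

    // When racks fall short, at least this many blocks of one group must
    // share a rack (ceiling division).
    static int blocksOnBusiestRack(int totalBlocks, int racks) {
        return (totalBlocks + racks - 1) / racks;
    }
}
```

For example, RS(6,3) on 5 racks forces at least 2 of the 9 blocks onto some rack, which is exactly the write-succeeds-but-reconstruction-fails situation the report describes.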
[jira] [Created] (HDFS-17043) HttpFS implementation for getAllErasureCodingPolicies
Hualong Zhang created HDFS-17043: Summary: HttpFS implementation for getAllErasureCodingPolicies Key: HDFS-17043 URL: https://issues.apache.org/jira/browse/HDFS-17043 Project: Hadoop HDFS Issue Type: Improvement Components: httpfs Affects Versions: 3.4.0 Reporter: Hualong Zhang Assignee: Hualong Zhang HttpFS should support the getAllErasureCodingPolicies API so that all erasure coding policies can be retrieved. The WebHDFS implementation is available in HDFS-17029.
[jira] [Updated] (HDFS-17029) Support getECPolices API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17029: - Description: WebHDFS should support getEcPolicies: !image-2023-05-29-23-55-09-224.png|width=817,height=234! was: WebHDFS should support getEcPolicies: !image-2023-05-29-23-55-09-224.png! > Support getECPolices API in WebHDFS > --- > > Key: HDFS-17029 > URL: https://issues.apache.org/jira/browse/HDFS-17029 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Attachments: image-2023-05-29-23-55-09-224.png > > > WebHDFS should support getEcPolicies: > !image-2023-05-29-23-55-09-224.png|width=817,height=234! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-17029) Support getECPolices API in WebHDFS
Hualong Zhang created HDFS-17029: Summary: Support getECPolices API in WebHDFS Key: HDFS-17029 URL: https://issues.apache.org/jira/browse/HDFS-17029 Project: Hadoop HDFS Issue Type: Improvement Components: webhdfs Affects Versions: 3.4.0 Reporter: Hualong Zhang Assignee: Hualong Zhang Attachments: image-2023-05-29-23-55-09-224.png WebHDFS should support getEcPolicies: !image-2023-05-29-23-55-09-224.png!
[jira] [Commented] (HDFS-17014) HttpFS Add Support getStatus API
[ https://issues.apache.org/jira/browse/HDFS-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17725809#comment-17725809 ] Hualong Zhang commented on HDFS-17014: -- [~ayushtkn] Thank you for your reminder! After careful examination, we discovered that this segment of logic in {*}BaseTestHttpFSWith{*}#{*}testGetStatus{*} may cause the issue. {noformat} FsStatus dfsFsStatus = dfs.getStatus(path); FsStatus httpFsStatus = httpFs.getStatus(path); // Validate used free and capacity are the same as DistributedFileSystem assertEquals(dfsFsStatus.getUsed(), httpFsStatus.getUsed()); assertEquals(dfsFsStatus.getRemaining(), httpFsStatus.getRemaining()); assertEquals(dfsFsStatus.getCapacity(), httpFsStatus.getCapacity()); {noformat} The reasons are as follows: 1. The *getStatus* API used to retrieve the used/remaining space of the FileSystem may pose a problem. In {*}TestHttpFSWithHttpFSFileSystem{*}, all *unit tests* share a single FileSystem and are executed in parallel, with file writes or deletions occurring randomly. As a result, the usage of the file system may change at any time, so the used/remaining values returned by *httpFs.getStatus* can differ from those returned by {*}dfs.getStatus{*}, causing test failures. 2. When conducting *unit tests* in {*}TestWebHDFS{*}, we utilized a separate *MiniDFSCluster* to avoid encountering this issue. We have the option of modifying the *unit tests* to only validate whether used/remaining/capacity >= 0, or alternatively, using a new MiniDFSCluster in this particular test to ensure that any writes or deletions performed by other unit tests do not impact our test results.
> HttpFS Add Support getStatus API > > > Key: HDFS-17014 > URL: https://issues.apache.org/jira/browse/HDFS-17014 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > We should ensure that *WebHDFS* remains synchronized with {*}HttpFS{*}, as > the former has already implemented the *getStatus* interface. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
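The relaxed alternative mentioned in the comment, checking that the metrics are plausible rather than byte-for-byte equal across two live FileSystems, could look like the following hypothetical helper (names are illustrative, not from the Hadoop test code; the used + remaining <= capacity relation assumes capacity also accounts for non-DFS usage):

```java
// Hypothetical helper: instead of asserting exact equality between
// dfs.getStatus() and httpFs.getStatus() on a shared, concurrently
// mutated cluster, only check each (used, remaining, capacity) triple
// is internally sane.
class FsStatusSanity {
    static boolean isSane(long used, long remaining, long capacity) {
        // All metrics non-negative, and used + remaining cannot exceed
        // the total capacity (the gap is non-DFS usage).
        return used >= 0 && remaining >= 0 && capacity >= 0
                && used + remaining <= capacity;
    }
}
```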
[jira] [Commented] (HDFS-17014) HttpFS Add Support getStatus API
[ https://issues.apache.org/jira/browse/HDFS-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17725156#comment-17725156 ] Hualong Zhang commented on HDFS-17014: -- [~ayushtkn] Thank you for bringing this to our attention!! We will investigate the related issues immediately and resolve them as soon as possible. > HttpFS Add Support getStatus API > > > Key: HDFS-17014 > URL: https://issues.apache.org/jira/browse/HDFS-17014 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > We should ensure that *WebHDFS* remains synchronized with {*}HttpFS{*}, as > the former has already implemented the *getStatus* interface. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17014) HttpFS Add Support getStatus API
[ https://issues.apache.org/jira/browse/HDFS-17014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-17014: - Description: We should ensure that *WebHDFS* remains synchronized with {*}HttpFS{*}, as the former has already implemented the *getStatus* interface. (was: We should ensure that *WebHDFS* remains synchronized with {*}HttpFS{*}, as the former has already implemented the *getStatus* interface. WebHDFS: HTTPFS: ) > HttpFS Add Support getStatus API > > > Key: HDFS-17014 > URL: https://issues.apache.org/jira/browse/HDFS-17014 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > > We should ensure that *WebHDFS* remains synchronized with {*}HttpFS{*}, as > the former has already implemented the *getStatus* interface. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-17014) HttpFS Add Support getStatus API
Hualong Zhang created HDFS-17014: Summary: HttpFS Add Support getStatus API Key: HDFS-17014 URL: https://issues.apache.org/jira/browse/HDFS-17014 Project: Hadoop HDFS Issue Type: Improvement Components: webhdfs Affects Versions: 3.4.0 Reporter: Hualong Zhang Assignee: Hualong Zhang We should ensure that *WebHDFS* remains synchronized with {*}HttpFS{*}, as the former has already implemented the *getStatus* interface. WebHDFS: HTTPFS:
[jira] [Commented] (HDFS-17001) Support getStatus API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17722466#comment-17722466 ] Hualong Zhang commented on HDFS-17001: -- [~ayushtkn] [~slfan1989] Thank you for your assistance in reviewing the code! > Support getStatus API in WebHDFS > > > Key: HDFS-17001 > URL: https://issues.apache.org/jira/browse/HDFS-17001 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: image-2023-05-08-14-34-51-873.png > > > WebHDFS should support getStatus: > !image-2023-05-08-14-34-51-873.png! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-17001) Support getStatus API in WebHDFS
Hualong Zhang created HDFS-17001: Summary: Support getStatus API in WebHDFS Key: HDFS-17001 URL: https://issues.apache.org/jira/browse/HDFS-17001 Project: Hadoop HDFS Issue Type: Improvement Components: webhdfs Affects Versions: 3.4.0 Reporter: Hualong Zhang Assignee: Hualong Zhang Attachments: image-2023-05-08-14-34-51-873.png WebHDFS should support getStatus: !image-2023-05-08-14-34-51-873.png!
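For context, WebHDFS operations like the one above are plain HTTP GETs against the /webhdfs/v1 namespace. A minimal sketch of how such a request URL is formed follows; the GETSTATUS operation name, host, and port here are assumptions for illustration, so check the WebHDFS documentation for the released spelling:

```java
// Sketch of WebHDFS request-URL construction; the operation name passed
// in (e.g. "GETSTATUS") is an assumption, not taken from the Hadoop source.
class WebHdfsUrl {
    static String opUrl(String host, int port, String path, String op) {
        return "http://" + host + ":" + port + "/webhdfs/v1" + path + "?op=" + op;
    }
}
```

Usage: opUrl("nn.example.com", 9870, "/", "GETSTATUS") yields a URL that can be issued with curl or any HTTP client.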
[jira] [Updated] (HDFS-16990) HttpFS Add Support getFileLinkStatus API
[ https://issues.apache.org/jira/browse/HDFS-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16990: - Summary: HttpFS Add Support getFileLinkStatus API (was: HttpFS: Add Support getFileLinkStatus API) > HttpFS Add Support getFileLinkStatus API > > > Key: HDFS-16990 > URL: https://issues.apache.org/jira/browse/HDFS-16990 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > > HttpFS should implement the *getFileLinkStatus* API already implemented in > WebHDFS. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16990) HttpFS: Add Support getFileLinkStatus API
Hualong Zhang created HDFS-16990: Summary: HttpFS: Add Support getFileLinkStatus API Key: HDFS-16990 URL: https://issues.apache.org/jira/browse/HDFS-16990 Project: Hadoop HDFS Issue Type: Improvement Components: httpfs Affects Versions: 3.4.0 Reporter: Hualong Zhang Assignee: Hualong Zhang HttpFS should implement the *getFileLinkStatus* API already implemented in WebHDFS.
[jira] [Updated] (HDFS-16981) Support getFileLinkStatus API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-16981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16981: - Description: WebHDFS should support getFileLinkStatus: !image-2023-04-13-23-41-51-380.png|width=670,height=187! was: WebHDFS should support getFileLinkStatus: !image-2023-04-13-23-41-51-380.png! > Support getFileLinkStatus API in WebHDFS > > > Key: HDFS-16981 > URL: https://issues.apache.org/jira/browse/HDFS-16981 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Priority: Major > Attachments: image-2023-04-13-23-41-51-380.png > > > WebHDFS should support getFileLinkStatus: > !image-2023-04-13-23-41-51-380.png|width=670,height=187! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16981) Support getFileLinkStatus API in WebHDFS
Hualong Zhang created HDFS-16981: Summary: Support getFileLinkStatus API in WebHDFS Key: HDFS-16981 URL: https://issues.apache.org/jira/browse/HDFS-16981 Project: Hadoop HDFS Issue Type: Improvement Components: webhdfs Affects Versions: 3.4.0 Reporter: Hualong Zhang Attachments: image-2023-04-13-23-41-51-380.png WebHDFS should support getFileLinkStatus: !image-2023-04-13-23-41-51-380.png!
[jira] [Updated] (HDFS-16952) Support getLinkTarget API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-16952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16952: - Description: Support getLinkTarget API in WebHDFS (was: Add description of GETSERVERDEFAULTS to WebHDFS doc) > Support getLinkTarget API in WebHDFS > > > Key: HDFS-16952 > URL: https://issues.apache.org/jira/browse/HDFS-16952 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Priority: Minor > > Support getLinkTarget API in WebHDFS -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16952) Support getLinkTarget API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-16952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16952: - Summary: Support getLinkTarget API in WebHDFS (was: Add description of GETSERVERDEFAULTS to WebHDFS doc) > Support getLinkTarget API in WebHDFS > > > Key: HDFS-16952 > URL: https://issues.apache.org/jira/browse/HDFS-16952 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Priority: Minor > > Add description of GETSERVERDEFAULTS to WebHDFS doc -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16952) Add description of GETSERVERDEFAULTS to WebHDFS doc
Hualong Zhang created HDFS-16952: Summary: Add description of GETSERVERDEFAULTS to WebHDFS doc Key: HDFS-16952 URL: https://issues.apache.org/jira/browse/HDFS-16952 Project: Hadoop HDFS Issue Type: Improvement Components: webhdfs Affects Versions: 3.4.0 Reporter: Hualong Zhang Add description of GETSERVERDEFAULTS to WebHDFS doc
[jira] [Created] (HDFS-16951) Add description of GETSERVERDEFAULTS to WebHDFS doc
Hualong Zhang created HDFS-16951: Summary: Add description of GETSERVERDEFAULTS to WebHDFS doc Key: HDFS-16951 URL: https://issues.apache.org/jira/browse/HDFS-16951 Project: Hadoop HDFS Issue Type: Improvement Components: webhdfs Affects Versions: 3.4.0 Reporter: Hualong Zhang Add description of GETSERVERDEFAULTS to WebHDFS doc
[jira] [Created] (HDFS-16916) Improve the use of JUnit Test in DFSClient
Hualong Zhang created HDFS-16916: Summary: Improve the use of JUnit Test in DFSClient Key: HDFS-16916 URL: https://issues.apache.org/jira/browse/HDFS-16916 Project: Hadoop HDFS Issue Type: Improvement Components: dfsclient Affects Versions: 3.4.0 Reporter: Hualong Zhang Improve the use of JUnit Test in DFSClient
[jira] [Updated] (HDFS-16893) Standardize the usage of DFSClient debug log
[ https://issues.apache.org/jira/browse/HDFS-16893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16893: - Fix Version/s: (was: 3.4.0) Target Version/s: 3.4.0 Affects Version/s: 3.4.0 > Standardize the usage of DFSClient debug log > > > Key: HDFS-16893 > URL: https://issues.apache.org/jira/browse/HDFS-16893 > Project: Hadoop HDFS > Issue Type: Improvement > Components: dfsclient >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Priority: Minor > > Standardize the usage of SLF4J in debug log in DFSClient -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16893) Standardize the usage of DFSClient debug log
Hualong Zhang created HDFS-16893: Summary: Standardize the usage of DFSClient debug log Key: HDFS-16893 URL: https://issues.apache.org/jira/browse/HDFS-16893 Project: Hadoop HDFS Issue Type: Improvement Components: dfsclient Reporter: Hualong Zhang Fix For: 3.4.0 Standardize the usage of SLF4J for debug logging in DFSClient
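Standardizing SLF4J debug logging, as described above, usually means replacing string concatenation (often wrapped in `isDebugEnabled()` checks) with `{}` placeholders, so arguments are only formatted when debug logging is actually on. A minimal pure-Java sketch of that placeholder-substitution behavior, using a stand-in `MiniLogger` rather than the real SLF4J or DFSClient code:

```java
// Stand-in illustrating SLF4J-style "{}" placeholder substitution.
// Formatting is skipped entirely when debug is disabled, which is the
// point of preferring log.debug("x = {}", x) over "x = " + x.
final class MiniLogger {
    private final boolean debugEnabled;
    private String last; // last rendered message, for demonstration only

    MiniLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }

    boolean isDebugEnabled() { return debugEnabled; }

    void debug(String fmt, Object... args) {
        if (!debugEnabled) return; // arguments are never formatted when debug is off
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = fmt.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(fmt, from, at).append(args[argIdx++]);
            from = at + 2; // skip past the "{}" placeholder
        }
        sb.append(fmt.substring(from));
        last = sb.toString();
    }

    String last() { return last; }
}
```

With this pattern, `log.debug("read {} bytes from {}", n, node)` costs almost nothing when debug is disabled, whereas concatenation would build the string unconditionally.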
[jira] [Updated] (HDFS-16292) The DFS Input Stream is waiting to be read
[ https://issues.apache.org/jira/browse/HDFS-16292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16292: - Attachment: HDFS-16292.path Status: Patch Available (was: Open) > The DFS Input Stream is waiting to be read > -- > > Key: HDFS-16292 > URL: https://issues.apache.org/jira/browse/HDFS-16292 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.5.2 >Reporter: Hualong Zhang >Priority: Minor > Attachments: HDFS-16292.path, image-2021-11-01-18-36-54-329.png > > > The input stream has been waiting. The problem seems to be that > BlockReaderPeer#peer does not set a read timeout or a write timeout. We can > solve this by setting the timeouts in BlockReaderFactory#nextTcpPeer. The > jstack output is as follows: > !image-2021-11-01-18-36-54-329.png!
[jira] [Created] (HDFS-16292) The DFS Input Stream is waiting to be read
Hualong Zhang created HDFS-16292: Summary: The DFS Input Stream is waiting to be read Key: HDFS-16292 URL: https://issues.apache.org/jira/browse/HDFS-16292 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.5.2 Reporter: Hualong Zhang Attachments: image-2021-11-01-18-36-54-329.png The input stream has been waiting. The problem seems to be that BlockReaderPeer#peer does not set a read timeout or a write timeout. We can solve this by setting the timeouts in BlockReaderFactory#nextTcpPeer. The jstack output is as follows: !image-2021-11-01-18-36-54-329.png!
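The hang reported above can be avoided by giving the data-transfer connection explicit timeouts when it is created. The sketch below uses plain `java.net.Socket`, not the actual Peer/BlockReaderFactory code, and the helper name is invented; it shows the two bounds involved. Note that `java.net.Socket` only supports a read timeout (`SO_TIMEOUT`); a write timeout requires wrapping the output stream, which Hadoop does with its own stream helpers.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpPeerTimeouts {
    /**
     * Hypothetical helper mirroring what the report proposes for
     * BlockReaderFactory#nextTcpPeer: bound both connection setup and
     * each blocking read so the input stream cannot wait forever.
     */
    static Socket connectWithTimeouts(InetSocketAddress addr,
                                      int connectTimeoutMs,
                                      int readTimeoutMs) throws Exception {
        Socket s = new Socket();
        s.setSoTimeout(readTimeoutMs);      // read timeout: bounds each blocking read()
        s.connect(addr, connectTimeoutMs);  // connect timeout: bounds connection setup
        return s;
    }
}
```

Without the `setSoTimeout` call, a read on a peer whose remote side has silently gone away can block indefinitely, which is exactly the stuck-thread jstack the report describes.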
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16243: - Attachment: HDFS-16243.0.patch Fix Version/s: 2.7.2 Target Version/s: 2.7.2 Status: Patch Available (was: Open) > The available disk space is less than the reserved space, and no log message > is displayed > - > > Key: HDFS-16243 > URL: https://issues.apache.org/jira/browse/HDFS-16243 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.7.2 >Reporter: Hualong Zhang >Priority: Major > Fix For: 2.7.2 > > Attachments: HDFS-16243.0.patch > > > When I submitted a task to the Hadoop test cluster, it failed with "could only > be replicated to 0 nodes instead of minReplication (=1)". > I checked the namenode and datanode logs and did not find any errors. It > was not until I ran dfsadmin -report that I saw the available capacity was 0 > and realized it might be a configuration problem. > Checking the configuration revealed that the value of > "dfs.datanode.du.reserved" was greater than the available disk space of > HDFS, which caused this problem. > It seems that there should be a warning or error in the log.
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16243: - Attachment: (was: HDFS-16243.patch) > The available disk space is less than the reserved space, and no log message > is displayed > - > > Key: HDFS-16243 > URL: https://issues.apache.org/jira/browse/HDFS-16243 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.7.2 >Reporter: Hualong Zhang >Priority: Major > > When I submitted a task to the Hadoop test cluster, it failed with "could only > be replicated to 0 nodes instead of minReplication (=1)". > I checked the namenode and datanode logs and did not find any errors. It > was not until I ran dfsadmin -report that I saw the available capacity was 0 > and realized it might be a configuration problem. > Checking the configuration revealed that the value of > "dfs.datanode.du.reserved" was greater than the available disk space of > HDFS, which caused this problem. > It seems that there should be a warning or error in the log.
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16243: - Flags: Patch > The available disk space is less than the reserved space, and no log message > is displayed > - > > Key: HDFS-16243 > URL: https://issues.apache.org/jira/browse/HDFS-16243 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.7.2 >Reporter: Hualong Zhang >Priority: Major > Attachments: HDFS-16243.patch > > > When I submitted a task to the Hadoop test cluster, it failed with "could only > be replicated to 0 nodes instead of minReplication (=1)". > I checked the namenode and datanode logs and did not find any errors. It > was not until I ran dfsadmin -report that I saw the available capacity was 0 > and realized it might be a configuration problem. > Checking the configuration revealed that the value of > "dfs.datanode.du.reserved" was greater than the available disk space of > HDFS, which caused this problem. > It seems that there should be a warning or error in the log.
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16243: - Attachment: HDFS-16243.patch > The available disk space is less than the reserved space, and no log message > is displayed > - > > Key: HDFS-16243 > URL: https://issues.apache.org/jira/browse/HDFS-16243 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.7.2 >Reporter: Hualong Zhang >Priority: Major > Attachments: HDFS-16243.patch > > > When I submitted a task to the Hadoop test cluster, it failed with "could only > be replicated to 0 nodes instead of minReplication (=1)". > I checked the namenode and datanode logs and did not find any errors. It > was not until I ran dfsadmin -report that I saw the available capacity was 0 > and realized it might be a configuration problem. > Checking the configuration revealed that the value of > "dfs.datanode.du.reserved" was greater than the available disk space of > HDFS, which caused this problem. > It seems that there should be a warning or error in the log.
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16243: - Attachment: (was: HDFS-16243.patch) > The available disk space is less than the reserved space, and no log message > is displayed > - > > Key: HDFS-16243 > URL: https://issues.apache.org/jira/browse/HDFS-16243 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.7.2 >Reporter: Hualong Zhang >Priority: Major > > When I submitted a task to the Hadoop test cluster, it failed with "could only > be replicated to 0 nodes instead of minReplication (=1)". > I checked the namenode and datanode logs and did not find any errors. It > was not until I ran dfsadmin -report that I saw the available capacity was 0 > and realized it might be a configuration problem. > Checking the configuration revealed that the value of > "dfs.datanode.du.reserved" was greater than the available disk space of > HDFS, which caused this problem. > It seems that there should be a warning or error in the log.
[jira] [Updated] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
[ https://issues.apache.org/jira/browse/HDFS-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hualong Zhang updated HDFS-16243: - Attachment: HDFS-16243.patch > The available disk space is less than the reserved space, and no log message > is displayed > - > > Key: HDFS-16243 > URL: https://issues.apache.org/jira/browse/HDFS-16243 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.7.2 >Reporter: Hualong Zhang >Priority: Major > Attachments: HDFS-16243.patch > > > When I submitted a task to the Hadoop test cluster, it failed with "could only > be replicated to 0 nodes instead of minReplication (=1)". > I checked the namenode and datanode logs and did not find any errors. It > was not until I ran dfsadmin -report that I saw the available capacity was 0 > and realized it might be a configuration problem. > Checking the configuration revealed that the value of > "dfs.datanode.du.reserved" was greater than the available disk space of > HDFS, which caused this problem. > It seems that there should be a warning or error in the log.
[jira] [Created] (HDFS-16243) The available disk space is less than the reserved space, and no log message is displayed
Hualong Zhang created HDFS-16243: Summary: The available disk space is less than the reserved space, and no log message is displayed Key: HDFS-16243 URL: https://issues.apache.org/jira/browse/HDFS-16243 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.2 Reporter: Hualong Zhang When I submitted a task to the Hadoop test cluster, it failed with "could only be replicated to 0 nodes instead of minReplication (=1)". I checked the namenode and datanode logs and did not find any errors. It was not until I ran dfsadmin -report that I saw the available capacity was 0 and realized it might be a configuration problem. Checking the configuration revealed that the value of "dfs.datanode.du.reserved" was greater than the available disk space of HDFS, which caused this problem. It seems that there should be a warning or error in the log.
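A cheap guard for the misconfiguration described above is to log a warning whenever the configured reservation leaves a volume with no usable space, instead of silently reporting 0 available capacity. The following is only a sketch of the idea; the class and method names are invented (the real DataNode code is structured differently), and `java.util.logging` stands in for the project's logging framework:

```java
import java.util.logging.Logger;

public class ReservedSpaceCheck {
    private static final Logger LOG =
        Logger.getLogger(ReservedSpaceCheck.class.getName());

    /**
     * Usable bytes on a volume after honoring dfs.datanode.du.reserved.
     * Warns when the reservation swallows all free space, a condition that
     * otherwise surfaces only as "could only be replicated to 0 nodes".
     */
    static long availableBytes(long freeBytes, long reservedBytes) {
        long available = freeBytes - reservedBytes;
        if (available <= 0) {
            LOG.warning("dfs.datanode.du.reserved (" + reservedBytes
                + " bytes) >= free disk space (" + freeBytes
                + " bytes); volume reports 0 available bytes");
            return 0;
        }
        return available;
    }
}
```

The warning turns a puzzling replication failure into an immediately actionable log line pointing at the responsible configuration key.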