[ https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820752#comment-17820752 ]
ASF GitHub Bot commented on HDFS-17365:
---------------------------------------

tasanuma commented on code in PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#discussion_r1502860434

##########
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml:
##########
@@ -3908,6 +3908,18 @@
   </description>
 </property>

+<property>
+  <name>dfs.client.ec.EXAMPLEECPOLICYNAME.checkstreamer.redunency</name>

Review Comment:
   @hfutatzhanghb Thanks for the PR. I think it's a good feature.

   In my honest opinion, `dfs.client.ec.EXAMPLEECPOLICYNAME.checkstreamer.redunency` is counter-intuitive. I would prefer a setting that interprets values the other way around, something like `dfs.client.ec.EXAMPLEECPOLICYNAME.failed.write.block.tolerated`: if the value is 0, no failures are tolerated; if it is 3, up to 3 failures are tolerated while writing blocks. If the setting is empty (the default), failures are tolerated up to the number of parity blocks. This is just my personal view.

> EC: Add extra redunency configuration in checkStreamerFailures to prevent
> data loss.
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-17365
>                 URL: https://issues.apache.org/jira/browse/HDFS-17365
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: ec
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
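The tolerance semantics the reviewer proposes can be sketched as follows. This is a minimal, hypothetical Java sketch, not Hadoop's actual `DFSStripedOutputStream` code; the class and method names are invented for illustration. The idea: an explicitly configured `failed.write.block.tolerated` value bounds the number of failed streamers, and when the setting is unset the bound defaults to the number of parity blocks in the EC policy.

```java
// Hypothetical sketch of the proposed "failed.write.block.tolerated"
// semantics; names are illustrative, not Hadoop's real API.
public class EcFailureTolerance {

    /**
     * Returns true if the striped write may continue despite failures.
     *
     * @param failedStreamers     number of streamers that have failed so far
     * @param configuredTolerance value of the proposed setting, or null if unset
     * @param numParityBlocks     parity blocks in the EC policy (e.g. 3 for RS-6-3)
     */
    static boolean withinTolerance(int failedStreamers,
                                   Integer configuredTolerance,
                                   int numParityBlocks) {
        // Unset (default) means: tolerate up to the number of parity blocks.
        int tolerated = (configuredTolerance != null)
                ? configuredTolerance
                : numParityBlocks;
        return failedStreamers <= tolerated;
    }

    public static void main(String[] args) {
        // RS-6-3: 3 parity blocks, unset config defaults to tolerating 3 failures.
        System.out.println(withinTolerance(3, null, 3)); // true
        System.out.println(withinTolerance(4, null, 3)); // false
        // Explicit tolerance of 0: no failures tolerated at all.
        System.out.println(withinTolerance(1, 0, 3));    // false
    }
}
```

With an explicit value of 0 the client fails fast on the first streamer error, while the unset default preserves today's behavior of writing on as long as enough healthy streamers remain to keep the block group decodable.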