[ https://issues.apache.org/jira/browse/HDFS-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800134#comment-17800134 ]

ASF GitHub Bot commented on HDFS-17056:
---------------------------------------

haiyang1987 commented on code in PR #6379:
URL: https://github.com/apache/hadoop/pull/6379#discussion_r1435759118


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java:
##########
@@ -642,6 +642,10 @@ public int run(Configuration conf, List<String> args) 
throws IOException {
           throw e;
         }
       } else {
+        if (args.size() > 0) {
+          System.err.println(getName() + ": Too many arguments");

Review Comment:
   `System.err.println(getName() + ": Input invalid arguments.\nUsage: " + getLongUsage());`
   How about this instead?
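
For illustration, a minimal, self-contained sketch of the check under 
discussion, using the reviewer's proposed wording. The local getName() and 
getLongUsage() below are stand-ins so the snippet compiles on its own; their 
return values are illustrative and not copied from ECAdmin.

{code:java}
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;

public class VerifyClusterSetupCheckSketch {

  // Stand-in for the command's real getName(); illustrative only.
  static String getName() {
    return "-verifyClusterSetup";
  }

  // Stand-in for the command's real getLongUsage(); illustrative only.
  static String getLongUsage() {
    return "[-verifyClusterSetup [-policy <policy>...]]";
  }

  // The no-"-policy" branch: any leftover argument is rejected with the
  // error message proposed in the review comment.
  static int run(List<String> args) {
    if (!args.isEmpty()) {
      System.err.println(getName() + ": Input invalid arguments.\nUsage: "
          + getLongUsage());
      return 1;
    }
    System.out.println("Verifying all enabled erasure coding policies...");
    return 0;
  }

  public static void main(String[] argv) {
    System.exit(run(new LinkedList<>(Arrays.asList(argv))));
  }
}
{code}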





> EC: Fix verifyClusterSetup output in case of an invalid param.
> --------------------------------------------------------------
>
>                 Key: HDFS-17056
>                 URL: https://issues.apache.org/jira/browse/HDFS-17056
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec
>            Reporter: Ayush Saxena
>            Assignee: huangzhaobo99
>            Priority: Major
>              Labels: newbie, pull-request-available
>
> {code:java}
> bin/hdfs ec -verifyClusterSetup XOR-2-1-1024k
> 9 DataNodes are required for the erasure coding policies: RS-6-3-1024k, 
> XOR-2-1-1024k. The number of DataNodes is only 3.
> {code}
> verifyClusterSetup takes -policy followed by one or more policy names; 
> otherwise it defaults to all enabled policies.
> If additional invalid options are passed, it silently ignores them, unlike 
> the other EC commands, which throw a "Too many arguments" error (see the 
> sketch below).
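
For context, a minimal, self-contained sketch of the argument handling 
described above (an illustration only, not the actual ECAdmin code): -policy 
makes the remaining arguments policy names, no arguments means every enabled 
policy is verified, and any leftover argument is rejected instead of being 
silently ignored.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class VerifyClusterSetupArgsSketch {

  // Sketch of the described behavior; names and messages are illustrative,
  // not the real ECAdmin implementation.
  static int parse(String[] argv) {
    List<String> args = new ArrayList<>(Arrays.asList(argv));
    boolean hasPolicyOption = args.remove("-policy");
    if (hasPolicyOption) {
      // -policy given: the remaining arguments are treated as policy names.
      System.out.println("Verifying policies: " + args);
    } else if (!args.isEmpty()) {
      // Previously these leftovers were silently ignored; with the fix they
      // are rejected, matching the other EC subcommands.
      System.err.println("-verifyClusterSetup: Too many arguments");
      return 1;
    } else {
      System.out.println("Verifying all enabled policies");
    }
    return 0;
  }

  public static void main(String[] args) {
    System.exit(parse(args));
  }
}
{code}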



