>  SUBCOMMAND is one of:
>
>
>Clients:
>       cacheadmin           configure the HDFS cache
>       classpath            prints the class path needed to get the hadoop jar 
> and the required libraries
>       crypto               configure HDFS encryption zones
>       ...
>
>Daemons:
>       balancer             run a cluster balancing utility
>       datanode             run a DFS datanode
>       namenode             run the DFS name node
>...
>---snip---


Absolutely, that is a great output, very clear and provides a very good user 
experience.

Thanks
Anu


On 9/9/16, 3:06 PM, "Allen Wittenauer" <a...@effectivemachines.com> wrote:

>
>> On Sep 9, 2016, at 2:15 PM, Anu Engineer <aengin...@hortonworks.com> wrote:
>> 
>> +1, Thanks for the effort. It brings in a world of consistency to the hadoop 
>> vars; and as usual reading your bash code was very educative.
>
>       Thanks!
>
>       There's still a handful of HDFS and MAPRED vars that begin with HADOOP, 
> but those should be trivial to knock out after a pattern has been established.
>
>> I had a minor suggestion though. since we have classified the _OPTS to 
>> client and daemon opts, for new people it is hard to know which of these 
>> subcommands are daemon vs. a client command.  Maybe we can add a special 
>> char in the help message to indicate which are daemons or just document it? 
>> Only way I know right now is to look the appropriate script and see if 
>> HADOOP_SUBCMD_SUPPORTDAEMONIZATION is set to true.
>
>
>       That's a great suggestion.  Would it be better if the usage output was 
> more like:
>
>---snip---
>Usage: hdfs [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
>
>  OPTIONS is none or any of:
>
>--buildpaths                       attempt to add class files from build tree
>--config dir                       Hadoop config directory
>--daemon (start|status|stop)       operate on a daemon
>--debug                            turn on shell script debug mode
>--help                             usage information
>--hostnames list[,of,host,names]   hosts to use in worker mode
>--hosts filename                   list of hosts to use in worker mode
>--loglevel level                   set the log4j level for this command
>--workers                          turn on worker mode
>
>  SUBCOMMAND is one of:
>
>
>Clients:
>       cacheadmin           configure the HDFS cache
>       classpath            prints the class path needed to get the hadoop jar 
> and the required libraries
>       crypto               configure HDFS encryption zones
>       ...
>
>Daemons:
>       balancer             run a cluster balancing utility
>       datanode             run a DFS datanode
>       namenode             run the DFS name node
>...
>---snip---
>
>       We do something similar in Apache Yetus and shouldn't be too hard to do 
> in Apache Hadoop. We couldn't read SUPPORTDAEMONIZATION to place things, but 
> as long as people put their new commands in the correct section in 
> hadoop_usage, it should work.
>
>


---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Reply via email to