daimin created HDFS-16282:
-----------------------------

             Summary: Duplicate generic usage information to hdfs debug command
                 Key: HDFS-16282
                 URL: https://issues.apache.org/jira/browse/HDFS-16282
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: tools
    Affects Versions: 3.3.1, 3.3.0
            Reporter: daimin
            Assignee: daimin


When we type 'hdfs debug' in the console, the generic usage information is 
repeated 4 times, which makes the actual subcommands like 'verifyMeta' or 
'recoverLease' hard to find in the output.
{quote}~ $ hdfs debug
Usage: hdfs debug <command> [arguments]

These commands are for advanced users only.

Incorrect usages may result in data loss. Use at your own risk.

verifyMeta -meta <metadata-file> [-block <block-file>]

Generic options supported are:
-conf <configuration file> specify an application configuration file
-D <property=value> define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, 
overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port> specify a ResourceManager
-files <file1,...> specify a comma-separated list of files to be copied to the 
map reduce cluster
-libjars <jar1,...> specify a comma-separated list of jar files to be included 
in the classpath
-archives <archive1,...> specify a comma-separated list of archives to be 
unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

computeMeta -block <block-file> -out <output-metadata-file>

Generic options supported are:
-conf <configuration file> specify an application configuration file
-D <property=value> define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, 
overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port> specify a ResourceManager
-files <file1,...> specify a comma-separated list of files to be copied to the 
map reduce cluster
-libjars <jar1,...> specify a comma-separated list of jar files to be included 
in the classpath
-archives <archive1,...> specify a comma-separated list of archives to be 
unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

recoverLease -path <path> [-retries <num-retries>]

Generic options supported are:
-conf <configuration file> specify an application configuration file
-D <property=value> define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, 
overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port> specify a ResourceManager
-files <file1,...> specify a comma-separated list of files to be copied to the 
map reduce cluster
-libjars <jar1,...> specify a comma-separated list of jar files to be included 
in the classpath
-archives <archive1,...> specify a comma-separated list of archives to be 
unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]


Generic options supported are:
-conf <configuration file> specify an application configuration file
-D <property=value> define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, 
overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port> specify a ResourceManager
-files <file1,...> specify a comma-separated list of files to be copied to the 
map reduce cluster
-libjars <jar1,...> specify a comma-separated list of jar files to be included 
in the classpath
-archives <archive1,...> specify a comma-separated list of archives to be 
unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]
{quote}
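A minimal sketch of the intended fix, not the actual DebugAdmin code: list each subcommand's one-line usage first, then print the generic options block exactly once at the end. The class name {{UsagePrinter}} and the {{COMMAND_USAGES}} array are hypothetical; the per-command usage strings are taken from the output above, and the commented-out call refers to the real {{org.apache.hadoop.util.ToolRunner.printGenericCommandUsage(PrintStream)}} helper.

```java
import java.io.PrintStream;

// Hypothetical illustration of printing the generic usage once,
// rather than once per debug subcommand.
public class UsagePrinter {
  // Per-command usage lines, copied from the 'hdfs debug' output.
  static final String[] COMMAND_USAGES = {
    "verifyMeta -meta <metadata-file> [-block <block-file>]",
    "computeMeta -block <block-file> -out <output-metadata-file>",
    "recoverLease -path <path> [-retries <num-retries>]"
  };

  static void printUsage(PrintStream out) {
    out.println("Usage: hdfs debug <command> [arguments]");
    out.println();
    out.println("These commands are for advanced users only.");
    out.println();
    out.println("Incorrect usages may result in data loss. Use at your own risk.");
    out.println();
    for (String usage : COMMAND_USAGES) {
      out.println(usage);
    }
    out.println();
    // Print the generic options exactly once, after the command list
    // (commented out here to keep the sketch free of Hadoop dependencies):
    // org.apache.hadoop.util.ToolRunner.printGenericCommandUsage(out);
  }

  public static void main(String[] args) {
    printUsage(System.out);
  }
}
```

With this structure the subcommand list is visible at a glance and the generic options appear a single time instead of four.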
--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
