This is an automated email from the ASF dual-hosted git repository.

aljoscha pushed a commit to branch release-1.10
in repository https://gitbox.apache.org/repos/asf/flink.git
commit ae4853df268f3211ed091003621916f72af843c7
Author: Aljoscha Krettek <aljos...@apache.org>
AuthorDate: Thu Jan 23 11:09:41 2020 +0100

    [FLINK-15276] Bring cli.zh.md in line with recent changes to cli.md
---
 docs/ops/cli.zh.md | 158 +++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 122 insertions(+), 36 deletions(-)

diff --git a/docs/ops/cli.zh.md b/docs/ops/cli.zh.md
index 54846e7..7ff07e9 100644
--- a/docs/ops/cli.zh.md
+++ b/docs/ops/cli.zh.md
@@ -40,9 +40,33 @@ The command line can be used to
 
 A prerequisite to using the command line interface is that the Flink
 master (JobManager) has been started (via
-`<flink-home>/bin/start-cluster.sh`) or that a YARN environment is
+`<flink-home>/bin/start-cluster.sh`) or that another deployment target such as YARN or Kubernetes is
 available.
 
+### Deployment targets
+
+Flink has the concept of executors for defining available deployment targets. You can see the
+available executors in the output of `bin/flink --help`, for example:
+
+```
+Options for executor mode:
+     -D <property=value>   Generic configuration options for
+                           execution/deployment and for the configured executor.
+                           The available options can be found at
+                           https://ci.apache.org/projects/flink/flink-docs-stabl
+                           e/ops/config.html
+     -e,--executor <arg>   The name of the executor to be used for executing the
+                           given job, which is equivalent to the
+                           "execution.target" config option. The currently
+                           available executors are: "remote", "local",
+                           "kubernetes-session", "yarn-per-job", "yarn-session".
+```
+
+When running one of the `bin/flink` actions, the executor is specified using the `--executor`
+option.
+
+
+
 * This will be replaced by the TOC
 {:toc}
 
@@ -88,12 +112,12 @@ available.
         ./examples/batch/WordCount.jar \
         --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
 
-- Run example program using a [per-job YARN cluster]({{site.baseurl}}/zh/ops/deployment/yarn_setup.html#run-a-single-flink-job-on-hadoop-yarn) with 2 TaskManagers:
+- Run example program using a [per-job YARN cluster]({{site.baseurl}}/ops/deployment/yarn_setup.html#run-a-single-flink-job-on-hadoop-yarn) with 2 TaskManagers:
 
-        ./bin/flink run -m yarn-cluster \
+        ./bin/flink run -m yarn-cluster \
         ./examples/batch/WordCount.jar \
         --input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out
-
+
 </div>
 
 <div data-lang="python" markdown="1">
@@ -292,9 +316,9 @@ Action "run" compiles and runs a program.
 
   Syntax: run [OPTIONS] <jar-file> <arguments>
   "run" action options:
      -c,--class <classname>               Class with the program entry point
-                                          ("main()" method or "getPlan()" method).
-                                          Only needed if the JAR file does not
-                                          specify the class in its manifest.
+                                          ("main()" method). Only needed if the
+                                          JAR file does not specify the class in
+                                          its manifest.
      -C,--classpath <url>                 Adds a URL to each user code
                                           classloader on all nodes in the
                                           cluster. The paths must specify a
@@ -317,8 +341,10 @@ Action "run" compiles and runs a program.
                                           program. Optional flag to override the
                                           default value specified in the
                                           configuration.
-     -py,--python <pythonFile>            指定Python作业的入口,依赖的资源文件可以通过
-                                          `--pyFiles`进行指定。
+     -py,--python <pythonFile>            Python script with the program entry
+                                          point. The dependent resources can be
+                                          configured with the `--pyFiles`
+                                          option.
      -pyarch,--pyArchives <arg>           Add python archive files for job. The
                                           archive files will be extracted to the
                                           working directory of python UDF
@@ -356,15 +382,20 @@ Action "run" compiles and runs a program.
                                           (version >= 37.0.0). Please ensure
                                           that the specified environment meets
                                           the above requirements.
-     -pyfs,--pyFiles <pythonFiles>        指定Python作业依赖的一些自定义的python文件。
-                                          这些文件会被添加到本地client和远端UDF worker
-                                          的PYTHONPATH中。支持常用的python资源文件,
-                                          例如(.py/.egg/.zip)。 如果有多个文件,可以通过
-                                          逗号(,)进行分隔。(例如:
-                                          --pyFiles file:///tmp/myresource.zip
-                                          ,hdfs:///$namenode_address/myresource2.zip)
-     -pym,--pyModule <pythonModule>       指定python程序的运行的模块入口,这个选项必须配合
-                                          `--pyFiles`一起使用。
+     -pyfs,--pyFiles <pythonFiles>        Attach custom python files for job.
+                                          These files will be added to the
+                                          PYTHONPATH of both the local client
+                                          and the remote python UDF worker. The
+                                          standard python resource file suffixes
+                                          such as .py/.egg/.zip or directory are
+                                          all supported. Comma (',') could be
+                                          used as the separator to specify
+                                          multiple files (e.g.: --pyFiles
+                                          file:///tmp/myresource.zip,hdfs:///$na
+                                          menode_address/myresource2.zip).
+     -pym,--pyModule <pythonModule>       Python module with the program entry
+                                          point. This option must be used in
+                                          conjunction with `--pyFiles`.
      -pyreq,--pyRequirements <arg>        Specify a requirements.txt file which
                                           defines the third-party dependencies.
                                           These dependencies will be installed
@@ -376,7 +407,7 @@ Action "run" compiles and runs a program.
                                           if the optional parameter exists
                                           (e.g.: --pyRequirements
                                           file:///tmp/requirements.txt#file:///t
-                                          mp/cached_dir).
+                                          mp/cached_dir).
      -q,--sysoutLogging                   If present, suppress logging output to
                                           standard out.
      -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job
@@ -395,11 +426,6 @@ Action "run" compiles and runs a program.
                                           connect to a different JobManager than
                                           the one specified in the
                                           configuration.
-     -sae,--shutdownOnAttachedExit        If the job is submitted in attached
-                                          mode, perform a best-effort cluster
-                                          shutdown when the CLI is terminated
-                                          abruptly, e.g., in response to a user
-                                          interrupt, such as typing Ctrl + C.
      -yat,--yarnapplicationType <arg>     Set a custom application type for the
                                           application on YARN
      -yD <property=value>                 use value for given property
@@ -409,8 +435,10 @@ Action "run" compiles and runs a program.
      -yh,--yarnhelp                       Help for the Yarn session CLI.
      -yid,--yarnapplicationId <arg>       Attach to running YARN session
      -yj,--yarnjar <arg>                  Path to Flink jar file
-     -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container
-                                          with optional unit (default: MB)
+     -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with
+                                          optional unit (default: MB)
+     -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN
+                                          application
      -ynm,--yarnname <arg>                Set a custom name for the application
                                           on YARN
      -yq,--yarnquery                      Display available YARN resources
@@ -418,17 +446,26 @@ Action "run" compiles and runs a program.
      -yqu,--yarnqueue <arg>               Specify YARN queue.
      -ys,--yarnslots <arg>                Number of slots per TaskManager
      -yt,--yarnship <arg>                 Ship files in the specified directory
-                                          (t for transfer), multiple options are
-                                          supported.
-     -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container
-                                          with optional unit (default: MB)
+                                          (t for transfer)
+     -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with
+                                          optional unit (default: MB)
      -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper
                                           sub-paths for high availability mode
-     -ynl,--yarnnodeLabel <arg>           Specify YARN node label for
-                                          the YARN application
      -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                           sub-paths for high availability mode
 
+  Options for executor mode:
+     -D <property=value>   Generic configuration options for
+                           execution/deployment and for the configured executor.
+                           The available options can be found at
+                           https://ci.apache.org/projects/flink/flink-docs-stabl
+                           e/ops/config.html
+     -e,--executor <arg>   The name of the executor to be used for executing the
+                           given job, which is equivalent to the
+                           "execution.target" config option. The currently
+                           available executors are: "remote", "local",
+                           "kubernetes-session", "yarn-per-job", "yarn-session".
+
   Options for default mode:
      -m,--jobmanager <arg>            Address of the JobManager (master) to
                                       which to connect.
                                      Use this flag to connect to a
@@ -443,10 +480,10 @@ Action "info" shows the optimized execution plan of the program (JSON).
 
   Syntax: info [OPTIONS] <jar-file> <arguments>
   "info" action options:
-     -c,--class <classname>           Class with the program entry point ("main()"
-                                      method or "getPlan()" method). Only needed
-                                      if the JAR file does not specify the class
-                                      in its manifest.
+     -c,--class <classname>           Class with the program entry point
+                                      ("main()" method). Only needed if the JAR
+                                      file does not specify the class in its
+                                      manifest.
      -p,--parallelism <parallelism>   The parallelism with which to run the
                                       program. Optional flag to override the
                                       default value specified in the
@@ -457,6 +494,7 @@ Action "list" lists running and scheduled programs.
 
   Syntax: list [OPTIONS]
   "list" action options:
+     -a,--all         Show all programs and their JobIDs
      -r,--running     Show only running programs and their JobIDs
      -s,--scheduled   Show only scheduled programs and their JobIDs
   Options for yarn-cluster mode:
@@ -468,6 +506,18 @@ Action "list" lists running and scheduled programs.
      -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper
                                      sub-paths for high availability mode
 
+  Options for executor mode:
+     -D <property=value>   Generic configuration options for
+                           execution/deployment and for the configured executor.
+                           The available options can be found at
+                           https://ci.apache.org/projects/flink/flink-docs-stabl
+                           e/ops/config.html
+     -e,--executor <arg>   The name of the executor to be used for executing the
+                           given job, which is equivalent to the
+                           "execution.target" config option. The currently
+                           available executors are: "remote", "local",
+                           "kubernetes-session", "yarn-per-job", "yarn-session".
+
   Options for default mode:
      -m,--jobmanager <arg>            Address of the JobManager (master) to
                                       which to connect.
                                       Use this flag to connect to a
@@ -498,6 +548,18 @@ Action "stop" stops a running program with a savepoint (streaming jobs only).
      -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper
                                      sub-paths for high availability mode
 
+  Options for executor mode:
+     -D <property=value>   Generic configuration options for
+                           execution/deployment and for the configured executor.
+                           The available options can be found at
+                           https://ci.apache.org/projects/flink/flink-docs-stabl
+                           e/ops/config.html
+     -e,--executor <arg>   The name of the executor to be used for executing the
+                           given job, which is equivalent to the
+                           "execution.target" config option. The currently
+                           available executors are: "remote", "local",
+                           "kubernetes-session", "yarn-per-job", "yarn-session".
+
   Options for default mode:
      -m,--jobmanager <arg>            Address of the JobManager (master) to
                                       which to connect.
                                       Use this flag to connect to a
@@ -529,6 +591,18 @@ Action "cancel" cancels a running program.
 
      -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper
                                      sub-paths for high availability mode
 
+  Options for executor mode:
+     -D <property=value>   Generic configuration options for
+                           execution/deployment and for the configured executor.
+                           The available options can be found at
+                           https://ci.apache.org/projects/flink/flink-docs-stabl
+                           e/ops/config.html
+     -e,--executor <arg>   The name of the executor to be used for executing the
+                           given job, which is equivalent to the
+                           "execution.target" config option. The currently
+                           available executors are: "remote", "local",
+                           "kubernetes-session", "yarn-per-job", "yarn-session".
+
   Options for default mode:
      -m,--jobmanager <arg>            Address of the JobManager (master) to
                                       which to connect.
                                       Use this flag to connect to a
@@ -554,6 +628,18 @@ Action "savepoint" triggers savepoints for a running job or disposes existing ones.
 
      -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper
                                      sub-paths for high availability mode
 
+  Options for executor mode:
+     -D <property=value>   Generic configuration options for
+                           execution/deployment and for the configured executor.
+                           The available options can be found at
+                           https://ci.apache.org/projects/flink/flink-docs-stabl
+                           e/ops/config.html
+     -e,--executor <arg>   The name of the executor to be used for executing the
+                           given job, which is equivalent to the
+                           "execution.target" config option. The currently
+                           available executors are: "remote", "local",
+                           "kubernetes-session", "yarn-per-job", "yarn-session".
+
   Options for default mode:
      -m,--jobmanager <arg>            Address of the JobManager (master) to
                                       which to connect.
                                       Use this flag to connect to a
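[Editor's note] The diff above documents the new `--executor` option on `bin/flink` actions. A minimal shell sketch of how an invocation selecting an executor could be composed, assuming a Flink 1.10 distribution as the working directory; the executor name, `-D` property, jar path, and HDFS paths are illustrative, not prescribed by the patch:

```shell
# Sketch only: compose a `bin/flink run` command that picks a deployment
# target via --executor, which the help text equates with the
# "execution.target" config option.
EXECUTOR="yarn-per-job"   # one of: remote, local, kubernetes-session, yarn-per-job, yarn-session
CMD="./bin/flink run --executor ${EXECUTOR} \
  -D taskmanager.memory.process.size=2g \
  ./examples/batch/WordCount.jar \
  --input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out"
# In a real deployment you would run the command directly instead of echoing it.
echo "${CMD}"
```

The same submission could equally set `execution.target: yarn-per-job` in `flink-conf.yaml` and omit `--executor`; the flag is a per-invocation override.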
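[Editor's note] The rewritten `-pyfs,--pyFiles` and `-pym,--pyModule` descriptions in the diff also read as a recipe; a hedged sketch, where the module name `word_count` and the archive URIs are hypothetical placeholders (the patch itself fixes only the wording, not these values):

```shell
# Sketch only: a PyFlink submission shaped after the options described above.
# --pyFiles ships dependencies, comma-separated; they are added to PYTHONPATH
# on both the local client and the remote python UDF workers.
# --pyModule names a module entry point and must be combined with --pyFiles.
PYFLINK_DEPS="file:///tmp/myresource.zip,hdfs:///namenode_address/myresource2.zip"
CMD="./bin/flink run \
  --pyModule word_count \
  --pyFiles ${PYFLINK_DEPS}"
echo "${CMD}"
```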