zentol commented on a change in pull request #12723:
URL: https://github.com/apache/flink/pull/12723#discussion_r442825849



##########
File path: docs/concepts/glossary.md
##########
@@ -106,10 +105,10 @@ Logical graphs are also often referred to as *dataflow graphs*.
 Managed State describes application state which has been registered with the framework. For
 Managed State, Apache Flink will take care about persistence and rescaling among other things.
 
-#### Flink Master
+#### Flink JobManager

Review comment:
       should be moved closer to JobManager to adhere to the entry order (alphabetic, excluding Flink prefixes).

##########
File path: docs/learn-flink/datastream_api.zh.md
##########
@@ -190,7 +190,7 @@ and several pub-sub systems.
 
 ### 调试
 
-在生产中,应用程序将在远程集群或一组容器中运行。如果集群或容器挂了,这就属于远程失败。Flink Master 和 Task Manager 日志对于调试此类故障非常有用,但是更简单的是 Flink 支持在 IDE 内部进行本地调试。你可以设置断点,检查局部变量,并逐行执行代码。如果想了解 Flink 的工作原理和内部细节,查看 Flink 源码也是非常好的方法。
+在生产中,应用程序将在远程集群或一组容器中运行。如果集群或容器挂了,这就属于远程失败。JobManager 和 Task Manager 日志对于调试此类故障非常有用,但是更简单的是 Flink 支持在 IDE 内部进行本地调试。你可以设置断点,检查局部变量,并逐行执行代码。如果想了解 Flink 的工作原理和内部细节,查看 Flink 源码也是非常好的方法。

Review comment:
       ```suggestion
   在生产中,应用程序将在远程集群或一组容器中运行。如果集群或容器挂了,这就属于远程失败。JobManager 和 TaskManager 日志对于调试此类故障非常有用,但是更简单的是 Flink 支持在 IDE 内部进行本地调试。你可以设置断点,检查局部变量,并逐行执行代码。如果想了解 Flink 的工作原理和内部细节,查看 Flink 源码也是非常好的方法。
   ```
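   As an aside, when such a remote failure happens, the JobManager and TaskManager logs mentioned in this paragraph can be pulled straight from the containers. A minimal sketch, assuming a docker-based deployment with the usual `jobmanager`/`taskmanager` service names (the names are illustrative):

   ```sh
   # Dump the JobManager and TaskManager logs for post-mortem debugging
   docker logs "$(docker ps -q --filter name=jobmanager | head -n 1)" > jobmanager.log
   docker logs "$(docker ps -q --filter name=taskmanager | head -n 1)" > taskmanager.log
   ```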

##########
File path: docs/ops/deployment/docker.md
##########
@@ -395,13 +395,13 @@ The next chapters show examples of configuration files to run Flink.
     flink run -d -c ${JOB_CLASS_NAME} /job.jar
     ```
 
-  * or copy the JAR to the *Flink Master* container and submit the job using the [CLI](..//cli.html) from there, for example:
+  * or copy the JAR to the *JobManager* container and submit the job using the [CLI](..//cli.html) from there, for example:
 
     ```sh
     JOB_CLASS_NAME="com.job.ClassName"
-    MASTER_CONTAINER=$(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %}))
-    docker cp path/to/jar "${MASTER_CONTAINER}":/job.jar
-    docker exec -t -i "${MASTER_CONTAINER}" flink run -d -c ${JOB_CLASS_NAME} /job.jar
+    JM_CONTAINER=$(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %}))

Review comment:
       It is quite unfortunate that the abbreviation `JM` is now ambiguous.
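   Spelling the component out in the variable name would sidestep the abbreviation entirely. A sketch only (the variable name is illustrative, and `docker ps -q` is used here instead of the `--format` form in the snippet above):

   ```sh
   JOB_CLASS_NAME="com.job.ClassName"
   # Use an unabbreviated name so the variable unambiguously refers to the JobManager container
   JOBMANAGER_CONTAINER=$(docker ps -q --filter name=jobmanager | head -n 1)
   docker cp path/to/jar "${JOBMANAGER_CONTAINER}":/job.jar
   docker exec -t -i "${JOBMANAGER_CONTAINER}" flink run -d -c ${JOB_CLASS_NAME} /job.jar
   ```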

##########
File path: docs/Gemfile.lock
##########
@@ -4,7 +4,7 @@ GEM
     addressable (2.7.0)
       public_suffix (>= 2.0.2, < 5.0)
     colorator (1.1.0)
-    concurrent-ruby (1.1.5)

Review comment:
       Did you verify that the build process still works on buildbot? (instructions can be found in the [wiki](https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation))
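   A quick local sanity check is also possible (a sketch, assuming a standard Ruby/Bundler toolchain rather than the buildbot environment; the wiki instructions remain the authoritative reference):

   ```sh
   cd docs
   bundle install            # resolves gems against the updated Gemfile.lock
   bundle exec jekyll build  # fails if the concurrent-ruby bump breaks the docs build
   ```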

##########
File path: docs/ops/memory/mem_trouble.md
##########
@@ -48,7 +48,7 @@ The exception usually indicates that the JVM *direct memory* limit is too small
 Check whether user code or other external dependencies use the JVM *direct memory* and that it is properly accounted for.
 You can try to increase its limit by adjusting direct off-heap memory.
 See also how to configure off-heap memory for [TaskManagers](mem_setup_tm.html#configure-off-heap-memory-direct-or-native),
-[Masters](mem_setup_master.html#configure-off-heap-memory) and the [JVM arguments](mem_setup.html#jvm-parameters) which Flink sets.
+[Masters]({% link ops/memory/mem_setup_jobmanager.md %}#configure-off-heap-memory) and the [JVM arguments](mem_setup.html#jvm-parameters) which Flink sets.

Review comment:
       ```suggestion
   [JobManagers]({% link ops/memory/mem_setup_jobmanager.md %}#configure-off-heap-memory) and the [JVM arguments](mem_setup.html#jvm-parameters) which Flink sets.
   ```

##########
File path: docs/ops/memory/mem_setup_tm.md
##########
@@ -32,7 +32,7 @@ The further described memory configuration is applicable starting with the relea
 from earlier versions, check the [migration guide](mem_migration.html) because many changes were introduced with the *1.10* release.
 
 <span class="label label-info">Note</span> This memory setup guide is relevant <strong>only for TaskManagers</strong>!
-The TaskManager memory components have a similar but more sophisticated structure compared to the [memory model of the Master process](mem_setup_master.html).
+The TaskManager memory components have a similar but more sophisticated structure compared to the [memory model of the Master process]({% link ops/memory/mem_setup_jobmanager.md %}).

Review comment:
       ```suggestion
   The TaskManager memory components have a similar but more sophisticated structure compared to the [memory model of the JobManager process]({% link ops/memory/mem_setup_jobmanager.md %}).
   ```

##########
File path: docs/ops/security-ssl.zh.md
##########
@@ -129,7 +129,7 @@ security.ssl.internal.truststore: /path/to/file.truststore
 security.ssl.internal.truststore-password: truststore_password
 {% endhighlight %}
 
-When using a certificate that is not self-signed, but signed by a CA, you need to use certificate pinning to allow only a
+When using a certificate that is not self-signed, but signed by a CA, you need to use certificate pinning to allow only a 

Review comment:
       what changed here?
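   Whitespace-only edits like this one (the new line appears to add only a trailing space) are easy to surface with git's built-in whitespace check; a sketch, with an illustrative base ref:

   ```sh
   # Flag whitespace errors (e.g. trailing spaces) introduced by local changes
   git diff --check
   # Or limit the check to this file against the PR's base branch
   git diff --check origin/master -- docs/ops/security-ssl.zh.md
   ```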

##########
File path: docs/ops/memory/mem_migration.md
##########
@@ -23,7 +23,7 @@ under the License.
 -->
 
 The memory setup has changed a lot with the *1.10* release for [TaskManagers](mem_setup_tm.html) and with the *1.11*
-release for [Masters](mem_setup_master.html). Many configuration options were removed or their semantics changed.
+release for [Masters]({% link ops/memory/mem_setup_jobmanager.md %}). Many configuration options were removed or their semantics changed.

Review comment:
       ```suggestion
   release for [JobManagers]({% link ops/memory/mem_setup_jobmanager.md %}). Many configuration options were removed or their semantics changed.
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

