caicancai commented on code in PR #810:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/810#discussion_r1584453635


##########
docs/content.zh/docs/concepts/overview.md:
##########
@@ -24,78 +24,96 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Overview
-Flink Kubernetes Operator acts as a control plane to manage the complete 
deployment lifecycle of Apache Flink applications. Although Flink’s native 
Kubernetes integration already allows you to directly deploy Flink applications 
on a running Kubernetes(k8s) cluster, [custom 
resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
 and the [operator 
pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) have 
also become central to a Kubernetes native deployment experience.
-
-Flink Kubernetes Operator aims to capture the responsibilities of a human 
operator who is managing Flink deployments. Human operators have deep knowledge 
of how Flink deployments ought to behave, how to start clusters, how to deploy 
jobs, how to upgrade them and how to react if there are problems. The main goal 
of the operator is the automation of these activities, which cannot be achieved 
through the Flink native integration alone.
-
-## Features
-### Core
-- Fully-automated [Job Lifecycle Management]({{< ref 
"docs/custom-resource/job-management" >}})
-  - Running, suspending and deleting applications
-  - Stateful and stateless application upgrades
-  - Triggering and managing savepoints
-  - Handling errors, rolling-back broken upgrades
-- Multiple Flink version support: v1.15, v1.16, v1.17, v1.18
+<a name="overview"></a>
+
+# 概述
+Flink Kubernetes Operator 扮演控制平面的角色,用于管理 Apache Flink 应用程序的完整部署生命周期。尽管 Flink 
的原生 Kubernetes 集成已经允许你直接在运行的 Kubernetes(k8s) 集群上部署 Flink 应用程序,但 
[自定义资源](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
 和 [operator 
模式](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) 也已成为 
Kubernetes 原生部署体验的核心。
+
+Flink Kubernetes Operator 旨在承担由人工运维人员(human operator)管理 Flink 部署时所扮演的职责。人工运维人员对 Flink 部署应该如何运行、如何启动集群、如何部署作业、如何升级作业以及出现问题时如何应对有着深入的了解。Operator 的主要目标是将这些操作自动化,而这无法仅通过 Flink 原生集成来实现。
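为了更直观地说明 Operator 所管理的对象,下面给出一个最简的 FlinkDeployment 自定义资源草稿(示意性质:资源名称、镜像、jar 路径与资源数值均为假设的示例值,并非本文档原有内容,具体字段请以官方示例为准):

```yaml
# 一个最简的 FlinkDeployment 示意(字段取值为假设的示例值)
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example                # 假设的资源名称
spec:
  image: flink:1.18                  # 假设使用的 Flink 镜像
  flinkVersion: v1_18
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless           # 也可以使用 savepoint / last-state
```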
+
+
+<a name="features"></a>
+
+## 特性
+
+<a name="core"></a>
+
+### 核心
+- 全自动 [Job Lifecycle Management]({{< ref "docs/custom-resource/job-management" 
>}})
+  - 运行、暂停和删除应用程序
+  - 有状态和无状态应用程序升级
+  - 保存点的触发和管理
+  - 处理错误,回滚失败的升级
+- 多 Flink 版本支持:v1.15, v1.16, v1.17, v1.18
 - [Deployment Modes]({{< ref 
"docs/custom-resource/overview#application-deployments" >}}):
-  - Application cluster
-  - Session cluster
-  - Session job
+  - 应用程序集群
+  - 会话集群
+  - 会话作业
 - Built-in [High 
Availability](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/)
   
 - Extensible framework
   - [Custom validators]({{< ref 
"docs/operations/plugins#custom-flink-resource-validators" >}})
   - [Custom resource listeners]({{< ref 
"docs/operations/plugins#custom-flink-resource-listeners" >}})  
 - Advanced [Configuration]({{< ref "docs/operations/configuration" >}}) 
management
-  - Default configurations with dynamic updates
-  - Per job configuration
-  - Environment variables
+  - 默认配置与动态更新
+  - 每个作业单独配置
+  - 环境变量
 - POD augmentation via [Pod Templates]({{< ref 
"docs/custom-resource/pod-template" >}})
-  - Native Kubernetes POD definitions
-  - Layering (Base/JobManager/TaskManager overrides)
+  - 原生 Kubernetes POD 定义
+  - 分层(Base/JobManager/TaskManager 覆盖)
 - [Job Autoscaler]({{< ref "docs/custom-resource/autoscaler" >}})
-  - Collect lag and utilization metrics
-  - Scale job vertices to the ideal parallelism
-  - Scale up and down as the load changes
-### Operations
+  - 收集延迟和利用率指标
+  - 将作业顶点调整到合适的并行度
+  - 根据负载的变化进行扩展和缩减(见下方的配置示意)
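针对上面的 Job Autoscaler 功能,下面给出一个示意性的配置片段,通常放在 FlinkDeployment 的 `flinkConfiguration` 中(键名与取值是按当前文档理解给出的假设示例,并非本文档原有内容,具体请以 autoscaler 配置文档为准):

```yaml
# 在 FlinkDeployment 的 spec.flinkConfiguration 中启用 autoscaler 的示意片段
# (键名与取值为假设示例,实际键名请参考 autoscaler 配置文档)
flinkConfiguration:
  job.autoscaler.enabled: "true"                      # 开启自动扩缩容
  job.autoscaler.stabilization.interval: "1m"         # 扩缩容后的稳定窗口
  job.autoscaler.metrics.window: "5m"                 # 指标聚合窗口
  job.autoscaler.target.utilization: "0.6"            # 目标利用率
  job.autoscaler.target.utilization.boundary: "0.2"   # 允许的利用率波动范围
```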
+
+<a name="operations"></a>
+
+### 运维
 - Operator [Metrics]({{< ref "docs/operations/metrics-logging#metrics" >}})
-  - Utilizes the well-established [Flink Metric 
System](https://nightlies.apache.org/flink/flink-docs-master/docs/ops/metrics)
-  - Pluggable metrics reporters
-  - Detailed resources and kubernetes api access metrics
+  - 使用成熟的 [Flink Metric 
System](https://nightlies.apache.org/flink/flink-docs-master/docs/ops/metrics)
+  - 可插拔的指标报告器
+  - 详细的资源和 Kubernetes API 访问指标
 - Fully-customizable [Logging]({{< ref 
"docs/operations/metrics-logging#logging" >}})
-  - Default log configuration
-  - Per job log configuration
-  - Sidecar based log forwarders
-- Flink Web UI and REST Endpoint Access
-  - Fully supported Flink Native Kubernetes [service expose 
types](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#accessing-flinks-web-ui)
-  - Dynamic [Ingress templates]({{< ref "docs/operations/ingress" >}})
+  - 默认日志配置
+  - 每个作业日志配置
+  - 基于 sidecar 的日志转发器
+- Flink Web UI 和 REST 端点访问
+  - 完整支持 Flink 原生 Kubernetes 
[服务暴露类型](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#accessing-flinks-web-ui)
+  - 通过 [Ingress 模板]({{< ref "docs/operations/ingress" >}}) 动态暴露服务
 - [Helm based installation]({{< ref "docs/operations/helm" >}})
-  - Automated [RBAC configuration]({{< ref "docs/operations/rbac" >}})
-  - Advanced customization techniques
-- Up-to-date public repositories
+  - 自动化 [RBAC 配置]({{< ref "docs/operations/rbac" >}})
+  - 高级定制化技术
+- 最新的公共存储库
   - GitHub Container Registry 
[ghcr.io/apache/flink-kubernetes-operator](http://ghcr.io/apache/flink-kubernetes-operator)
   - DockerHub 
[https://hub.docker.com/r/apache/flink-kubernetes-operator](https://hub.docker.com/r/apache/flink-kubernetes-operator)
 
-## Built-in Examples
+<a name="built-in-examples"></a>
+
+## 内置示例
 
-The operator project comes with a wide variety of built in examples to show 
you how to use the operator functionality.
-The examples are maintained as part of the operator repo and can be found 
[here](https://github.com/apache/flink-kubernetes-operator/tree/main/examples).
+Operator 项目提供了各种内置示例,用于展示如何使用 Operator 的功能。这些示例作为 operator 存储库的一部分进行维护,可以在[这里](https://github.com/apache/flink-kubernetes-operator/tree/main/examples)找到。
 
-**What is covered:**
+**涵盖以下内容:**
+ - Application、Session 和 SessionJob 提交
+ - 检查点(Checkpoint)和高可用(HA)配置
+ - Java, SQL 和 Python Flink 作业
+ - 作业配置和任务管理器配置
+ - Ingress、日志和指标配置
+ - 使用 Kustomize 的高级 Operator 部署技术
+ - 更多...
 
- - Application, Session and SessionJob submission
- - Checkpointing and HA configuration
- - Java, SQL and Python Flink jobs
- - Ingress, logging and metrics configuration
- - Advanced operator deployment techniques using Kustomize
- - And some more...
+<a name="known-issues-and-limitations"></a>
 
-## Known Issues & Limitations
+## 已知问题和限制
 
-### JobManager High-availability
-The Operator supports both [Kubernetes HA 
Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/)
 and [Zookeeper HA 
Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/zookeeper_ha/)
 for providing High-availability for Flink jobs. The HA solution can benefit 
form using additional [Standby 
replicas](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/overview/),
 it will result in a faster recovery time, but Flink jobs will still restart 
when the Leader JobManager goes down.
+<a name="jobManager-high-availability"></a>
 
-### JobResultStore Resource Leak
-To mitigate the impact of 
[FLINK-27569](https://issues.apache.org/jira/browse/FLINK-27569) the operator 
introduced a workaround 
[FLINK-27573](https://issues.apache.org/jira/browse/FLINK-27573) by setting 
`job-result-store.delete-on-commit=false` and a unique value for 
`job-result-store.storage-path` for every cluster launch. The storage path for 
older runs must be cleaned up manually, keeping the latest directory always:
+### JobManager 高可用性
+Operator 支持使用 [Kubernetes HA 服务](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/) 和 [Zookeeper HA 服务](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/zookeeper_ha/) 为 Flink 作业提供高可用性。HA 方案还可以通过配置额外的[备用副本](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/overview/)来缩短恢复时间,但当 Leader JobManager 停机时,Flink 作业仍会重新启动。
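作为参考,下面给出在 FlinkDeployment 中启用 Kubernetes HA 以及备用 JobManager 副本的示意片段(示意性质:存储路径为占位值,字段取值为假设示例,并非本文档原有内容,具体请以 Flink HA 文档为准):

```yaml
# 在 FlinkDeployment 中启用 Kubernetes HA 的示意片段(取值为假设示例)
spec:
  flinkConfiguration:
    high-availability.type: kubernetes                     # 较旧的 Flink 版本可使用 high-availability: kubernetes
    high-availability.storageDir: s3://my-bucket/flink-ha  # 假设的 HA 元数据存储路径(占位值)
  jobManager:
    replicas: 2                                            # 额外的备用 JobManager 副本,可加快故障恢复
```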
+
+<a name="jobResultStore-resource-leak"></a>
+
+### JobResultStore 资源泄漏
+为了缓解 [FLINK-27569](https://issues.apache.org/jira/browse/FLINK-27569) 的影响,Operator 引入了临时解决方案 [FLINK-27573](https://issues.apache.org/jira/browse/FLINK-27573):设置 `job-result-store.delete-on-commit=false`,并在每次集群启动时为 `job-result-store.storage-path` 设置唯一值。旧运行产生的存储路径必须手动清理,并始终保留最新的目录:

Review Comment:
   done



##########
docs/content.zh/docs/concepts/overview.md:
##########
@@ -104,6 +122,7 @@ drwxr-xr-x 2 9999 9999 60 May 12 09:46 
a6031ec7-ab3e-4b30-ba77-6498e58e6b7f
 drwxr-xr-x 2 9999 9999 60 May 11 15:11 b6fb2a9c-d1cd-4e65-a9a1-e825c4b47543
 ```
 
-### AuditUtils can log sensitive information present in the custom resources
-As reported in 
[FLINK-30306](https://issues.apache.org/jira/browse/FLINK-30306) when Flink 
custom resources change the operator logs the change, which could include 
sensitive information. We suggest ingesting secrets to Flink containers during 
runtime to mitigate this.
-Also note that anyone who has access to the custom resources already had 
access to the potentially sensitive information in question, but folks who only 
have access to the logs could also see them now. We are planning to introduce 
redaction rules to AuditUtils to improve this in a later release.
+<a 
name="auditUtils-can-log-sensitive-information-present-in-the-custom-resources"></a>
+
+### AuditUtils 可以记录自定义资源中的敏感信息
+正如 [FLINK-30306](https://issues.apache.org/jira/browse/FLINK-30306) 中报告的,当 Flink 自定义资源发生变化时,Operator 会记录这些变更,其中可能包含敏感信息。我们建议在运行时将机密信息注入到 Flink 容器中,以降低这种风险。另请注意,能够访问自定义资源的人本来就可以访问这些潜在的敏感信息,但现在仅能访问日志的人也可以看到它们。我们计划在后续版本中为 AuditUtils 引入脱敏规则来改进这一问题。
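配合上述建议(在运行时向 Flink 容器注入机密信息,而不是写入自定义资源),下面给出一个通过 Pod 模板引用 Kubernetes Secret 的示意片段(Secret 名称与注入方式均为假设示例,并非本文档原有内容):

```yaml
# 通过 Pod 模板在运行时将 Secret 注入 Flink 容器的示意片段
# (Secret 名称 my-flink-secrets 为假设示例)
spec:
  podTemplate:
    apiVersion: v1
    kind: Pod
    spec:
      containers:
        - name: flink-main-container        # Operator 约定的主容器名称
          envFrom:
            - secretRef:
                name: my-flink-secrets      # 敏感信息保存在 Secret 中,而非自定义资源里
```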

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
