wuchong commented on a change in pull request #8200: [FLINK-11614] [docs-zh] Translate the "Configuring Dependencies" page into Chinese
URL: https://github.com/apache/flink/pull/8200#discussion_r278376961
 
 

 ##########
 File path: docs/dev/projectsetup/dependencies.zh.md
 ##########
 @@ -132,63 +113,45 @@ Below is an example adding the connector for Kafka 0.10 as a dependency (Maven s
 </dependency>
 {% endhighlight %}
 
-We recommend to package the application code and all its required dependencies into one *jar-with-dependencies* which
-we refer to as the *application jar*. The application jar can be submitted to an already running Flink cluster,
-or added to a Flink application container image.
-
-Projects created from the [Java Project Template]({{ site.baseurl }}/dev/projectsetup/java_api_quickstart.html) or
-[Scala Project Template]({{ site.baseurl }}/dev/projectsetup/scala_api_quickstart.html) are configured to automatically include
-the application dependencies into the application jar when running `mvn clean package`. For projects that are
-not set up from those templates, we recommend to add the Maven Shade Plugin (as listed in the Appendix below)
-to build the application jar with all required dependencies.
+We recommend packaging the application code and all of its required dependencies into one *jar-with-dependencies* jar,
+which we refer to as the *application jar*. The application jar can be submitted to an already running Flink cluster,
+or added to a Flink application container image.
+
+Applications created from the [Java Project Template]({{ site.baseurl }}/dev/projectsetup/java_api_quickstart_zh.html) or
+[Scala Project Template]({{ site.baseurl }}/dev/projectsetup/scala_api_quickstart_zh.html) automatically package the
+application dependencies into the application jar when built with `mvn clean package`. For applications not created from
+those templates, we recommend adding the Maven Shade Plugin to build the application jar (the concrete configuration is
+given in the Appendix below).
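
For orientation, here is a minimal sketch of how such a Maven Shade Plugin setup might look (the plugin version and `mainClass` are placeholders; the full recommended configuration, including artifact excludes, is the one listed in the Appendix):

{% highlight xml %}
<build>
    <plugins>
        <!-- Builds a fat "application jar" from all compile-scope dependencies during `mvn clean package`. -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.1.1</version> <!-- example version -->
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <!-- Hypothetical entry class; replace with your job's main class. -->
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <mainClass>my.org.MyFlinkJob</mainClass>
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
{% endhighlight %}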
 
-**Important:** For Maven (and other build tools) to correctly package the dependencies into the application jar,
-these application dependencies must be specified in scope *compile* (unlike the core dependencies, which
-must be specified in scope *provided*).
+**Note:** For Maven (and other build tools) to correctly package the dependencies into the application jar, the scope of
+these application dependencies must be set to *compile* (unlike the core dependencies, whose scope must be set to *provided*).
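
As a sketch of that scope rule (the artifact IDs and the 1.8.0 version are only examples; adjust them to your Flink and Scala versions):

{% highlight xml %}
<!-- Core dependency: provided by the Flink runtime, so it is NOT packaged into the application jar. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.8.0</version> <!-- example version -->
    <scope>provided</scope>
</dependency>

<!-- Application dependency (e.g. the Kafka connector above): packaged into the application jar, so scope "compile" (the Maven default). -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
    <version>1.8.0</version>
    <scope>compile</scope>
</dependency>
{% endhighlight %}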
 
+## Scala Versions
 
-## Scala Versions
+Scala versions (2.10, 2.11, 2.12, etc.) are not binary compatible with one another. For that reason, a Flink setup that
+depends on Scala 2.11 cannot run an application that depends on Scala 2.12.
 
-Scala versions (2.10, 2.11, 2.12, etc.) are not binary compatible with one another.
-For that reason, Flink for Scala 2.11 cannot be used with an application that uses
-Scala 2.12.
+All Flink dependencies that depend on Scala are suffixed with the Scala version they are built for, for example `flink-streaming-scala_2.11`.
 
-All Flink dependencies that (transitively) depend on Scala are suffixed with the
-Scala version that they are built for, for example `flink-streaming-scala_2.11`.
+Developers that only use Java can pick any Scala version; Scala developers need to pick the Scala version that matches their application.
 
-Developers that only use Java can pick any Scala version, Scala developers need to
-pick the Scala version that matches their application's Scala version.
+For details on how to build Flink for a specific Scala version, please refer to the [build guide]({{ site.baseurl }}/flinkDev/building_zh.html#scala-versions).
 
-Please refer to the [build guide]({{ site.baseurl }}/flinkDev/building.html#scala-versions)
-for details on how to build Flink for a specific Scala version.
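
To make the Scala suffix explicit, here is a small sketch of the property-based convention used by the project templates (the `scala.binary.version` property name follows the quickstart convention, and 1.8.0 is only an example version):

{% highlight xml %}
<properties>
    <!-- Pick the Scala line your Flink setup and application are built for, e.g. 2.11 or 2.12. -->
    <scala.binary.version>2.11</scala.binary.version>
    <flink.version>1.8.0</flink.version>
</properties>

<dependencies>
    <!-- The suffix must match the chosen Scala version across all Scala-dependent Flink artifacts. -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>
{% endhighlight %}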
+## Hadoop Dependencies
 
-## Hadoop Dependencies
+**General rule: it should never be necessary to add Hadoop dependencies directly to your application.**
+*(The only exception is when using Flink's Hadoop compatibility wrappers to work with Hadoop input-/output formats.)*
 
-**General rule: It should never be necessary to add Hadoop dependencies directly to your application.**
-*(The only exception being when using existing Hadoop input-/output formats with Flink's Hadoop compatibility wrappers)*
+If you want to use Hadoop with Flink, you need a Flink setup that includes the Hadoop dependencies, rather than
+adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide]({{ site.baseurl }}/ops/deployment/hadoop_zh.html) for details.
 
-If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than
-adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide]({{ site.baseurl }}/ops/deployment/hadoop.html)
-for details.
+This design has two main reasons:
+  - Some Hadoop operations may already happen in Flink's core before the user application is started, for example setting up HDFS paths for checkpoints, authenticating via Hadoop's Kerberos tokens, or deploying on YARN.
 
 Review comment:
   ```suggestion
    - Some Hadoop interactions may already happen in Flink's core before the user application is started, for example setting up HDFS paths for checkpoints, authenticating via Hadoop's Kerberos tokens, or deploying on YARN.
   ```
