[flink] branch release-1.11 updated: [FLINK-18941][docs-zh] Correct typos in \docs\ops\memory\mem_setup_jobmanager.zh.md.
This is an automated email from the ASF dual-hosted git repository. xtsong pushed a commit to branch release-1.11 in repository https://gitbox.apache.org/repos/asf/flink.git The following commit(s) were added to refs/heads/release-1.11 by this push: new 2001a7c [FLINK-18941][docs-zh] Correct typos in \docs\ops\memory\mem_setup_jobmanager.zh.md. 2001a7c is described below commit 2001a7c99539120e729a0d4a057b3142c6df169a Author: kecheng AuthorDate: Fri Aug 21 11:42:58 2020 +0800 [FLINK-18941][docs-zh] Correct typos in \docs\ops\memory\mem_setup_jobmanager.zh.md. This closes #13211. --- docs/ops/memory/mem_setup_jobmanager.zh.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/ops/memory/mem_setup_jobmanager.zh.md b/docs/ops/memory/mem_setup_jobmanager.zh.md index 455ac23..980d131 100644 --- a/docs/ops/memory/mem_setup_jobmanager.zh.md +++ b/docs/ops/memory/mem_setup_jobmanager.zh.md @@ -93,13 +93,13 @@ Flink 需要多少 *JVM 堆内存*,很大程度上取决于运行的作业数 如果遇到 JobManager 进程抛出 “OutOfMemoryError: Direct buffer memory” 的异常,可以尝试调大这项配置。 请参考[常见问题](mem_trouble.html#outofmemoryerror-direct-buffer-memory)。 -一下情况可能用到堆外内存: +以下情况可能用到堆外内存: * Flink 框架依赖(例如 Akka 的网络通信) * 在作业提交时(例如一些特殊的批处理 Source)及 Checkpoint 完成的回调函数中执行的用户代码 提示 如果同时配置了 [Flink 总内存](mem_setup.html#configure-total-memory)和 [JVM 堆内存](#configure-jvm-heap),且没有配置*堆外内存*,那么*堆外内存*的大小将会是 [Flink 总内存](mem_setup.html#configure-total-memory)减去[JVM 堆内存](#configure-jvm-heap)。 -这种情况下,*对外内存*的默认大小将不会生效。 +这种情况下,*堆外内存*的默认大小将不会生效。
[flink] branch master updated: [FLINK-18941][docs-zh] Correct typos in \docs\ops\memory\mem_setup_jobmanager.zh.md.
This is an automated email from the ASF dual-hosted git repository. xtsong pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/flink.git The following commit(s) were added to refs/heads/master by this push: new b51ca73 [FLINK-18941][docs-zh] Correct typos in \docs\ops\memory\mem_setup_jobmanager.zh.md. b51ca73 is described below commit b51ca7352f5ce9746f0fc00a395ca229cd76f9ef Author: kecheng AuthorDate: Fri Aug 21 11:42:58 2020 +0800 [FLINK-18941][docs-zh] Correct typos in \docs\ops\memory\mem_setup_jobmanager.zh.md. This closes #13211. --- docs/ops/memory/mem_setup_jobmanager.zh.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/ops/memory/mem_setup_jobmanager.zh.md b/docs/ops/memory/mem_setup_jobmanager.zh.md index 455ac23..980d131 100644 --- a/docs/ops/memory/mem_setup_jobmanager.zh.md +++ b/docs/ops/memory/mem_setup_jobmanager.zh.md @@ -93,13 +93,13 @@ Flink 需要多少 *JVM 堆内存*,很大程度上取决于运行的作业数 如果遇到 JobManager 进程抛出 “OutOfMemoryError: Direct buffer memory” 的异常,可以尝试调大这项配置。 请参考[常见问题](mem_trouble.html#outofmemoryerror-direct-buffer-memory)。 -一下情况可能用到堆外内存: +以下情况可能用到堆外内存: * Flink 框架依赖(例如 Akka 的网络通信) * 在作业提交时(例如一些特殊的批处理 Source)及 Checkpoint 完成的回调函数中执行的用户代码 提示 如果同时配置了 [Flink 总内存](mem_setup.html#configure-total-memory)和 [JVM 堆内存](#configure-jvm-heap),且没有配置*堆外内存*,那么*堆外内存*的大小将会是 [Flink 总内存](mem_setup.html#configure-total-memory)减去[JVM 堆内存](#configure-jvm-heap)。 -这种情况下,*对外内存*的默认大小将不会生效。 +这种情况下,*堆外内存*的默认大小将不会生效。
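The doc fixed by the two commits above describes a derivation rule: if both *total Flink memory* and *JVM heap* are configured but *off-heap memory* is not, the off-heap size becomes total minus heap, and the off-heap default no longer takes effect. A minimal Python sketch of that rule (illustrative only, not Flink's actual code; the 128 MiB default mirrors `jobmanager.memory.off-heap.size` as an assumption):

```python
def derive_jobmanager_off_heap(total_mib, heap_mib, default_off_heap_mib=128):
    """Sketch of the rule in mem_setup_jobmanager: when both total Flink
    memory and JVM heap are explicitly configured and off-heap is not,
    off-heap = total - heap and the configured default is ignored."""
    if total_mib is not None and heap_mib is not None:
        return total_mib - heap_mib  # default does not take effect
    return default_off_heap_mib


# e.g. 1600 MiB total with a 1024 MiB heap leaves 576 MiB off-heap
off_heap = derive_jobmanager_off_heap(1600, 1024)
```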
[flink] annotated tag release-1.10.2 updated (68bb8b6 -> 6a4fa0a)
This is an automated email from the ASF dual-hosted git repository. zhuzh pushed a change to annotated tag release-1.10.2 in repository https://gitbox.apache.org/repos/asf/flink.git. *** WARNING: tag release-1.10.2 was modified! *** from 68bb8b6 (commit) to 6a4fa0a (tag) tagging f59eef29a0853fd861f0f6c52b31fa45f3c26b2a (tag) length 990 bytes by Zhu Zhu on Fri Aug 21 11:23:36 2020 +0800 - Log - Apache Flink 1.10.2 -BEGIN PGP SIGNATURE- Version: GnuPG v2.0.22 (GNU/Linux) iQIcBAABAgAGBQJfPz5IAAoJEGdnSHzVBYWcRLUP/ikldzRywUDI16pQRPyTeq5k EvkrAnxWVmHeArxwuVo2bY6cYTiuPBJOqoaFmtl/QlQpRuZR4mA8/jypfbqXb9Pw wKzWtznkjFIaouzNz0JUR9FW824F04Rv1Yt06hoX43P7/M998FvkiCL8fKsUjGgf eDeWmRnYrVXR4plcxo1wklaYNH+Z4CBDHis1UqABQ9Atgg1iJAvKvfzKDAAd8WoU sg7i8Xhum5xc8h3y4FZ6jLfn3L6RDEIrvfQ3UVem00tnwVwcK2x3W51CNkibyKU/ PvvHArq3I6IMD9/xdhYpBfPKvz2rfevASp1R7CWpyQPeIlSdvrU3uJs9jzKDTmKw qSuzv8K8q0Dj0Lgg+o3FWWICuGT7kovuuX+RQitVgGLUajb36fcGIZ+ZbPCNuUYQ ljzQiEV90JV5iKGhGrEABfCHl9PypV7745eIEyv09H25pXEapiYUZ9ckL5guSZBn VezHyInVUYIxBV6eyG0dwXaXR54S74p2gBhhwcYdEi+nPi0wa2NQL4nOOqz7qfJh YKh6Pa5GteDDI2/dFzW9dZXBr1ry37K/2Ioko2Qkz0qHwyZ5iEGRH73tgZKDs4aK QQimD8iQqUhG8HQ+JXjS3bvSZL4OLLC6JmfSYKHhGgAJ3ZqUJEeqeT803ouRyK36 EPkcixv0mppka1qkOkU7 =1cjb -END PGP SIGNATURE- --- No new revisions were added by this update. Summary of changes:
svn commit: r41049 - /dev/flink/flink-1.10.2-rc2/ /release/flink/flink-1.10.2/
Author: zhijiang Date: Fri Aug 21 02:34:19 2020 New Revision: 41049 Log: Release Flink 1.10.2 Added: release/flink/flink-1.10.2/ - copied from r41048, dev/flink/flink-1.10.2-rc2/ Removed: dev/flink/flink-1.10.2-rc2/
[flink] branch release-1.11 updated: [FLINK-18912][python][docs] Add Python api tutorial under Python GettingStart (#13192)
This is an automated email from the ASF dual-hosted git repository. dianfu pushed a commit to branch release-1.11 in repository https://gitbox.apache.org/repos/asf/flink.git The following commit(s) were added to refs/heads/release-1.11 by this push: new b6d53fb [FLINK-18912][python][docs] Add Python api tutorial under Python GettingStart (#13192) b6d53fb is described below commit b6d53fb1717023faafb79948a305b8dc1e3de2c4 Author: Hequn Cheng AuthorDate: Thu Aug 20 09:52:33 2020 +0800 [FLINK-18912][python][docs] Add Python api tutorial under Python GettingStart (#13192) --- docs/dev/python/getting-started/tutorial/index.md | 24 .../python/getting-started/tutorial/index.zh.md| 24 .../tutorial/table_api_tutorial.md}| 9 +- .../tutorial/table_api_tutorial.zh.md} | 9 +- docs/try-flink/python_table_api.md | 156 +--- docs/try-flink/python_table_api.zh.md | 158 + 6 files changed, 60 insertions(+), 320 deletions(-) diff --git a/docs/dev/python/getting-started/tutorial/index.md b/docs/dev/python/getting-started/tutorial/index.md new file mode 100644 index 000..b862506 --- /dev/null +++ b/docs/dev/python/getting-started/tutorial/index.md @@ -0,0 +1,24 @@ +--- +title: "Tutorial" +nav-id: python_tutorial +nav-parent_id: python_start +nav-pos: 20 +--- + diff --git a/docs/dev/python/getting-started/tutorial/index.zh.md b/docs/dev/python/getting-started/tutorial/index.zh.md new file mode 100644 index 000..e81b7a2 --- /dev/null +++ b/docs/dev/python/getting-started/tutorial/index.zh.md @@ -0,0 +1,24 @@ +--- +title: "教程" +nav-id: python_tutorial +nav-parent_id: python_start +nav-pos: 20 +--- + diff --git a/docs/try-flink/python_table_api.md b/docs/dev/python/getting-started/tutorial/table_api_tutorial.md similarity index 97% copy from docs/try-flink/python_table_api.md copy to docs/dev/python/getting-started/tutorial/table_api_tutorial.md index 9c8bd9c..401020c 100644 --- a/docs/try-flink/python_table_api.md +++ b/docs/dev/python/getting-started/tutorial/table_api_tutorial.md @@ -1,8 
+1,7 @@ --- -title: "Python API Tutorial" -nav-title: Python API -nav-parent_id: try-flink -nav-pos: 4 +title: "Table API Tutorial" +nav-parent_id: python_tutorial +nav-pos: 20 --- -This walkthrough will quickly get you started building a pure Python Flink project. - -Please refer to the Python Table API [installation guide]({% link dev/python/getting-started/installation.md %}) on how to set up the Python execution environments. - * This will be replaced by the TOC {:toc} -## Setting up a Python Project - -You can begin by creating a Python project and installing the PyFlink package following the [installation guide]({% link dev/python/getting-started/installation.md %}#installation-of-pyflink). - -## Writing a Flink Python Table API Program - -Table API applications begin by declaring a table environment; either a `BatchTableEvironment` for batch applications or `StreamTableEnvironment` for streaming applications. -This serves as the main entry point for interacting with the Flink runtime. -It can be used for setting execution parameters such as restart strategy, default parallelism, etc. -The table config allows setting Table API specific configurations. - -{% highlight python %} -exec_env = ExecutionEnvironment.get_execution_environment() -exec_env.set_parallelism(1) -t_config = TableConfig() -t_env = BatchTableEnvironment.create(exec_env, t_config) -{% endhighlight %} - -The the table environment created, you can declare source and sink tables. 
- -{% highlight python %} -t_env.connect(FileSystem().path('/tmp/input')) \ -.with_format(OldCsv() - .field('word', DataTypes.STRING())) \ -.with_schema(Schema() - .field('word', DataTypes.STRING())) \ -.create_temporary_table('mySource') - -t_env.connect(FileSystem().path('/tmp/output')) \ -.with_format(OldCsv() - .field_delimiter('\t') - .field('word', DataTypes.STRING()) - .field('count', DataTypes.BIGINT())) \ -.with_schema(Schema() - .field('word', DataTypes.STRING()) - .field('count', DataTypes.BIGINT())) \ -.create_temporary_table('mySink') -{% endhighlight %} -You can also use the TableEnvironment.sql_update() method to register a source/sink table defined in DDL: -{% highlight python %} -my_source_ddl = """ -create table mySource ( -word VARCHAR -) with ( -'connector.type' = 'filesystem', -'format.type' = 'csv', -'connector.path' = '/tmp/input' -) -""" - -my_sink_ddl = """ -create table mySink ( -word VARCHAR, -`count` BIGINT -) with ( -'connector.type' = 'filesystem', -'format.type' = 'csv', -'connector.path' = '/tmp/output' -) -""" - -t_env.sql_update(my_source_ddl) -t_env.sql_update(my_sink_ddl)
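The tutorial text moved by this commit registers source and sink tables via DDL strings passed to `TableEnvironment.sql_update()`. As a pure-Python illustration, the same filesystem/CSV DDL can be generated programmatically; `filesystem_csv_ddl` is a hypothetical helper for this sketch, not part of the PyFlink API:

```python
def filesystem_csv_ddl(table_name, columns, path):
    # Hypothetical helper: renders the filesystem/CSV DDL shown in the
    # tutorial ('connector.type' = 'filesystem', 'format.type' = 'csv').
    cols = ",\n    ".join(f"{name} {dtype}" for name, dtype in columns)
    return (
        f"create table {table_name} (\n"
        f"    {cols}\n"
        ") with (\n"
        "    'connector.type' = 'filesystem',\n"
        "    'format.type' = 'csv',\n"
        f"    'connector.path' = '{path}'\n"
        ")"
    )


my_sink_ddl = filesystem_csv_ddl(
    "mySink", [("word", "VARCHAR"), ("`count`", "BIGINT")], "/tmp/output"
)
```

The resulting string has the same shape as the `my_sink_ddl` literal in the diff and could then be registered with `t_env.sql_update(my_sink_ddl)`.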
svn commit: r41047 - /dev/flink/flink-1.10.2-rc1/
Author: zhuzh Date: Thu Aug 20 15:32:42 2020 New Revision: 41047 Log: Remove old release candidates flink-1.10.2-rc1 for Apache Flink flink-1.10.2 Removed: dev/flink/flink-1.10.2-rc1/
[flink-web] 02/02: rebuild page
This is an automated email from the ASF dual-hosted git repository. rmetzger pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git commit 939aaac10881c831ce28bdb89ba9681a367cb4d6 Author: Robert Metzger AuthorDate: Thu Aug 20 13:43:33 2020 +0200 rebuild page --- content/blog/feed.xml | 219 +++- content/blog/index.html| 36 ++- content/blog/page10/index.html | 40 +-- content/blog/page11/index.html | 43 +-- content/blog/page12/index.html | 43 +-- content/blog/page13/index.html | 25 ++ content/blog/page2/index.html | 36 ++- content/blog/page3/index.html | 38 +-- content/blog/page4/index.html | 41 +-- content/blog/page5/index.html | 39 ++- content/blog/page6/index.html | 38 +-- content/blog/page7/index.html | 40 +-- content/blog/page8/index.html | 40 +-- content/blog/page9/index.html | 40 +-- content/img/blog/flink-docker/flink-docker.gif | Bin 0 -> 4844328 bytes content/index.html | 6 +- content/news/2020/08/20/flink-docker.html | 347 + content/zh/index.html | 6 +- 18 files changed, 766 insertions(+), 311 deletions(-) diff --git a/content/blog/feed.xml b/content/blog/feed.xml index 33f3413..807bf73 100644 --- a/content/blog/feed.xml +++ b/content/blog/feed.xml @@ -7,6 +7,98 @@ https://flink.apache.org/blog/feed.xml"; rel="self" type="application/rss+xml" /> +The State of Flink on Docker +With over 50 million downloads from Docker Hub, the Flink docker images are a very popular deployment option.
+The Flink community recently put some effort into improving the Docker experience for our users with the goal to reduce confusion and improve usability.
+
+Let’s quickly break down the recent improvements:
+
+- Reduce confusion: Flink used to have 2 Dockerfiles and a 3rd file maintained outside of the official repository, all with different features and varying stability. Now, we have one central place for all images: apache/flink-docker. Here, we keep all the Dockerfiles for the different releases. Check out the detailed readme of that repository for further explanation on the different branches, as well as the Flink Improvement Proposal (FLIP-111) that contains the detailed planning. The apache/flink-docker repository also seeds the official Flink image on Docker Hub.
+- Improve Usability: The Dockerfiles are used for various purposes: Native Docker deployments, Flink on Kubernetes, the (unofficial) Flink helm example and t [...] The new images support passing configuration variables via a FLINK_PROPERTIES environment variable. Users can enable default plugins with the ENABLE_BUILT_IN_PLUGINS environment variable.
+
+Looking into the future, there are already some interesting potential improvements lined up:
+
+- Java 11 Docker images (already completed)
+- Use vanilla docker-entrypoint with flink-kubernetes (in progress)
+- History server support
+- Support for OpenShift
+
+How do I get started?
+
+This is a short tutorial on how to start a Flink Session Cluster with Docker.
+
+A Flink Session cluster can be used to run multiple jobs. Each job needs to be submitted to the cluster after it has been deployed. To deploy a Flink Session cluster with Docker, you need to start a
[flink-web] branch asf-site updated (e7a3a96 -> 939aaac)
This is an automated email from the ASF dual-hosted git repository. rmetzger pushed a change to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git. from e7a3a96 rebuild site new e7376f0 [blog] Add post about flink on docker new 939aaac rebuild page The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: _posts/2020-08-20-flink-docker.md | 98 + content/blog/feed.xml | 219 + content/blog/index.html| 36 ++-- content/blog/page10/index.html | 40 ++-- content/blog/page11/index.html | 43 ++-- content/blog/page12/index.html | 43 ++-- content/blog/page13/index.html | 25 +++ content/blog/page2/index.html | 36 ++-- content/blog/page3/index.html | 38 ++-- content/blog/page4/index.html | 41 ++-- content/blog/page5/index.html | 39 ++-- content/blog/page6/index.html | 38 ++-- content/blog/page7/index.html | 40 ++-- content/blog/page8/index.html | 40 ++-- content/blog/page9/index.html | 40 ++-- content/img/blog/flink-docker/flink-docker.gif | Bin 0 -> 4844328 bytes content/index.html | 6 +- .../08/20/flink-docker.html} | 112 +++ content/zh/index.html | 6 +- img/blog/flink-docker/flink-docker.gif | Bin 0 -> 4844328 bytes 20 files changed, 593 insertions(+), 347 deletions(-) create mode 100644 _posts/2020-08-20-flink-docker.md create mode 100644 content/img/blog/flink-docker/flink-docker.gif copy content/news/{2015/09/03/flink-forward.html => 2020/08/20/flink-docker.html} (63%) create mode 100644 img/blog/flink-docker/flink-docker.gif
[flink-web] 01/02: [blog] Add post about flink on docker
This is an automated email from the ASF dual-hosted git repository. rmetzger pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/flink-web.git commit e7376f0885cfd598679f8c1b26faa5d2fa435900 Author: Robert Metzger AuthorDate: Tue Aug 18 21:22:41 2020 +0200 [blog] Add post about flink on docker This closes #370 --- _posts/2020-08-20-flink-docker.md | 98 + img/blog/flink-docker/flink-docker.gif | Bin 0 -> 4844328 bytes 2 files changed, 98 insertions(+) diff --git a/_posts/2020-08-20-flink-docker.md b/_posts/2020-08-20-flink-docker.md new file mode 100644 index 000..911f29e --- /dev/null +++ b/_posts/2020-08-20-flink-docker.md @@ -0,0 +1,98 @@ +--- +layout: post +title: "The State of Flink on Docker" +date: 2020-08-20T00:00:00.000Z +authors: +- rmetzger: + name: "Robert Metzger" + twitter: rmetzger_ +categories: news + +excerpt: This blog post gives an update on the recent developments of Flink's support for Docker. +--- + +With over 50 million downloads from Docker Hub, the Flink docker images are a very popular deployment option. + +The Flink community recently put some effort into improving the Docker experience for our users with the goal to reduce confusion and improve usability. + + +Let's quickly break down the recent improvements: + +- Reduce confusion: Flink used to have 2 Dockerfiles and a 3rd file maintained outside of the official repository — all with different features and varying stability. Now, we have one central place for all images: [apache/flink-docker](https://github.com/apache/flink-docker). + + Here, we keep all the Dockerfiles for the different releases. Check out the [detailed readme](https://github.com/apache/flink-docker/blob/master/README.md) of that repository for further explanation on the different branches, as well as the [Flink Improvement Proposal (FLIP-111)](https://cwiki.apache.org/confluence/display/FLINK/FLIP-111%3A+Docker+image+unification) that contains the detailed planning. 
+ + The `apache/flink-docker` repository also seeds the [official Flink image on Docker Hub](https://hub.docker.com/_/flink). + +- Improve Usability: The Dockerfiles are used for various purposes: [Native Docker deployments](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html), [Flink on Kubernetes](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/native_kubernetes.html), the (unofficial) [Flink helm example](https://github.com/docker-flink/examples) and the project's [internal end to end tests](https://github.com/apache/flink/tree/master/flink-end-to-end-tests). [...] + + The new images support [passing configuration variables](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#configure-options) via a `FLINK_PROPERTIES` environment variable. Users can [enable default plugins](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#using-plugins) with the `ENABLE_BUILT_IN_PLUGINS` environment variable. The images also allow loading custom jar paths and configuration files. + +Looking into the future, there are already some interesting potential improvements lined up: + +- [Java 11 Docker images](https://issues.apache.org/jira/browse/FLINK-16260) (already completed) +- [Use vanilla docker-entrypoint with flink-kubernetes](https://issues.apache.org/jira/browse/FLINK-15793) (in progress) +- [History server support](https://issues.apache.org/jira/browse/FLINK-17167) +- [Support for OpenShift](https://issues.apache.org/jira/browse/FLINK-15587) + +## How do I get started? + +This is a short tutorial on [how to start a Flink Session Cluster](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#start-a-session-cluster) with Docker. + +A *Flink Session cluster* can be used to run multiple jobs. Each job needs to be submitted to the cluster after it has been deployed. 
To deploy a *Flink Session cluster* with Docker, you need to start a *JobManager* container. To enable communication between the containers, we first set a required Flink configuration property and create a network:
+
+```
+FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
+docker network create flink-network
+```
+
+Then we launch the JobManager:
+
+```
+docker run \
+  --rm \
+  --name=jobmanager \
+  --network flink-network \
+  -p 8081:8081 \
+  --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
+  flink:1.11.1 jobmanager
+```
+and one or more *TaskManager* containers:
+
+```
+docker run \
+  --rm \
+  --name=taskmanager \
+  --network flink-network \
+  --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
+  flink:1.11.1 taskmanager
+```
+
+You now have a fully functional Flink cluster running! You can access the web front end here: [localhost:8081](http://localhost:8081/).
+
+Let's now submit one of Flink's example j
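The tutorial above passes Flink configuration into the containers through the `FLINK_PROPERTIES` environment variable as newline-separated `key: value` pairs. A small Python sketch of parsing that format into a configuration map (this mimics the format only and is not the official docker-entrypoint implementation):

```python
def parse_flink_properties(props):
    """Parse a FLINK_PROPERTIES-style string ('key: value' per line,
    as in the blog post) into a dict. Illustrative sketch only."""
    conf = {}
    for line in props.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue  # skip blank or malformed lines
        key, value = line.split(":", 1)
        conf[key.strip()] = value.strip()
    return conf


conf = parse_flink_properties("jobmanager.rpc.address: jobmanager")
```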
[flink] branch master updated (2fc0899 -> f8ce30a)
This is an automated email from the ASF dual-hosted git repository. pnowojski pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/flink.git. from 2fc0899 [FLINK-18995][hive] Some Hive functions fail because they need to access SessionState add bdc0550 [FLINK-18962][checkpointing][task] Change log level from DEBUG to INFO for async part errors add b995359 [FLINK-18962][checkpointing] Log checkpoint decline reason add f8ce30a [hotfix][tests] e2e/common.sh: add -type parameter to find No new revisions were added by this update. Summary of changes: flink-end-to-end-tests/test-scripts/common.sh | 7 --- .../apache/flink/runtime/checkpoint/CheckpointCoordinator.java | 3 ++- .../flink/streaming/runtime/tasks/AsyncCheckpointRunnable.java | 10 -- 3 files changed, 10 insertions(+), 10 deletions(-)