[flink] branch release-1.13 updated: [FLINK-22796][doc] Update mem_setup_tm documentation

2021-05-31 Thread xtsong
This is an automated email from the ASF dual-hosted git repository.

xtsong pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new 5332c98  [FLINK-22796][doc] Update mem_setup_tm documentation
5332c98 is described below

commit 5332c9898a1c891c2f97867304f912ae8d339c60
Author: Tony Wei 
AuthorDate: Fri May 28 15:15:24 2021 +0800

[FLINK-22796][doc] Update mem_setup_tm documentation

This closes #16016
---
 docs/content.zh/docs/deployment/memory/mem_setup_tm.md | 9 +
 docs/content/docs/deployment/memory/mem_setup_tm.md| 9 +
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/docs/content.zh/docs/deployment/memory/mem_setup_tm.md 
b/docs/content.zh/docs/deployment/memory/mem_setup_tm.md
index db7152f..6fdae86 100644
--- a/docs/content.zh/docs/deployment/memory/mem_setup_tm.md
+++ b/docs/content.zh/docs/deployment/memory/mem_setup_tm.md
@@ -83,7 +83,7 @@ Flink 会根据默认值或其他配置参数自动调整剩余内存部分的
 *托管内存*是由 Flink 负责分配和管理的本地(堆外)内存。
 以下场景需要使用*托管内存*:
 * 流处理作业中用于 [RocksDB State Backend]({{< ref "docs/ops/state/state_backends" 
>}}#the-rocksdbstatebackend)。
-* [批处理作业]({{< ref "docs/dev/dataset/overview" >}})中用于排序、哈希表及缓存中间结果。
+* 流处理和批处理作业中用于排序、哈希表及缓存中间结果。
 * 流处理和批处理作业中用于[在 Python 进程中执行用户自定义函数]({{< ref 
"docs/dev/python/table/udfs/python_udfs" >}})。
 
 可以通过以下两种范式指定*托管内存*的大小:
@@ -102,14 +102,15 @@ Flink 会根据默认值或其他配置参数自动调整剩余内存部分的
 对于包含不同种类的托管内存消费者的作业,可以进一步控制托管内存如何在消费者之间分配。
 通过 [`taskmanager.memory.managed.consumer-weights`]({{< ref 
"docs/deployment/config" >}}#taskmanager-memory-managed-consumer-weights) 
可以为每一种类型的消费者指定一个权重,Flink 会按照权重的比例进行内存分配。
 目前支持的消费者类型包括:
-* `DATAPROC`:用于流处理中的 RocksDB State Backend 和批处理中的内置算法。
+* `OPERATOR`: 用于内置算法。
+* `STATE_BACKEND`: 用于流处理中的 RocksDB State Backend。
 * `PYTHON`:用户 Python 进程。
 
-例如,一个流处理作业同时使用到了 RocksDB State Backend 和 Python UDF,消费者权重设置为 
`DATAPROC:70,PYTHON:30`,那么 Flink 会将 `70%` 的托管内存用于 RocksDB State Backend,`30%` 
留给 Python 进程。
+例如,一个流处理作业同时使用到了 RocksDB State Backend 和 Python UDF,消费者权重设置为 
`STATE_BACKEND:70,PYTHON:30`,那么 Flink 会将 `70%` 的托管内存用于 RocksDB State 
Backend,`30%` 留给 Python 进程。
 
 提示
 只有作业中包含某种类型的消费者时,Flink 才会为该类型分配托管内存。
-例如,一个流处理作业使用 Heap State Backend 和 Python UDF,消费者权重设置为 
`DATAPROC:70,PYTHON:30`,那么 Flink 会将全部托管内存用于 Python 进程,因为 Heap State Backend 
不使用托管内存。
+例如,一个流处理作业使用 Heap State Backend 和 Python UDF,消费者权重设置为 
`STATE_BACKEND:70,PYTHON:30`,那么 Flink 会将全部托管内存用于 Python 进程,因为 Heap State 
Backend 不使用托管内存。
 
 提示
 对于未出现在消费者权重中的类型,Flink 将不会为其分配托管内存。
diff --git a/docs/content/docs/deployment/memory/mem_setup_tm.md 
b/docs/content/docs/deployment/memory/mem_setup_tm.md
index 79e0737..e4eedb7 100644
--- a/docs/content/docs/deployment/memory/mem_setup_tm.md
+++ b/docs/content/docs/deployment/memory/mem_setup_tm.md
@@ -81,7 +81,7 @@ It will be added to the JVM Heap size and will be dedicated 
to Flink’s operato
 
 *Managed memory* is managed by Flink and is allocated as native memory 
(off-heap). The following workloads use *managed memory*:
 * Streaming jobs can use it for [RocksDB state backend]({{< ref 
"docs/ops/state/state_backends" >}}#the-rocksdbstatebackend).
-* [Batch jobs]({{< ref "docs/dev/dataset/overview" >}}) can use it for 
sorting, hash tables, caching of intermediate results.
+* Both streaming and batch jobs can use it for sorting, hash tables, caching 
of intermediate results.
 * Both streaming and batch jobs can use it for executing [User Defined 
Functions in Python processes]({{< ref "docs/dev/python/table/udfs/python_udfs" 
>}}).
 
 The size of *managed memory* can be
@@ -98,13 +98,14 @@ See also [how to configure memory for state backends]({{< 
ref "docs/deployment/m
 If your job contains multiple types of managed memory consumers, you can also 
control how managed memory should be shared across these types.
 The configuration option [`taskmanager.memory.managed.consumer-weights`]({{< 
ref "docs/deployment/config" >}}#taskmanager-memory-managed-consumer-weights) 
allows you to set a weight for each type, to which Flink will reserve managed 
memory proportionally.
 Valid consumer types are:
-* `DATAPROC`: for RocksDB state backend in streaming and built-in algorithms 
in batch.
+* `OPERATOR`: for built-in algorithms.
+* `STATE_BACKEND`: for RocksDB state backend in streaming
 * `PYTHON`: for Python processes.
 
-E.g. if a streaming job uses both RocksDB state backend and Python UDFs, and 
the consumer weights are configured as `DATAPROC:70,PYTHON:30`, Flink will 
reserve `70%` of the total managed memory for RocksDB state backend and `30%` 
for Python processes.
+E.g. if a streaming job uses both RocksDB state backend and Python UDFs, and 
the consumer weights are configured as `STATE_BACKEND:70,PYTHON:30`, Flink will 
reserve `70%` of the total managed memory for RocksDB state backend and `30%` 
for Python processes.
 
For each type, Flink reserves managed memory only if the job contains managed memory consumers of that type.
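The proportional reservation described in this commit can be sketched as follows (a minimal illustration of the documented behavior, not Flink's actual accounting; the function name is invented):

```python
def reserve_managed_memory(total_bytes, weights, consumers_in_job):
    """Split managed memory across consumer types by weight.

    Only types that actually appear in the job receive a share; the
    weights of absent types are ignored, mirroring the docs above.
    """
    active = {t: w for t, w in weights.items() if t in consumers_in_job}
    weight_sum = sum(active.values())
    return {t: total_bytes * w // weight_sum for t, w in active.items()}

# A streaming job with RocksDB state backend and Python UDFs:
print(reserve_managed_memory(
    1000, {"STATE_BACKEND": 70, "PYTHON": 30}, {"STATE_BACKEND", "PYTHON"}))
# -> {'STATE_BACKEND': 700, 'PYTHON': 300}

# A job with Heap state backend (uses no managed memory) and Python UDFs:
print(reserve_managed_memory(
    1000, {"STATE_BACKEND": 70, "PYTHON": 30}, {"PYTHON"}))
# -> {'PYTHON': 1000}
```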

[flink] branch master updated (997fa27 -> 59fd4c6)

2021-05-31 Thread xtsong

xtsong pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 997fa27  [FLINK-22689][doc] Fix Row-Based Operations Example in Table 
API documentation (#16036)
 add 59fd4c6  [FLINK-22796][doc] Update mem_setup_tm documentation

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/deployment/memory/mem_setup_tm.md | 9 +
 docs/content/docs/deployment/memory/mem_setup_tm.md| 9 +
 2 files changed, 10 insertions(+), 8 deletions(-)


[flink] branch release-1.13 updated: [FLINK-22689][doc] Fix Row-Based Operations Example in Table API documentation (#16036)

2021-05-31 Thread jark

jark pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new dab3cf2  [FLINK-22689][doc] Fix Row-Based Operations Example in Table 
API documentation (#16036)
dab3cf2 is described below

commit dab3cf240ada16aa754f7855dff4a364abf91e5f
Author: paul8263 
AuthorDate: Tue Jun 1 12:09:05 2021 +0800

[FLINK-22689][doc] Fix Row-Based Operations Example in Table API 
documentation (#16036)
---
 docs/content/docs/dev/table/tableApi.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/content/docs/dev/table/tableApi.md 
b/docs/content/docs/dev/table/tableApi.md
index 4ca942b..0115aa0 100644
--- a/docs/content/docs/dev/table/tableApi.md
+++ b/docs/content/docs/dev/table/tableApi.md
@@ -2166,7 +2166,7 @@ public class MyMapFunction extends ScalarFunction {
 
 @Override
 public TypeInformation getResultType(Class[] signature) {
-return Types.ROW(Types.STRING(), Types.STRING());
+return Types.ROW(Types.STRING, Types.STRING);
 }
 }
 
@@ -2174,7 +2174,7 @@ ScalarFunction func = new MyMapFunction();
 tableEnv.registerFunction("func", func);
 
 Table table = input
-  .map(call("func", $("c")).as("a", "b"));
+  .map(call("func", $("c"))).as("a", "b");
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
@@ -2249,7 +2249,7 @@ public class MyFlatMapFunction extends TableFunction<Row> {
 
 @Override
 public TypeInformation getResultType() {
-return Types.ROW(Types.STRING(), Types.INT());
+return Types.ROW(Types.STRING, Types.INT);
 }
 }
 
@@ -2257,7 +2257,7 @@ TableFunction func = new MyFlatMapFunction();
 tableEnv.registerFunction("func", func);
 
 Table table = input
-  .flatMap(call("func", $("c")).as("a", "b"));
+  .flatMap(call("func", $("c"))).as("a", "b");
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}


[flink] branch master updated: [FLINK-22689][doc] Fix Row-Based Operations Example in Table API documentation (#16036)

2021-05-31 Thread jark

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 997fa27  [FLINK-22689][doc] Fix Row-Based Operations Example in Table 
API documentation (#16036)
997fa27 is described below

commit 997fa27e462c612f70401939c0fa9b217490dd73
Author: paul8263 
AuthorDate: Tue Jun 1 12:09:05 2021 +0800

[FLINK-22689][doc] Fix Row-Based Operations Example in Table API 
documentation (#16036)
---
 docs/content/docs/dev/table/tableApi.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/content/docs/dev/table/tableApi.md 
b/docs/content/docs/dev/table/tableApi.md
index bd98363..6bdf573 100644
--- a/docs/content/docs/dev/table/tableApi.md
+++ b/docs/content/docs/dev/table/tableApi.md
@@ -2165,7 +2165,7 @@ public class MyMapFunction extends ScalarFunction {
 
 @Override
 public TypeInformation getResultType(Class[] signature) {
-return Types.ROW(Types.STRING(), Types.STRING());
+return Types.ROW(Types.STRING, Types.STRING);
 }
 }
 
@@ -2173,7 +2173,7 @@ ScalarFunction func = new MyMapFunction();
 tableEnv.registerFunction("func", func);
 
 Table table = input
-  .map(call("func", $("c")).as("a", "b"));
+  .map(call("func", $("c"))).as("a", "b");
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
@@ -2248,7 +2248,7 @@ public class MyFlatMapFunction extends TableFunction<Row> {
 
 @Override
 public TypeInformation getResultType() {
-return Types.ROW(Types.STRING(), Types.INT());
+return Types.ROW(Types.STRING, Types.INT);
 }
 }
 
@@ -2256,7 +2256,7 @@ TableFunction func = new MyFlatMapFunction();
 tableEnv.registerFunction("func", func);
 
 Table table = input
-  .flatMap(call("func", $("c")).as("a", "b"));
+  .flatMap(call("func", $("c"))).as("a", "b");
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}


[flink] branch master updated: [FLINK-21741][sql-client] Support SHOW JARS statement in SQL Client (#16010)

2021-05-31 Thread jark

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 6668d3b  [FLINK-21741][sql-client] Support SHOW JARS statement in SQL 
Client (#16010)
6668d3b is described below

commit 6668d3be2edaea9a775f00336abd5e2fa8929a34
Author: SteNicholas 
AuthorDate: Tue Jun 1 12:07:47 2021 +0800

[FLINK-21741][sql-client] Support SHOW JARS statement in SQL Client (#16010)
---
 .../apache/flink/table/client/cli/CliClient.java   |  8 
 .../flink/table/client/gateway/Executor.java   |  3 ++
 .../client/gateway/context/SessionContext.java |  6 +++
 .../table/client/gateway/local/LocalExecutor.java  |  6 +++
 .../flink/table/client/cli/CliClientTest.java  |  5 ++
 .../flink/table/client/cli/CliResultViewTest.java  |  5 ++
 .../flink/table/client/cli/TestingExecutor.java|  5 ++
 .../client/gateway/context/SessionContextTest.java |  2 +
 .../src/test/resources/sql/function.q  |  4 ++
 .../flink-sql-client/src/test/resources/sql/set.q  |  8 
 .../src/main/codegen/data/Parser.tdd   |  4 ++
 .../src/main/codegen/includes/parserImpls.ftl  | 14 ++
 .../parser/hive/FlinkHiveSqlParserImplTest.java|  5 ++
 .../src/main/codegen/data/Parser.tdd   |  4 ++
 .../src/main/codegen/includes/parserImpls.ftl  | 14 ++
 .../apache/flink/sql/parser/dql/SqlShowJars.java   | 56 ++
 .../flink/sql/parser/FlinkSqlParserImplTest.java   |  5 ++
 .../operations/command/ShowJarsOperation.java  | 30 
 .../operations/SqlToOperationConverter.java|  8 
 .../table/planner/calcite/FlinkPlannerImpl.scala   |  1 +
 .../operations/SqlToOperationConverterTest.java| 14 +-
 21 files changed, 205 insertions(+), 2 deletions(-)

diff --git 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
index d1f5f3e..67f7f7f 100644
--- 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
+++ 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
@@ -43,6 +43,7 @@ import 
org.apache.flink.table.operations.command.HelpOperation;
 import org.apache.flink.table.operations.command.QuitOperation;
 import org.apache.flink.table.operations.command.ResetOperation;
 import org.apache.flink.table.operations.command.SetOperation;
+import org.apache.flink.table.operations.command.ShowJarsOperation;
 import org.apache.flink.table.operations.ddl.AlterOperation;
 import org.apache.flink.table.operations.ddl.CreateOperation;
 import org.apache.flink.table.operations.ddl.DropOperation;
@@ -425,6 +426,9 @@ public class CliClient implements AutoCloseable {
 } else if (operation instanceof AddJarOperation) {
 // ADD JAR
 callAddJar((AddJarOperation) operation);
+} else if (operation instanceof ShowJarsOperation) {
+// SHOW JARS
+callShowJars();
 } else if (operation instanceof ShowCreateTableOperation) {
 // SHOW CREATE TABLE
 callShowCreateTable((ShowCreateTableOperation) operation);
@@ -440,6 +444,10 @@ public class CliClient implements AutoCloseable {
 printInfo(CliStrings.MESSAGE_ADD_JAR_STATEMENT);
 }
 
+private void callShowJars() {
+executor.listJars(sessionId).forEach(jar -> 
terminal.writer().println(jar));
+}
+
 private void callQuit() {
 printInfo(CliStrings.MESSAGE_QUIT);
 isRunning = false;
diff --git 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/Executor.java
 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/Executor.java
index d29d1ab..d76fe5f 100644
--- 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/Executor.java
+++ 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/Executor.java
@@ -142,4 +142,7 @@ public interface Executor {
 
 /** Add the JAR resource to into the classloader with specified session. */
 void addJar(String sessionId, String jarPath);
+
+/** List the JAR resources of the classloader with specified session. */
+List<String> listJars(String sessionId);
 }
diff --git 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/context/SessionContext.java
 
b/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/context/SessionContext.java
index acaf90a..8c859b2 100644
--- 
a/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/context/SessionContext.java
+++ 

[flink] branch master updated (884ff61 -> b582991)

2021-05-31 Thread jqin

jqin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 884ff61  [FLINK-22782][docs] Remove legacy planner from Chinese docs
 add b582991  [FLINK-22722][docs/kafka] Add documentation for Kafka new 
source (#15974)

No new revisions were added by this update.

Summary of changes:
 docs/content/docs/connectors/datastream/kafka.md | 215 ++-
 1 file changed, 212 insertions(+), 3 deletions(-)


[flink] branch master updated (e920950 -> 884ff61)

2021-05-31 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from e920950  [hotfix][docs] Improve soft deprecation message for DataSet 
users
 add 884ff61  [FLINK-22782][docs] Remove legacy planner from Chinese docs

No new revisions were added by this update.

Summary of changes:
 .../docs/connectors/table/hive/hive_dialect.md |   4 +-
 .../docs/connectors/table/hive/overview.md |   6 +-
 docs/content.zh/docs/connectors/table/jdbc.md  |   2 +-
 .../docs/dev/python/dependency_management.md   |   2 +-
 docs/content.zh/docs/dev/python/python_config.md   |   2 +-
 .../docs/dev/python/table/intro_to_table_api.md|  20 +-
 .../table/operations/row_based_operations.md   |  10 +-
 .../python/table/python_table_api_connectors.md|   2 +-
 .../docs/dev/python/table/udfs/python_udfs.md  |  12 +-
 .../python/table/udfs/vectorized_python_udfs.md|   4 +-
 .../docs/dev/python/table_api_tutorial.md  |   4 +-
 docs/content.zh/docs/dev/table/catalogs.md |   2 +-
 docs/content.zh/docs/dev/table/common.md   | 282 +
 docs/content.zh/docs/dev/table/data_stream_api.md  | 207 ++-
 docs/content.zh/docs/dev/table/functions/udfs.md   |  12 +-
 docs/content.zh/docs/dev/table/modules.md  |  12 +-
 docs/content.zh/docs/dev/table/overview.md |  78 ++
 docs/content.zh/docs/dev/table/sql/explain.md  |  38 ---
 docs/content.zh/docs/dev/table/tableApi.md |  37 ++-
 docs/content/docs/dev/python/python_config.md  |   2 +-
 docs/content/docs/dev/table/tableApi.md|   2 +-
 21 files changed, 238 insertions(+), 502 deletions(-)


[flink-jira-bot] 01/01: [hotfix] add missing "days" after number of days

2021-05-31 Thread nkruber

nkruber pushed a commit to branch NicoK-patch-1
in repository https://gitbox.apache.org/repos/asf/flink-jira-bot.git

commit 1f593d9718b4d667de356ab3414767e05dd1ec3b
Author: Nico Kruber 
AuthorDate: Mon May 31 13:52:51 2021 +0200

[hotfix] add missing "days" after number of days
---
 config.yaml | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/config.yaml b/config.yaml
index f8aa8cf..5c949a7 100644
--- a/config.yaml
+++ b/config.yaml
@@ -23,12 +23,12 @@ stale_assigned:
 warning_label: "stale-assigned"
 done_label: "auto-unassigned"
 warning_comment: |
-I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] 
and I help the community manage its development. I see this issue is assigned 
but has not received an update in {stale_days}, so it has been labeled 
"{warning_label}".
+I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] 
and I help the community manage its development. I see this issue is assigned 
but has not received an update in {stale_days} days, so it has been labeled 
"{warning_label}".
 If you are still working on the issue, please remove the label and add 
a comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
 If you are no longer working on the issue, please unassign yourself so 
someone else may work on it. If the "warning_label" label is not removed in 
{warning_days} days, the issue will be automatically unassigned.
 
 done_comment: |
-This issue was marked "{warning_label}" {warning_days} ago and has not 
received an update. I have automatically removed the current assignee from the 
issue so others in the community may pick it up. If you are still working on 
this ticket, please ask a committer to reassign you and provide an update about 
your current status.
+This issue was marked "{warning_label}" {warning_days} days ago and 
has not received an update. I have automatically removed the current assignee 
from the issue so others in the community may pick it up. If you are still 
working on this ticket, please ask a committer to reassign you and provide an 
update about your current status.
 
 stale_minor:
 ticket_limit: 10
@@ -40,7 +40,7 @@ stale_minor:
 I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] 
and I help the community manage its development. I noticed that neither this 
issue nor its subtasks had updates for {stale_days} days, so I labeled it 
"{warning_label}".  If you are still affected by this bug or are still 
interested in this issue, please update and remove the label.
 
 done_comment: |
-This issue was labeled "{warning_label}" {warning_days} ago and has 
not received any updates so I have gone ahead and closed it.  If you are still 
affected by this or would like to raise the priority of this ticket please 
re-open, removing the label "{done_label}" and raise the ticket priority 
accordingly.
+This issue was labeled "{warning_label}" {warning_days} days ago and 
has not received any updates so I have gone ahead and closed it.  If you are 
still affected by this or would like to raise the priority of this ticket 
please re-open, removing the label "{done_label}" and raise the ticket priority 
accordingly.
 
 stale_blocker:
 ticket_limit: 5
@@ -52,7 +52,7 @@ stale_blocker:
 I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] 
and I help the community manage its development. I see this issue has been 
marked as a Blocker but is unassigned and neither itself nor its Sub-Tasks have 
been updated for {stale_days} days. I have gone ahead and marked it 
"{warning_label}". If this ticket is a Blocker, please either assign yourself 
or give an update. Afterwards, please remove the label or in {warning_days} 
days the issue will be deprioritized.
 
 done_comment: |
-This issue was labeled "{warning_label}" {warning_days} ago and has 
not received any updates so it is being deprioritized. If this ticket is 
actually a Blocker, please raise the priority and ask a committer to assign you 
the issue or revive the public discussion.
+This issue was labeled "{warning_label}" {warning_days} days ago and 
has not received any updates so it is being deprioritized. If this ticket is 
actually a Blocker, please raise the priority and ask a committer to assign you 
the issue or revive the public discussion.
 
 stale_critical:
 ticket_limit: 10
@@ -64,7 +64,7 @@ stale_critical:
 I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] 
and I help the community manage its development. I see this issue has been 
marked as Critical but is unassigned and neither itself nor its Sub-Tasks have 
been updated for 
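The `{stale_days}` / `{warning_label}` placeholders in the config above are plain format fields; a minimal sketch of how such a template might be rendered (the values below are hypothetical, not taken from the bot's actual configuration):

```python
# Template text excerpted from the config above; placeholders are
# standard str.format fields filled in at comment-posting time.
warning_comment = (
    "I see this issue is assigned but has not received an update in "
    '{stale_days} days, so it has been labeled "{warning_label}".'
)

# Hypothetical values for illustration only.
message = warning_comment.format(stale_days=14, warning_label="stale-assigned")
print(message)
```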

[flink-jira-bot] branch NicoK-patch-1 created (now 1f593d9)

2021-05-31 Thread nkruber

nkruber pushed a change to branch NicoK-patch-1
in repository https://gitbox.apache.org/repos/asf/flink-jira-bot.git.


  at 1f593d9  [hotfix] add missing "days" after number of days

This branch includes the following new commits:

 new 1f593d9  [hotfix] add missing "days" after number of days

The 1 revision listed above as "new" is entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[flink-docker] branch dev-1.13 updated: Release 1.13.1

2021-05-31 Thread dwysakowicz

dwysakowicz pushed a commit to branch dev-1.13
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-1.13 by this push:
 new a578f75  Release 1.13.1
a578f75 is described below

commit a578f7563edcf508922fb18ead081a0333aac349
Author: Dawid Wysakowicz 
AuthorDate: Fri May 28 17:39:35 2021 +0200

Release 1.13.1
---
 add-version.sh  | 2 ++
 testing/run_travis_tests.sh | 7 ++-
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/add-version.sh b/add-version.sh
index 01606e5..99a7987 100755
--- a/add-version.sh
+++ b/add-version.sh
@@ -92,6 +92,8 @@ elif [ "$flink_version" = "1.12.0" ]; then
 gpg_key="D9839159"
 elif [ "$flink_version" = "1.13.0" ]; then
 gpg_key="31D2DD10BFC15A2D"
+elif [ "$flink_version" = "1.13.1" ]; then
+gpg_key="31D2DD10BFC15A2D"
 else
 error "Missing GPG key ID for this release"
 fi
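The `elif` chain in `add-version.sh` is effectively a version-to-GPG-key lookup with a hard failure for unknown releases; sketched in Python (an excerpt only, with the keys copied from the diff above):

```python
# Excerpt of the version-to-key mapping maintained in add-version.sh.
GPG_KEYS = {
    "1.12.0": "D9839159",
    "1.13.0": "31D2DD10BFC15A2D",
    "1.13.1": "31D2DD10BFC15A2D",  # entry added by this commit
}

def gpg_key_for(flink_version):
    """Return the release-signing key ID, failing loudly for unknown versions."""
    key = GPG_KEYS.get(flink_version)
    if key is None:
        # Mirrors the script's `error "Missing GPG key ID for this release"`.
        raise ValueError("Missing GPG key ID for this release")
    return key
```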
diff --git a/testing/run_travis_tests.sh b/testing/run_travis_tests.sh
index c1565f4..b9f0471 100755
--- a/testing/run_travis_tests.sh
+++ b/testing/run_travis_tests.sh
@@ -11,12 +11,9 @@ fi
 
 BRANCH="$TRAVIS_BRANCH"
 
-test_docker_entrypoint
-
-./add-custom.sh -u "https://s3.amazonaws.com/flink-nightly/flink-1.13-SNAPSHOT-bin-scala_2.11.tgz" -n test-java8
+./add-version.sh -r 1.13 -f 1.13.1
 
-# test Flink with Java11 image as well
-./add-custom.sh -u "https://s3.amazonaws.com/flink-nightly/flink-1.13-SNAPSHOT-bin-scala_2.11.tgz" -j 11 -n test-java11
+test_docker_entrypoint
 
 smoke_test_all_images
 smoke_test_one_image_non_root


[flink] branch release-1.13 updated: [hotfix][docs] Improve soft deprecation message for DataSet users

2021-05-31 Thread twalthr

twalthr pushed a commit to branch release-1.13
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.13 by this push:
 new 617fd07  [hotfix][docs] Improve soft deprecation message for DataSet 
users
617fd07 is described below

commit 617fd071cf1bc7b3e38ae1b998c6578a0bfb66fe
Author: Timo Walther 
AuthorDate: Mon May 31 12:17:13 2021 +0200

[hotfix][docs] Improve soft deprecation message for DataSet users
---
 docs/content.zh/docs/dev/dataset/overview.md | 14 --
 docs/content/docs/dev/dataset/overview.md| 14 --
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/docs/content.zh/docs/dev/dataset/overview.md 
b/docs/content.zh/docs/dev/dataset/overview.md
index 8faee27..6b51e8e 100644
--- a/docs/content.zh/docs/dev/dataset/overview.md
+++ b/docs/content.zh/docs/dev/dataset/overview.md
@@ -33,8 +33,18 @@ Please refer to the DataStream API overview for an 
introduction to the basic con
 
 In order to create your own Flink DataSet program, we encourage you to start 
with the anatomy of a Flink Program and gradually add your own transformations. 
The remaining sections act as references for additional operations and advanced 
features.
 
-{{< hint info >}}
-Starting with Flink 1.12 the DataSet has been soft deprecated. We recommend 
that you use the DataStream API with `BATCH` execution mode. The linked section 
also outlines cases where it makes sense to use the DataSet API but those cases 
will become rarer as development progresses and the DataSet API will eventually 
be removed. Please also see FLIP-131 for background information on this 
decision. 
+{{< hint warning >}}
+Starting with Flink 1.12 the DataSet API has been soft deprecated.
+
+We recommend that you use the [Table API and SQL]({{< ref 
"docs/dev/table/overview" >}}) to run efficient
+batch pipelines in a fully unified API. Table API is well integrated with 
common batch connectors and
+catalogs.
+
+Alternatively, you can also use the DataStream API with `BATCH` [execution 
mode]({{< ref "docs/dev/datastream/execution_mode" >}}).
+The linked section also outlines cases where it makes sense to use the DataSet 
API but those cases will
+become rarer as development progresses and the DataSet API will eventually be 
removed. Please also
+see 
[FLIP-131](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866741)
 for
+background information on this decision.
 {{< /hint >}}
 
 ## Example Program
diff --git a/docs/content/docs/dev/dataset/overview.md 
b/docs/content/docs/dev/dataset/overview.md
index 97ecb48..bbd33b6 100644
--- a/docs/content/docs/dev/dataset/overview.md
+++ b/docs/content/docs/dev/dataset/overview.md
@@ -33,8 +33,18 @@ Please refer to the DataStream API overview for an 
introduction to the basic con
 
 In order to create your own Flink DataSet program, we encourage you to start 
with the anatomy of a Flink Program and gradually add your own transformations. 
The remaining sections act as references for additional operations and advanced 
features.
 
-{{< hint info >}}
-Starting with Flink 1.12 the DataSet has been soft deprecated. We recommend 
that you use the DataStream API with `BATCH` execution mode. The linked section 
also outlines cases where it makes sense to use the DataSet API but those cases 
will become rarer as development progresses and the DataSet API will eventually 
be removed. Please also see FLIP-131 for background information on this 
decision. 
+{{< hint warning >}}
+Starting with Flink 1.12 the DataSet API has been soft deprecated.
+
+We recommend that you use the [Table API and SQL]({{< ref 
"docs/dev/table/overview" >}}) to run efficient
+batch pipelines in a fully unified API. Table API is well integrated with 
common batch connectors and
+catalogs.
+
+Alternatively, you can also use the DataStream API with `BATCH` [execution 
mode]({{< ref "docs/dev/datastream/execution_mode" >}}).
+The linked section also outlines cases where it makes sense to use the DataSet 
API but those cases will
+become rarer as development progresses and the DataSet API will eventually be 
removed. Please also
+see 
[FLIP-131](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866741)
 for
+background information on this decision.
 {{< /hint >}}
 
 ## Example Program


[flink] branch master updated (c0d216b -> e920950)

2021-05-31 Thread twalthr

twalthr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from c0d216b  [FLINK-22759][docs] Correct the applicability of some RocksDB 
related options as per operator
 add e920950  [hotfix][docs] Improve soft deprecation message for DataSet 
users

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/dev/dataset/overview.md | 14 --
 docs/content/docs/dev/dataset/overview.md| 14 --
 2 files changed, 24 insertions(+), 4 deletions(-)


[flink-docker] branch master updated: Update Dockerfiles for 1.13.1 release

2021-05-31 Thread dwysakowicz

dwysakowicz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/master by this push:
 new 4ddcc63  Update Dockerfiles for 1.13.1 release
4ddcc63 is described below

commit 4ddcc63ad5c333857eb6230da018505f51776d5e
Author: Dawid Wysakowicz 
AuthorDate: Fri May 28 17:48:19 2021 +0200

Update Dockerfiles for 1.13.1 release
---
 1.13/scala_2.11-java11-debian/Dockerfile   | 4 ++--
 1.13/scala_2.11-java11-debian/release.metadata | 2 +-
 1.13/scala_2.11-java8-debian/Dockerfile| 4 ++--
 1.13/scala_2.11-java8-debian/release.metadata  | 2 +-
 1.13/scala_2.12-java11-debian/Dockerfile   | 4 ++--
 1.13/scala_2.12-java11-debian/release.metadata | 2 +-
 1.13/scala_2.12-java8-debian/Dockerfile| 4 ++--
 1.13/scala_2.12-java8-debian/release.metadata  | 2 +-
 8 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/1.13/scala_2.11-java11-debian/Dockerfile 
b/1.13/scala_2.11-java11-debian/Dockerfile
index 81fff51..a460165 100644
--- a/1.13/scala_2.11-java11-debian/Dockerfile
+++ b/1.13/scala_2.11-java11-debian/Dockerfile
@@ -44,8 +44,8 @@ RUN set -ex; \
   gosu nobody true
 
 # Configure Flink version
-ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.0/flink-1.13.0-bin-scala_2.11.tgz \
-    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.0/flink-1.13.0-bin-scala_2.11.tgz.asc \
+ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.1/flink-1.13.1-bin-scala_2.11.tgz \
+    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.1/flink-1.13.1-bin-scala_2.11.tgz.asc \
 GPG_KEY=31D2DD10BFC15A2D \
 CHECK_GPG=true
 
diff --git a/1.13/scala_2.11-java11-debian/release.metadata 
b/1.13/scala_2.11-java11-debian/release.metadata
index abdc75e..11b89e9 100644
--- a/1.13/scala_2.11-java11-debian/release.metadata
+++ b/1.13/scala_2.11-java11-debian/release.metadata
@@ -1,2 +1,2 @@
-Tags: 1.13.0-scala_2.11-java11, 1.13-scala_2.11-java11, scala_2.11-java11
+Tags: 1.13.1-scala_2.11-java11, 1.13-scala_2.11-java11, scala_2.11-java11
 Architectures: amd64
diff --git a/1.13/scala_2.11-java8-debian/Dockerfile 
b/1.13/scala_2.11-java8-debian/Dockerfile
index a0e6e39..147c872 100644
--- a/1.13/scala_2.11-java8-debian/Dockerfile
+++ b/1.13/scala_2.11-java8-debian/Dockerfile
@@ -44,8 +44,8 @@ RUN set -ex; \
   gosu nobody true
 
 # Configure Flink version
-ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.0/flink-1.13.0-bin-scala_2.11.tgz \
-    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.0/flink-1.13.0-bin-scala_2.11.tgz.asc \
+ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.1/flink-1.13.1-bin-scala_2.11.tgz \
+    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.1/flink-1.13.1-bin-scala_2.11.tgz.asc \
 GPG_KEY=31D2DD10BFC15A2D \
 CHECK_GPG=true
 
diff --git a/1.13/scala_2.11-java8-debian/release.metadata 
b/1.13/scala_2.11-java8-debian/release.metadata
index 79660c3..0695291 100644
--- a/1.13/scala_2.11-java8-debian/release.metadata
+++ b/1.13/scala_2.11-java8-debian/release.metadata
@@ -1,2 +1,2 @@
-Tags: 1.13.0-scala_2.11-java8, 1.13-scala_2.11-java8, scala_2.11-java8, 
1.13.0-scala_2.11, 1.13-scala_2.11, scala_2.11
+Tags: 1.13.1-scala_2.11-java8, 1.13-scala_2.11-java8, scala_2.11-java8, 
1.13.1-scala_2.11, 1.13-scala_2.11, scala_2.11
 Architectures: amd64
diff --git a/1.13/scala_2.12-java11-debian/Dockerfile 
b/1.13/scala_2.12-java11-debian/Dockerfile
index 02697ca..e71fd61 100644
--- a/1.13/scala_2.12-java11-debian/Dockerfile
+++ b/1.13/scala_2.12-java11-debian/Dockerfile
@@ -44,8 +44,8 @@ RUN set -ex; \
   gosu nobody true
 
 # Configure Flink version
-ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.0/flink-1.13.0-bin-scala_2.12.tgz \
-    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.0/flink-1.13.0-bin-scala_2.12.tgz.asc \
+ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.1/flink-1.13.1-bin-scala_2.12.tgz \
+    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.1/flink-1.13.1-bin-scala_2.12.tgz.asc \
 GPG_KEY=31D2DD10BFC15A2D \
 CHECK_GPG=true
 
diff --git a/1.13/scala_2.12-java11-debian/release.metadata 
b/1.13/scala_2.12-java11-debian/release.metadata
index 83e0f05..53c21bd 100644
--- a/1.13/scala_2.12-java11-debian/release.metadata
+++ b/1.13/scala_2.12-java11-debian/release.metadata
@@ -1,2 +1,2 @@
-Tags: 1.13.0-scala_2.12-java11, 1.13-scala_2.12-java11, scala_2.12-java11, 
1.13.0-java11, 1.13-java11, java11
+Tags: 1.13.1-scala_2.12-java11, 1.13-scala_2.12-java11, scala_2.12-java11, 
1.13.1-java11, 1.13-java11, java11
 Architectures: amd64
diff --git a/1.13/scala_2.12-java8-debian/Dockerfile