[flink] branch release-1.15 updated: [FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration

2022-04-20 Thread tangyun
This is an automated email from the ASF dual-hosted git repository.

tangyun pushed a commit to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.15 by this push:
 new eb0f5c37ce3 [FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration
eb0f5c37ce3 is described below

commit eb0f5c37ce34f95415fe012355a4f33d032c23cd
Author: EchoLee5 <39044001+echol...@users.noreply.github.com>
AuthorDate: Wed Apr 20 16:18:26 2022 +0800

[FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration
---
 docs/content.zh/docs/ops/state/state_backends.md | 4 ++--
 docs/content/docs/ops/state/state_backends.md    | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/content.zh/docs/ops/state/state_backends.md b/docs/content.zh/docs/ops/state/state_backends.md
index 6c8c4593686..a2c81ad2939 100644
--- a/docs/content.zh/docs/ops/state/state_backends.md
+++ b/docs/content.zh/docs/ops/state/state_backends.md
@@ -486,14 +486,14 @@ state.checkpoint-storage: jobmanager
 ```java
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStateBackend(new HashMapStateBackend());
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend());
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage());
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
 ```scala
 val env = StreamExecutionEnvironment.getExecutionEnvironment
 env.setStateBackend(new HashMapStateBackend)
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend)
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage)
 ```
 {{< /tab >}}
 {{< /tabs>}}
diff --git a/docs/content/docs/ops/state/state_backends.md b/docs/content/docs/ops/state/state_backends.md
index 3465de8a35f..563f2ff4641 100644
--- a/docs/content/docs/ops/state/state_backends.md
+++ b/docs/content/docs/ops/state/state_backends.md
@@ -477,14 +477,14 @@ state.checkpoint-storage: jobmanager
 ```java
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStateBackend(new HashMapStateBackend());
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend());
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage());
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
 ```scala
 val env = StreamExecutionEnvironment.getExecutionEnvironment
 env.setStateBackend(new HashMapStateBackend)
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend)
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage)
 ```
 {{< /tab >}}
 {{< /tabs>}}
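The same split between the state backend (working state) and checkpoint storage can be expressed declaratively rather than in code. A minimal `flink-conf.yaml` sketch; `state.checkpoint-storage: jobmanager` is the key shown in the hunk context above, and `state.backend: hashmap` is its state-backend counterpart:

```yaml
# Keep working state on the TaskManager heap (HashMapStateBackend)
state.backend: hashmap
# Store checkpoints on the JobManager heap (JobManagerCheckpointStorage)
state.checkpoint-storage: jobmanager
```

This mirrors the programmatic configuration in the diff: the state backend and the checkpoint storage are configured independently, which is exactly why passing a state backend class where checkpoint storage is expected was wrong.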



[flink] branch release-1.14 updated: [FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration

2022-04-20 Thread tangyun
This is an automated email from the ASF dual-hosted git repository.

tangyun pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.14 by this push:
 new ee75ae34a5c [FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration
ee75ae34a5c is described below

commit ee75ae34a5c85e914314405f3aeb0a955c56c6e6
Author: EchoLee5 <39044001+echol...@users.noreply.github.com>
AuthorDate: Wed Apr 20 16:18:26 2022 +0800

[FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration
---
 docs/content.zh/docs/ops/state/state_backends.md | 4 ++--
 docs/content/docs/ops/state/state_backends.md    | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/content.zh/docs/ops/state/state_backends.md b/docs/content.zh/docs/ops/state/state_backends.md
index d90a1de9246..5c947a7804c 100644
--- a/docs/content.zh/docs/ops/state/state_backends.md
+++ b/docs/content.zh/docs/ops/state/state_backends.md
@@ -336,14 +336,14 @@ state.checkpoint-storage: jobmanager
 ```java
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStateBackend(new HashMapStateBackend());
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend());
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage());
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
 ```scala
 val env = StreamExecutionEnvironment.getExecutionEnvironment
 env.setStateBackend(new HashMapStateBackend)
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend)
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage)
 ```
 {{< /tab >}}
 {{< /tabs>}}
diff --git a/docs/content/docs/ops/state/state_backends.md b/docs/content/docs/ops/state/state_backends.md
index 085a12dcfc8..74ec6df83a8 100644
--- a/docs/content/docs/ops/state/state_backends.md
+++ b/docs/content/docs/ops/state/state_backends.md
@@ -356,14 +356,14 @@ state.checkpoint-storage: jobmanager
 ```java
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStateBackend(new HashMapStateBackend());
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend());
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage());
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
 ```scala
 val env = StreamExecutionEnvironment.getExecutionEnvironment
 env.setStateBackend(new HashMapStateBackend)
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend)
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage)
 ```
 {{< /tab >}}
 {{< /tabs>}}



[flink] branch master updated: [FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration

2022-04-20 Thread tangyun
This is an automated email from the ASF dual-hosted git repository.

tangyun pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new f5b2e6177bf [FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration
f5b2e6177bf is described below

commit f5b2e6177bf61097a497f0f6040131e17ab32a62
Author: EchoLee5 <39044001+echol...@users.noreply.github.com>
AuthorDate: Wed Apr 20 16:18:26 2022 +0800

[FLINK-27315][docs] Fix the demo of MemoryStateBackendMigration
---
 docs/content.zh/docs/ops/state/state_backends.md | 4 ++--
 docs/content/docs/ops/state/state_backends.md    | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/content.zh/docs/ops/state/state_backends.md b/docs/content.zh/docs/ops/state/state_backends.md
index 6c8c4593686..a2c81ad2939 100644
--- a/docs/content.zh/docs/ops/state/state_backends.md
+++ b/docs/content.zh/docs/ops/state/state_backends.md
@@ -486,14 +486,14 @@ state.checkpoint-storage: jobmanager
 ```java
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStateBackend(new HashMapStateBackend());
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend());
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage());
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
 ```scala
 val env = StreamExecutionEnvironment.getExecutionEnvironment
 env.setStateBackend(new HashMapStateBackend)
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend)
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage)
 ```
 {{< /tab >}}
 {{< /tabs>}}
diff --git a/docs/content/docs/ops/state/state_backends.md b/docs/content/docs/ops/state/state_backends.md
index 3465de8a35f..563f2ff4641 100644
--- a/docs/content/docs/ops/state/state_backends.md
+++ b/docs/content/docs/ops/state/state_backends.md
@@ -477,14 +477,14 @@ state.checkpoint-storage: jobmanager
 ```java
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStateBackend(new HashMapStateBackend());
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend());
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage());
 ```
 {{< /tab >}}
 {{< tab "Scala" >}}
 ```scala
 val env = StreamExecutionEnvironment.getExecutionEnvironment
 env.setStateBackend(new HashMapStateBackend)
-env.getCheckpointConfig().setCheckpointStorage(new JobManagerStateBackend)
+env.getCheckpointConfig().setCheckpointStorage(new JobManagerCheckpointStorage)
 ```
 {{< /tab >}}
 {{< /tabs>}}



svn commit: r53987 - in /dev/flink/flink-1.15.0-rc4: ./ python/

2022-04-20 Thread gaoyunhaii
Author: gaoyunhaii
Date: Thu Apr 21 03:48:02 2022
New Revision: 53987

Log:
Add flink-1.15.0-rc4

Added:
dev/flink/flink-1.15.0-rc4/
dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz   (with props)
dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz.asc
dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz.sha512
dev/flink/flink-1.15.0-rc4/flink-1.15.0-src.tgz   (with props)
dev/flink/flink-1.15.0-rc4/flink-1.15.0-src.tgz.asc
dev/flink/flink-1.15.0-rc4/flink-1.15.0-src.tgz.sha512
dev/flink/flink-1.15.0-rc4/python/
dev/flink/flink-1.15.0-rc4/python/apache-flink-1.15.0.tar.gz   (with props)
dev/flink/flink-1.15.0-rc4/python/apache-flink-1.15.0.tar.gz.asc
dev/flink/flink-1.15.0-rc4/python/apache-flink-1.15.0.tar.gz.sha512
dev/flink/flink-1.15.0-rc4/python/apache-flink-libraries-1.15.0.tar.gz   (with props)
dev/flink/flink-1.15.0-rc4/python/apache-flink-libraries-1.15.0.tar.gz.asc
dev/flink/flink-1.15.0-rc4/python/apache-flink-libraries-1.15.0.tar.gz.sha512
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp36-cp36m-macosx_10_9_x86_64.whl   (with props)
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp36-cp36m-macosx_10_9_x86_64.whl.asc
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp36-cp36m-macosx_10_9_x86_64.whl.sha512
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp36-cp36m-manylinux1_x86_64.whl   (with props)
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp36-cp36m-manylinux1_x86_64.whl.asc
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp36-cp36m-manylinux1_x86_64.whl.sha512
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp37-cp37m-macosx_10_9_x86_64.whl   (with props)
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp37-cp37m-macosx_10_9_x86_64.whl.asc
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp37-cp37m-macosx_10_9_x86_64.whl.sha512
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp37-cp37m-manylinux1_x86_64.whl   (with props)
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp37-cp37m-manylinux1_x86_64.whl.asc
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp37-cp37m-manylinux1_x86_64.whl.sha512
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp38-cp38-macosx_10_9_x86_64.whl   (with props)
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp38-cp38-macosx_10_9_x86_64.whl.asc
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp38-cp38-macosx_10_9_x86_64.whl.sha512
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp38-cp38-manylinux1_x86_64.whl   (with props)
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp38-cp38-manylinux1_x86_64.whl.asc
dev/flink/flink-1.15.0-rc4/python/apache_flink-1.15.0-cp38-cp38-manylinux1_x86_64.whl.sha512

Added: dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz
==
Binary file - no diff available.

Propchange: dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz
--
svn:mime-type = application/octet-stream

Added: dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz.asc
==
--- dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz.asc (added)
+++ dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz.asc Thu Apr 21 03:48:02 2022
@@ -0,0 +1,16 @@
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCAAdFiEEy+gr79gnsIr6hDl37b+SKnvISJcFAmJg0U0ACgkQ7b+SKnvI
+SJc41w/7Bu1RyH1nKAFX6hA9JbdNMPX/EdQxGrtBkTjlD4B4h6T6thE7jYymTpQv
+QvPDMCtjcz1FaJSfTqeaIqI0/hFkDP8rrDcHEpMmdYsZSQ6/A3H/tINJG9OoI4OC
+h+brvc6euXJHygx87PPQBnB/Ydv31Dbqu1W9xZOIHhhLDx/TWyfU0e7mIEkX2vB9
+DVtb5F3A+Io0OohCpDwYvJ8SLu0JCaJDtmbMGENKdGuLlgxBysKyRD3cPjExvivc
+4lvJrCxUqLPRCsZwd33H53Q9ImjoW3yejBTH/6sIKDup0F0ervD1OML5LdOm9caT
+jgCziCb1t5EIfzcWx0O3/GnG3wRB0sMVnIF2L0UqMA+v5kHedYYkkwr0oGfCiUHx
+z1B0qXyKureBlcg9FA5W5dNT/qDMHpJrlqoCyrPH7VmRxU0dJKukIsFoIuXpI/g7
+Vdl980rMEvgNVZ1WaBCk5KVmVeuzTbHSSaOAyCR0kgDhUdVPp0C+6O+UAW7NJCbe
+p1a5tu9vX0r8k1clgo+GoHOSxd4PXIrAPFjBU6vLznEyUnXuI0ctIS/vnlB749ag
+Par87pIb5IYcW7zKr56BEnavPmrMK6DKAdC3W4MlMZuoFJl6UJkEpJZKlqteoH4p
+6U+HOLWMT1KCHutpNf9O/6klqBKMKRNEb7RSPAJoUAzjM+Lyb/Q=
+=ysnQ
+-----END PGP SIGNATURE-----

Added: dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz.sha512
==
--- dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz.sha512 (added)
+++ dev/flink/flink-1.15.0-rc4/flink-1.15.0-bin-scala_2.12.tgz.sha512 Thu Apr 21 03:48:02 2022
@@ -0,0 +1 @@
+2d0731c77c891cbf9133e17f73b850337822ce18af578c15e3868a564d99f0cbe1762c6f3f7ef07253c2936bdadbc28f8a6a6f25391abb14d7a4302de3c7615f  flink-1.15.0-bin-scala_2.12.tgz

Added: dev/flink/flink-1.15.0-rc4/flink-1.15.0-src.tgz

[flink] annotated tag release-1.15.0-rc4 updated (3a4c11371e6 -> 9ce365b8e99)

2022-04-20 Thread gaoyunhaii
This is an automated email from the ASF dual-hosted git repository.

gaoyunhaii pushed a change to annotated tag release-1.15.0-rc4
in repository https://gitbox.apache.org/repos/asf/flink.git


*** WARNING: tag release-1.15.0-rc4 was modified! ***

from 3a4c11371e6 (commit)
  to 9ce365b8e99 (tag)
 tagging 3a4c11371e6f2aacd641d86c1d5b4fd86435f802 (commit)
 replaces pre-apache-rename
  by Yun Gao
  on Thu Apr 21 01:50:39 2022 +0800

- Log -
release-1.15.0-rc4
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEy+gr79gnsIr6hDl37b+SKnvISJcFAmJgR+8ACgkQ7b+SKnvI
SJf7mQ//RRURbz+kbvcBw/nP5qd/f6Zw4zFkxpHhT4+0akaGlToGj2d97QNWETaN
nhMhQj4bR3TwEkhAX4cnmUhifqRNqe+MgdbZ68O1w/8s4igrdl4LOdtPV9vNXUZ/
1bRM9U2V42b3+jpYJndFNBZVfdGuhUXzRIJDkdz06Xx7ZSotwOPGkk+UB5kYEaj0
Lv3vJY5/AZNynjvq+3tXgQ5X9mFwRxrFgqzlBZdXg7Dm1bRofqvNL4wL8BWzvJqy
Hn7F3OkxmGTplRicRgfXWMW7GBly/kU+GxhDIv5qXSENkaL4Au8PaQ/eyOqse0Ae
pOpuumNC/L1MJph8uTcDWlKSp4aaXrq+CMu7FCB0d3azgwi6PmdiQG75RVzMuYoq
LEk3UBtUoLknP4qxfyECXeay8aDEd6GrFSh2d+h9lO7IJlzG+ZtxxzzMDzcmOIwm
HIYGLRgINC5ZP+ez+S14iBapJYTxWy0CGwOy7JC2ow2mHrgTNk85Q6Av/c+ATEQ7
QTAN4FPqIvPL3njx0VwYoR/PQT38K+SY1w4zzLalc3UxodxN9zfa78Z3w1kJqtNq
pCi+b6CM5/wXQO9XO6Z+dB6DzR3TmMJmWYjDFGYX2iRR/TR0nXK1viDizORXwkEn
pIDPZzPUWTFvdsO09twFK9fEfQ9Y7hUpgOEzVkZOGC/FA1nolWs=
=rT+B
-----END PGP SIGNATURE-----
---


No new revisions were added by this update.

Summary of changes:



[flink] branch master updated: [FLINK-27145][table] Support code gen for aggregate function with empty parameter

2022-04-20 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 27c3f47781f [FLINK-27145][table] Support code gen for aggregate function with empty parameter
27c3f47781f is described below

commit 27c3f47781f3a9f0b804219c7b8037fe4a00dc75
Author: luoyuxia 
AuthorDate: Thu Apr 21 10:27:07 2022 +0800

[FLINK-27145][table] Support code gen for aggregate function with empty parameter

This closes #19418
---
 .../flink/table/planner/codegen/agg/batch/AggCodeGenHelper.scala   | 4 ++--
 .../table/planner/runtime/batch/table/AggregationITCase.scala      | 7 +++++++
 .../apache/flink/table/planner/utils/UserDefinedAggFunctions.scala | 4 ++++
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/agg/batch/AggCodeGenHelper.scala b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/agg/batch/AggCodeGenHelper.scala
index 5d1556f0918..bb1135fc236 100644
--- a/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/agg/batch/AggCodeGenHelper.scala
+++ b/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/agg/batch/AggCodeGenHelper.scala
@@ -718,11 +718,11 @@ object AggCodeGenHelper {
 val externalAccTypeTerm = typeTerm(externalAccType.getConversionClass)
 val externalAccTerm = newName("acc")
 val externalAccCode = genToExternalConverter(ctx, externalAccType, aggBufferName)
+val aggParametersCode = s"""${(Seq(externalAccTerm) ++ operandTerms).mkString(", ")}"""
 s"""
|$externalAccTypeTerm $externalAccTerm = $externalAccCode;
|${functionIdentifiers(function)}.accumulate(
-   |  $externalAccTerm,
-   |  ${operandTerms.mkString(", ")});
+   | $aggParametersCode);
   |$aggBufferName = ${genToInternalConverter(ctx, externalAccType)(externalAccTerm)};
|${aggBufferExpr.nullTerm} = false;
   """.stripMargin
diff --git a/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/batch/table/AggregationITCase.scala b/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/batch/table/AggregationITCase.scala
index d1a4296597f..ca62f705f37 100644
--- a/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/batch/table/AggregationITCase.scala
+++ b/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/batch/table/AggregationITCase.scala
@@ -246,15 +246,22 @@ class AggregationITCase extends BatchTestBase {
 val t2 = BatchTableEnvUtil
   .fromCollection(tEnv, new mutable.MutableList[(Int, String)], "a, b")
   .select('a.sum, myAgg('b), 'a.count)
+// test agg with empty parameter
+val t3 = BatchTableEnvUtil
+  .fromCollection(tEnv, new mutable.MutableList[(Int, String)], "a, b")
+  .select('a.sum, myAgg(), 'a.count)
 
 val expected1 = "null,0"
 val expected2 = "null,0,0"
+val expected3 = "null,0,0"
 
 val results1 = executeQuery(t1)
 val results2 = executeQuery(t2)
+val results3 = executeQuery(t3)
 
 TestBaseUtils.compareResultAsText(results1.asJava, expected1)
 TestBaseUtils.compareResultAsText(results2.asJava, expected2)
+TestBaseUtils.compareResultAsText(results3.asJava, expected3)
 
   }
 
diff --git a/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/utils/UserDefinedAggFunctions.scala b/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/utils/UserDefinedAggFunctions.scala
index 4938dda9a07..78d7b5f31c0 100644
--- a/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/utils/UserDefinedAggFunctions.scala
+++ b/flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/utils/UserDefinedAggFunctions.scala
@@ -132,6 +132,10 @@ class NonMergableCount extends AggregateFunction[Long, NonMergableCountAcc] {
 }
   }
 
+  def accumulate(acc: NonMergableCountAcc): Unit = {
+    acc.count = acc.count + 1
+  }
+
   override def createAccumulator(): NonMergableCountAcc = NonMergableCountAcc(0)
 
   override def getValue(acc: NonMergableCountAcc): Long = acc.count
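The core of the FLINK-27145 fix above is how the generated `accumulate(...)` call assembles its parameter list: the accumulator term is prepended to the operand terms and the whole list is joined once, so an aggregate function with no parameters still yields a valid single-argument call instead of one with a dangling comma. A stand-alone sketch of that string assembly (plain Python, not Flink's actual codegen API; the function and variable names are illustrative):

```python
def gen_accumulate_call(function_term: str, acc_term: str, operand_terms: list[str]) -> str:
    """Mimic AggCodeGenHelper's fixed logic: join the accumulator term and
    the operand terms as ONE list, so empty operands produce 'f(acc)' rather
    than the old broken 'f(acc, )'."""
    params = ", ".join([acc_term] + operand_terms)
    return f"{function_term}.accumulate({params});"

# With operands:
print(gen_accumulate_call("myAgg", "acc", ["a", "b"]))  # myAgg.accumulate(acc, a, b);
# Empty parameter list -- no trailing comma:
print(gen_accumulate_call("myAgg", "acc", []))  # myAgg.accumulate(acc);
```

The old code emitted `accumulate($acc, ${operands.mkString(", ")})` unconditionally, which produced `accumulate(acc, )` for zero operands; folding the accumulator into the joined sequence removes that edge case.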



[flink-table-store] branch release-0.1 updated: [FLINK-27316] Prevent users from changing bucket number

2022-04-20 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.1
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.1 by this push:
 new 320bf8c  [FLINK-27316] Prevent users from changing bucket number
320bf8c is described below

commit 320bf8ce312ba4b1b99a4c20eac3d2aae956293d
Author: Jane Chan <55568005+ladyfor...@users.noreply.github.com>
AuthorDate: Thu Apr 21 10:15:20 2022 +0800

[FLINK-27316] Prevent users from changing bucket number

This closes #96
---
 .../store/connector/ReadWriteTableITCase.java  | 60 ++
 .../flink/table/store/file/FileStoreImpl.java  |  6 ++-
 .../store/file/operation/FileStoreScanImpl.java| 13 -
 3 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/ReadWriteTableITCase.java b/flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/ReadWriteTableITCase.java
index 5d32712..8ed0076 100644
--- a/flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/ReadWriteTableITCase.java
+++ b/flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/ReadWriteTableITCase.java
@@ -65,6 +65,7 @@ import static org.apache.flink.table.store.connector.TableStoreFactoryOptions.SC
 import static org.apache.flink.table.store.connector.TableStoreFactoryOptions.SINK_PARALLELISM;
 import static org.apache.flink.table.store.connector.TableStoreTestBase.createResolvedTable;
 import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
 
 /** IT cases for managed table dml. */
 public class ReadWriteTableITCase extends ReadWriteTableTestBase {
@@ -1408,6 +1409,65 @@ public class ReadWriteTableITCase extends ReadWriteTableTestBase {
 changelogRow("+I", "Euro", 119L, "2022-01-02")));
 }
 
+@Test
+public void testChangeBucketNumber() throws Exception {
+rootPath = TEMPORARY_FOLDER.newFolder().getPath();
+tEnv = StreamTableEnvironment.create(buildBatchEnv(), EnvironmentSettings.inBatchMode());
+tEnv.executeSql(
+String.format(
+"CREATE TABLE IF NOT EXISTS rates (\n"
++ "currency STRING,\n"
++ " rate BIGINT\n"
++ ") WITH (\n"
++ " 'bucket' = '2',\n"
++ " 'path' = '%s'\n"
++ ")",
+rootPath));
+tEnv.executeSql("INSERT INTO rates VALUES('US Dollar', 102)").await();
+
+// increase bucket num from 2 to 3
+tEnv.executeSql("ALTER TABLE rates SET ('bucket' = '3')");
+assertThatThrownBy(
+() -> tEnv.executeSql("INSERT INTO rates VALUES('US Dollar', 102)").await())
+.hasRootCauseInstanceOf(IllegalStateException.class)
+.hasRootCauseMessage(
+"Bucket number has been changed. Manifest might be corrupted.");
+assertThatThrownBy(() -> tEnv.executeSql("SELECT * FROM rates").await())
+.hasRootCauseInstanceOf(IllegalStateException.class)
+.hasRootCauseMessage(
+"Bucket number has been changed. Manifest might be corrupted.");
+
+// decrease bucket num from 3 to 1
+tEnv.executeSql("ALTER TABLE rates RESET ('bucket')");
+assertThatThrownBy(() -> tEnv.executeSql("SELECT * FROM rates").await())
+.hasRootCauseInstanceOf(IllegalStateException.class)
+.hasRootCauseMessage(
+"Bucket number has been changed. Manifest might be corrupted.");
+}
+
+@Test
+public void testSuccessiveWriteAndRead() throws Exception {
+String managedTable =
+collectAndCheckBatchReadWrite(false, false, null, Collections.emptyList(), rates());
+// write rates() twice
+tEnv.executeSql(
+String.format(
+"INSERT INTO `%s`"
++ " VALUES ('US Dollar', 102),\n"
++ "('Euro', 114),\n"
++ "('Yen', 1),\n"
++ "('Euro', 114),\n"
++ "('Euro', 119)",
+managedTable))
+.await();
+collectAndCheck(
+tEnv,
+managedTable,
+Collections.emptyMap(),
+"currency = 'Yen'",
+Collections.nCopies(2, changelogRow("+I", "Yen", 1L)));
+}
+
 // 

[flink-table-store] branch master updated: [FLINK-27316] Prevent users from changing bucket number

2022-04-20 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new c9feaf5  [FLINK-27316] Prevent users from changing bucket number
c9feaf5 is described below

commit c9feaf5b66d0c87fc228fc1e2b2ead1a7669eb54
Author: Jane Chan <55568005+ladyfor...@users.noreply.github.com>
AuthorDate: Thu Apr 21 10:15:20 2022 +0800

[FLINK-27316] Prevent users from changing bucket number

This closes #96
---
 .../store/connector/ReadWriteTableITCase.java  | 60 ++
 .../flink/table/store/file/FileStoreImpl.java  |  6 ++-
 .../store/file/operation/FileStoreScanImpl.java| 13 -
 3 files changed, 77 insertions(+), 2 deletions(-)

diff --git a/flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/ReadWriteTableITCase.java b/flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/ReadWriteTableITCase.java
index 5d32712..8ed0076 100644
--- a/flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/ReadWriteTableITCase.java
+++ b/flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/ReadWriteTableITCase.java
@@ -65,6 +65,7 @@ import static org.apache.flink.table.store.connector.TableStoreFactoryOptions.SC
 import static org.apache.flink.table.store.connector.TableStoreFactoryOptions.SINK_PARALLELISM;
 import static org.apache.flink.table.store.connector.TableStoreTestBase.createResolvedTable;
 import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
 
 /** IT cases for managed table dml. */
 public class ReadWriteTableITCase extends ReadWriteTableTestBase {
@@ -1408,6 +1409,65 @@ public class ReadWriteTableITCase extends ReadWriteTableTestBase {
 changelogRow("+I", "Euro", 119L, "2022-01-02")));
 }
 
+@Test
+public void testChangeBucketNumber() throws Exception {
+rootPath = TEMPORARY_FOLDER.newFolder().getPath();
+tEnv = StreamTableEnvironment.create(buildBatchEnv(), EnvironmentSettings.inBatchMode());
+tEnv.executeSql(
+String.format(
+"CREATE TABLE IF NOT EXISTS rates (\n"
++ "currency STRING,\n"
++ " rate BIGINT\n"
++ ") WITH (\n"
++ " 'bucket' = '2',\n"
++ " 'path' = '%s'\n"
++ ")",
+rootPath));
+tEnv.executeSql("INSERT INTO rates VALUES('US Dollar', 102)").await();
+
+// increase bucket num from 2 to 3
+tEnv.executeSql("ALTER TABLE rates SET ('bucket' = '3')");
+assertThatThrownBy(
+() -> tEnv.executeSql("INSERT INTO rates VALUES('US Dollar', 102)").await())
+.hasRootCauseInstanceOf(IllegalStateException.class)
+.hasRootCauseMessage(
+"Bucket number has been changed. Manifest might be corrupted.");
+assertThatThrownBy(() -> tEnv.executeSql("SELECT * FROM rates").await())
+.hasRootCauseInstanceOf(IllegalStateException.class)
+.hasRootCauseMessage(
+"Bucket number has been changed. Manifest might be corrupted.");
+
+// decrease bucket num from 3 to 1
+tEnv.executeSql("ALTER TABLE rates RESET ('bucket')");
+assertThatThrownBy(() -> tEnv.executeSql("SELECT * FROM rates").await())
+.hasRootCauseInstanceOf(IllegalStateException.class)
+.hasRootCauseMessage(
+"Bucket number has been changed. Manifest might be corrupted.");
+}
+
+@Test
+public void testSuccessiveWriteAndRead() throws Exception {
+String managedTable =
+collectAndCheckBatchReadWrite(false, false, null, Collections.emptyList(), rates());
+// write rates() twice
+tEnv.executeSql(
+String.format(
+"INSERT INTO `%s`"
++ " VALUES ('US Dollar', 102),\n"
++ "('Euro', 114),\n"
++ "('Yen', 1),\n"
++ "('Euro', 114),\n"
++ "('Euro', 119)",
+managedTable))
+.await();
+collectAndCheck(
+tEnv,
+managedTable,
+Collections.emptyMap(),
+"currency = 'Yen'",
+Collections.nCopies(2, changelogRow("+I", "Yen", 1L)));
+}
+
 //  Tools 
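The guard added by the two FLINK-27316 commits above reduces to a single invariant: the bucket count recorded in the table's manifest must match the current `'bucket'` table option, and any read or write is rejected once they diverge. A simplified stand-in for that check (names are illustrative, not the actual `FileStoreScanImpl` API; the error message is the one asserted in the test above):

```python
class BucketNumberChanged(Exception):
    """Raised when the configured bucket count no longer matches the manifest."""


def check_bucket_number(manifest_bucket: int, configured_bucket: int) -> None:
    # Data files are physically partitioned by bucket at write time, so a
    # changed bucket count would route keys to the wrong files; fail fast.
    if manifest_bucket != configured_bucket:
        raise BucketNumberChanged(
            "Bucket number has been changed. Manifest might be corrupted.")


check_bucket_number(2, 2)  # unchanged: ok
try:
    # models: ALTER TABLE rates SET ('bucket' = '3') followed by INSERT/SELECT
    check_bucket_number(2, 3)
except BucketNumberChanged as e:
    print(e)
```

The design choice is to validate eagerly on every scan/write rather than attempt a rebalancing migration, which keeps the failure mode explicit for the user.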

[flink] 01/02: [FLINK-27206][metrics] Remove reflection annotations from reporters

2022-04-20 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 200d0c144454664c978eca8547d80b4f6c558731
Author: Chesnay Schepler 
AuthorDate: Tue Apr 12 15:51:44 2022 +0200

[FLINK-27206][metrics] Remove reflection annotations from reporters
---
 .../content.zh/docs/deployment/metric_reporters.md |  6 +-
 docs/content/docs/deployment/metric_reporters.md   |  6 +-
 .../tests/PrometheusReporterEndToEndITCase.java| 74 --
 .../flink/metrics/datadog/DatadogHttpReporter.java |  3 -
 .../datadog/DatadogHttpReporterFactory.java|  3 -
 .../flink/metrics/graphite/GraphiteReporter.java   |  3 -
 .../metrics/graphite/GraphiteReporterFactory.java  |  3 -
 .../flink/metrics/influxdb/InfluxdbReporter.java   |  3 -
 .../metrics/influxdb/InfluxdbReporterFactory.java  |  3 -
 .../org/apache/flink/metrics/jmx/JMXReporter.java  |  2 -
 .../flink/metrics/jmx/JMXReporterFactory.java  |  2 -
 .../prometheus/PrometheusPushGatewayReporter.java  |  4 --
 .../PrometheusPushGatewayReporterFactory.java  |  3 -
 .../metrics/prometheus/PrometheusReporter.java |  3 -
 .../prometheus/PrometheusReporterFactory.java  |  3 -
 .../apache/flink/metrics/slf4j/Slf4jReporter.java  |  2 -
 .../flink/metrics/slf4j/Slf4jReporterFactory.java  |  3 -
 .../flink/metrics/statsd/StatsDReporter.java   |  2 -
 .../metrics/statsd/StatsDReporterFactory.java  |  3 -
 19 files changed, 20 insertions(+), 111 deletions(-)

diff --git a/docs/content.zh/docs/deployment/metric_reporters.md b/docs/content.zh/docs/deployment/metric_reporters.md
index d65970bc13e..8fcf2dc9520 100644
--- a/docs/content.zh/docs/deployment/metric_reporters.md
+++ b/docs/content.zh/docs/deployment/metric_reporters.md
@@ -49,7 +49,7 @@ metrics.reporter.my_jmx_reporter.port: 9020-9040
 metrics.reporter.my_jmx_reporter.scope.variables.excludes: job_id;task_attempt_num
 metrics.reporter.my_jmx_reporter.scope.variables.additional: cluster_name:my_test_cluster,tag_name:tag_value
 
-metrics.reporter.my_other_reporter.class: org.apache.flink.metrics.graphite.GraphiteReporter
+metrics.reporter.my_other_reporter.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
 metrics.reporter.my_other_reporter.host: 192.168.1.1
 metrics.reporter.my_other_reporter.port: 1
 ```
@@ -180,7 +180,7 @@ Parameters:
 Example configuration:
 
 ```yaml
-metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
+metrics.reporter.prom.factory.class: org.apache.flink.metrics.prometheus.PrometheusReporterFactory
 ```
 
 Flink metric types are mapped to Prometheus metric types as follows: 
@@ -206,7 +206,7 @@ Parameters:
 Example configuration:
 
 ```yaml
-metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
+metrics.reporter.promgateway.factory.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporterFactory
 metrics.reporter.promgateway.hostUrl: http://localhost:9091
 metrics.reporter.promgateway.jobName: myJob
 metrics.reporter.promgateway.randomJobNameSuffix: true
diff --git a/docs/content/docs/deployment/metric_reporters.md b/docs/content/docs/deployment/metric_reporters.md
index 0f2eb250970..fb7ece29c4b 100644
--- a/docs/content/docs/deployment/metric_reporters.md
+++ b/docs/content/docs/deployment/metric_reporters.md
@@ -49,7 +49,7 @@ metrics.reporter.my_jmx_reporter.port: 9020-9040
 metrics.reporter.my_jmx_reporter.scope.variables.excludes: job_id;task_attempt_num
 metrics.reporter.my_jmx_reporter.scope.variables.additional: cluster_name:my_test_cluster,tag_name:tag_value
 
-metrics.reporter.my_other_reporter.class: org.apache.flink.metrics.graphite.GraphiteReporter
+metrics.reporter.my_other_reporter.factory.class: org.apache.flink.metrics.graphite.GraphiteReporterFactory
 metrics.reporter.my_other_reporter.host: 192.168.1.1
 metrics.reporter.my_other_reporter.port: 1
 ```
@@ -180,7 +180,7 @@ Parameters:
 Example configuration:
 
 ```yaml
-metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
+metrics.reporter.prom.factory.class: org.apache.flink.metrics.prometheus.PrometheusReporterFactory
 ```
 
 Flink metric types are mapped to Prometheus metric types as follows: 
@@ -206,7 +206,7 @@ Parameters:
 Example configuration:
 
 ```yaml
-metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
+metrics.reporter.promgateway.factory.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporterFactory
 metrics.reporter.promgateway.hostUrl: http://localhost:9091
 metrics.reporter.promgateway.jobName: myJob
 metrics.reporter.promgateway.randomJobNameSuffix: true
diff --git 
a/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/src/test/java/org/apache/flink/metrics/prometheus/tests/PrometheusReporterEndToEndITCase.java
 

[flink] branch master updated (e77cd23dff7 -> 77ae6dd63c9)

2022-04-20 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from e77cd23dff7 [FLINK-27325][build] Remove custom forkCount settings
 new 200d0c14445 [FLINK-27206][metrics] Remove reflection annotations from 
reporters
 new 77ae6dd63c9 [FLINK-27206][metrics] Deprecate reflection-based reporter 
instantiation

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../content.zh/docs/deployment/metric_reporters.md | 15 ++---
 docs/content/docs/deployment/metric_reporters.md   | 15 ++---
 .../shortcodes/generated/metric_configuration.html |  6 --
 .../generated/metric_reporters_section.html|  6 --
 .../apache/flink/configuration/MetricOptions.java  |  4 +-
 .../tests/PrometheusReporterEndToEndITCase.java| 74 --
 .../metrics/reporter/InstantiateViaFactory.java|  3 +
 .../InterceptInstantiationViaReflection.java   |  2 +
 .../flink/metrics/reporter/MetricReporter.java |  6 +-
 .../metrics/reporter/MetricReporterFactory.java|  4 --
 .../flink/metrics/datadog/DatadogHttpReporter.java |  3 -
 .../datadog/DatadogHttpReporterFactory.java|  3 -
 .../flink/metrics/graphite/GraphiteReporter.java   |  3 -
 .../metrics/graphite/GraphiteReporterFactory.java  |  3 -
 .../flink/metrics/influxdb/InfluxdbReporter.java   |  3 -
 .../metrics/influxdb/InfluxdbReporterFactory.java  |  3 -
 .../org/apache/flink/metrics/jmx/JMXReporter.java  |  2 -
 .../flink/metrics/jmx/JMXReporterFactory.java  |  2 -
 .../prometheus/PrometheusPushGatewayReporter.java  |  4 --
 .../PrometheusPushGatewayReporterFactory.java  |  3 -
 .../metrics/prometheus/PrometheusReporter.java |  3 -
 .../prometheus/PrometheusReporterFactory.java  |  3 -
 .../apache/flink/metrics/slf4j/Slf4jReporter.java  |  2 -
 .../flink/metrics/slf4j/Slf4jReporterFactory.java  |  3 -
 .../flink/metrics/statsd/StatsDReporter.java   |  2 -
 .../metrics/statsd/StatsDReporterFactory.java  |  3 -
 .../flink/runtime/metrics/ReporterSetup.java   | 21 +++---
 .../flink/runtime/metrics/ReporterSetupTest.java   | 14 ++--
 28 files changed, 58 insertions(+), 157 deletions(-)



[flink] 02/02: [FLINK-27206][metrics] Deprecate reflection-based reporter instantiation

2022-04-20 Thread chesnay

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 77ae6dd63c92fcf897184a929118d57e25800e6a
Author: Chesnay Schepler 
AuthorDate: Tue Apr 12 15:52:10 2022 +0200

[FLINK-27206][metrics] Deprecate reflection-based reporter instantiation
---
 docs/content.zh/docs/deployment/metric_reporters.md |  9 -
 docs/content/docs/deployment/metric_reporters.md|  9 -
 .../shortcodes/generated/metric_configuration.html  |  6 --
 .../generated/metric_reporters_section.html |  6 --
 .../apache/flink/configuration/MetricOptions.java   |  4 ++--
 .../metrics/reporter/InstantiateViaFactory.java |  3 +++
 .../InterceptInstantiationViaReflection.java|  2 ++
 .../flink/metrics/reporter/MetricReporter.java  |  6 +-
 .../metrics/reporter/MetricReporterFactory.java |  4 
 .../apache/flink/runtime/metrics/ReporterSetup.java | 21 -
 .../flink/runtime/metrics/ReporterSetupTest.java| 14 ++
 11 files changed, 38 insertions(+), 46 deletions(-)

diff --git a/docs/content.zh/docs/deployment/metric_reporters.md 
b/docs/content.zh/docs/deployment/metric_reporters.md
index 8fcf2dc9520..87d6605f8f5 100644
--- a/docs/content.zh/docs/deployment/metric_reporters.md
+++ b/docs/content.zh/docs/deployment/metric_reporters.md
@@ -36,7 +36,7 @@ Below is a list of parameters that are generally applicable to all reporters. Al
 
 {{< include_reporter_config "layouts/shortcodes/generated/metric_reporters_section.html" >}}
 
-All reporters must at least have either the `class` or `factory.class` property. Which property may/should be used depends on the reporter implementation. See the individual reporter configuration sections for more information.
+All reporter configurations must contain the `factory.class` property.
 Some reporters (referred to as `Scheduled`) allow specifying a reporting `interval`.
 
 Example reporter configuration that specifies multiple reporters:
@@ -54,10 +54,9 @@ metrics.reporter.my_other_reporter.host: 192.168.1.1
 metrics.reporter.my_other_reporter.port: 1
 ```
 
-**Important:** The jar containing the reporter must be accessible when Flink is started. Reporters that support the
- `factory.class` property can be loaded as [plugins]({{< ref "docs/deployment/filesystems/plugins" >}}). Otherwise the jar must be placed
- in the /lib folder. Reporters that are shipped with Flink (i.e., all reporters documented on this page) are available
- by default.
+**Important:** The jar containing the reporter must be accessible when Flink is started.
+ Reporters are loaded as [plugins]({{< ref "docs/deployment/filesystems/plugins" >}}).
+ All reporters documented on this page are available by default.
 
 You can write your own `Reporter` by implementing the `org.apache.flink.metrics.reporter.MetricReporter` interface.
 If the Reporter should send out reports regularly you have to implement the `Scheduled` interface as well.
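The hunks above move reporter loading to the factory pattern. As a rough standalone sketch of that pattern (all class names here are hypothetical stand-ins, not Flink's real `org.apache.flink.metrics.reporter` types or its `ReporterSetup` logic), a `factory.class`-style property is resolved by instantiating the configured class reflectively as a factory; the factory, not the reporter itself, then builds the reporter:

```java
import java.util.HashMap;
import java.util.Map;

public class ReporterFactoryDemo {

    /** Stand-in for a metric reporter interface (hypothetical, simplified). */
    interface MetricReporter {
        void open(Map<String, String> config);
    }

    /** Stand-in for a reporter factory interface (hypothetical, simplified). */
    interface MetricReporterFactory {
        MetricReporter createMetricReporter(Map<String, String> config);
    }

    /** A toy reporter that just remembers the host it was configured with. */
    static class LoggingReporter implements MetricReporter {
        String host;

        @Override
        public void open(Map<String, String> config) {
            this.host = config.getOrDefault("host", "localhost");
        }
    }

    /** The class a user would name in a factory.class-style property. */
    public static class LoggingReporterFactory implements MetricReporterFactory {
        @Override
        public MetricReporter createMetricReporter(Map<String, String> config) {
            LoggingReporter reporter = new LoggingReporter();
            reporter.open(config);
            return reporter;
        }
    }

    /** Resolves the property reflectively: the factory is instantiated, never the reporter. */
    static MetricReporter instantiate(String factoryClassName, Map<String, String> config) {
        try {
            MetricReporterFactory factory = (MetricReporterFactory)
                    Class.forName(factoryClassName).getDeclaredConstructor().newInstance();
            return factory.createMetricReporter(config);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot load reporter factory: " + factoryClassName, e);
        }
    }

    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("host", "192.168.1.1");
        MetricReporter reporter = instantiate(
                ReporterFactoryDemo.class.getName() + "$LoggingReporterFactory", config);
        System.out.println(((LoggingReporter) reporter).host); // prints 192.168.1.1
    }
}
```

The indirection through a factory is what makes plugin-classloader loading possible; Flink's actual implementation layers plugin discovery on top, which this sketch deliberately omits.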
diff --git a/docs/content/docs/deployment/metric_reporters.md 
b/docs/content/docs/deployment/metric_reporters.md
index fb7ece29c4b..32b4ae16de1 100644
--- a/docs/content/docs/deployment/metric_reporters.md
+++ b/docs/content/docs/deployment/metric_reporters.md
@@ -36,7 +36,7 @@ Below is a list of parameters that are generally applicable to all reporters. Al
 
 {{< include_reporter_config "layouts/shortcodes/generated/metric_reporters_section.html" >}}
 
-All reporters must at least have either the `class` or `factory.class` property. Which property may/should be used depends on the reporter implementation. See the individual reporter configuration sections for more information.
+All reporter configurations must contain the `factory.class` property.
 Some reporters (referred to as `Scheduled`) allow specifying a reporting `interval`.
 
 Example reporter configuration that specifies multiple reporters:
@@ -54,10 +54,9 @@ metrics.reporter.my_other_reporter.host: 192.168.1.1
 metrics.reporter.my_other_reporter.port: 1
 ```
 
-**Important:** The jar containing the reporter must be accessible when Flink is started. Reporters that support the
- `factory.class` property can be loaded as [plugins]({{< ref "docs/deployment/filesystems/plugins" >}}). Otherwise the jar must be placed
- in the /lib folder. Reporters that are shipped with Flink (i.e., all reporters documented on this page) are available
- by default.
+**Important:** The jar containing the reporter must be accessible when Flink is started.
+ Reporters are loaded as [plugins]({{< ref "docs/deployment/filesystems/plugins" >}}).
+ All reporters documented on this page are available by default.
 
 You can write your own `Reporter` by implementing the `org.apache.flink.metrics.reporter.MetricReporter` interface.
 If the Reporter should send out reports regularly you have to implement the `Scheduled` interface as well.

[flink] branch release-1.15 updated (bacce4e25ea -> 4fad1212e30)

2022-04-20 Thread chesnay

chesnay pushed a change to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git


from bacce4e25ea [FLINK-27319] Duplicated '-t' option for savepoint format 
and deployment target
 new 141671b476b [FLINK-27287][tests] Migrate tests to MiniClusterResource
 new 4fad1212e30 [FLINK-27287][tests] Use random ports



Summary of changes:
 .../file/sink/FileSinkCompactionSwitchITCase.java  |   5 +-
 .../flink/connector/file/sink/FileSinkITBase.java  |   6 +-
 .../file/sink/writer/FileSinkMigrationITCase.java  |   6 +-
 .../util/JobManagerWatermarkTrackerTest.java   |  49 ++---
 .../formats/avro/AvroExternalJarProgramITCase.java |   1 +
 .../minicluster/MiniClusterConfiguration.java  |  15 +++
 .../FileExecutionGraphInfoStoreTest.java   |   3 +-
 .../MemoryExecutionGraphInfoStoreTest.java |   1 +
 .../network/partition/FileBufferReaderITCase.java  |   3 +-
 .../runtime/minicluster/MiniClusterITCase.java |  41 
 .../CoordinatorEventsExactlyOnceITCase.java|  41 +++-
 .../flink/runtime/shuffle/ShuffleMasterTest.java   |   3 +-
 .../runtime/taskexecutor/TaskExecutorITCase.java   |  49 -
 .../OperatorEventSendingCheckpointITCase.java  |   1 +
 .../CheckpointRestoreWithUidHashITCase.java| 112 ++---
 .../DefaultSchedulerLocalRecoveryITCase.java   |   4 +-
 .../flink/test/runtime/JobGraphRunningUtil.java|   3 +-
 .../flink/test/runtime/SchedulingITCase.java   |   4 +-
 .../PipelinedRegionSchedulingITCase.java   |   3 +-
 19 files changed, 141 insertions(+), 209 deletions(-)



[flink] 01/02: [FLINK-27287][tests] Migrate tests to MiniClusterResource

2022-04-20 Thread chesnay

chesnay pushed a commit to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 141671b476b65ed44cbdb7264557a1e32d8d42ee
Author: Chesnay Schepler 
AuthorDate: Tue Apr 19 13:52:21 2022 +0200

[FLINK-27287][tests] Migrate tests to MiniClusterResource
---
 .../util/JobManagerWatermarkTrackerTest.java   |  49 ++---
 .../CoordinatorEventsExactlyOnceITCase.java|  41 +++-
 .../runtime/taskexecutor/TaskExecutorITCase.java   |  49 -
 .../CheckpointRestoreWithUidHashITCase.java| 112 ++---
 4 files changed, 95 insertions(+), 156 deletions(-)

diff --git 
a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/util/JobManagerWatermarkTrackerTest.java
 
b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/util/JobManagerWatermarkTrackerTest.java
index 6b1f8d06b67..26a6af2e444 100644
--- 
a/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/util/JobManagerWatermarkTrackerTest.java
+++ 
b/flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/util/JobManagerWatermarkTrackerTest.java
@@ -18,57 +18,30 @@
 package org.apache.flink.streaming.connectors.kinesis.util;
 
 import org.apache.flink.configuration.Configuration;
-import org.apache.flink.configuration.RestOptions;
-import org.apache.flink.runtime.minicluster.MiniCluster;
-import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;
+import org.apache.flink.runtime.testutils.MiniClusterResource;
+import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.streaming.api.functions.sink.SinkFunction;
 import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
 
-import org.junit.AfterClass;
 import org.junit.Assert;
-import org.junit.BeforeClass;
+import org.junit.ClassRule;
 import org.junit.Test;
 
 /** Test for {@link JobManagerWatermarkTracker}. */
 public class JobManagerWatermarkTrackerTest {
 
-private static MiniCluster flink;
-
-@BeforeClass
-public static void setUp() throws Exception {
-final Configuration config = new Configuration();
-config.setInteger(RestOptions.PORT, 0);
-
-final MiniClusterConfiguration miniClusterConfiguration =
-new MiniClusterConfiguration.Builder()
-.setConfiguration(config)
-.setNumTaskManagers(1)
-.setNumSlotsPerTaskManager(1)
-.build();
-
-flink = new MiniCluster(miniClusterConfiguration);
-
-flink.start();
-}
-
-@AfterClass
-public static void tearDown() throws Exception {
-if (flink != null) {
-flink.close();
-}
-}
+@ClassRule
+public static final MiniClusterResource FLINK =
+new MiniClusterResource(
+new MiniClusterResourceConfiguration.Builder()
+.setNumberTaskManagers(1)
+.setNumberSlotsPerTaskManager(1)
+.build());
 
 @Test
 public void testUpateWatermark() throws Exception {
-final Configuration clientConfiguration = new Configuration();
-clientConfiguration.setInteger(RestOptions.RETRY_MAX_ATTEMPTS, 0);
-
-final StreamExecutionEnvironment env =
-StreamExecutionEnvironment.createRemoteEnvironment(
-flink.getRestAddress().get().getHost(),
-flink.getRestAddress().get().getPort(),
-clientConfiguration);
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 
 env.addSource(new TestSourceFunction(new JobManagerWatermarkTracker("fakeId")))
 .addSink(new SinkFunction() {});
diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/operators/coordination/CoordinatorEventsExactlyOnceITCase.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/operators/coordination/CoordinatorEventsExactlyOnceITCase.java
index 9bf4106c0d3..2e0c3f44143 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/operators/coordination/CoordinatorEventsExactlyOnceITCase.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/operators/coordination/CoordinatorEventsExactlyOnceITCase.java
@@ -22,8 +22,6 @@ import org.apache.flink.api.common.JobExecutionResult;
 import org.apache.flink.api.common.accumulators.ListAccumulator;
 import org.apache.flink.configuration.ConfigOption;
 import org.apache.flink.configuration.ConfigOptions;
-import org.apache.flink.configuration.Configuration;
-import 

[flink] 02/02: [FLINK-27287][tests] Use random ports

2022-04-20 Thread chesnay

chesnay pushed a commit to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 4fad1212e30074a8786213793224ca4b0ffcc401
Author: Chesnay Schepler 
AuthorDate: Tue Apr 19 12:26:23 2022 +0200

[FLINK-27287][tests] Use random ports
---
 .../file/sink/FileSinkCompactionSwitchITCase.java  |  5 +--
 .../flink/connector/file/sink/FileSinkITBase.java  |  6 +---
 .../file/sink/writer/FileSinkMigrationITCase.java  |  6 +---
 .../formats/avro/AvroExternalJarProgramITCase.java |  1 +
 .../minicluster/MiniClusterConfiguration.java  | 15 
 .../FileExecutionGraphInfoStoreTest.java   |  3 +-
 .../MemoryExecutionGraphInfoStoreTest.java |  1 +
 .../network/partition/FileBufferReaderITCase.java  |  3 +-
 .../runtime/minicluster/MiniClusterITCase.java | 41 +-
 .../flink/runtime/shuffle/ShuffleMasterTest.java   |  3 +-
 .../OperatorEventSendingCheckpointITCase.java  |  1 +
 .../DefaultSchedulerLocalRecoveryITCase.java   |  4 +--
 .../flink/test/runtime/JobGraphRunningUtil.java|  3 +-
 .../flink/test/runtime/SchedulingITCase.java   |  4 +--
 .../PipelinedRegionSchedulingITCase.java   |  3 +-
 15 files changed, 46 insertions(+), 53 deletions(-)

diff --git 
a/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/FileSinkCompactionSwitchITCase.java
 
b/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/FileSinkCompactionSwitchITCase.java
index d4a68ca648c..aa49b72146f 100644
--- 
a/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/FileSinkCompactionSwitchITCase.java
+++ 
b/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/FileSinkCompactionSwitchITCase.java
@@ -24,7 +24,6 @@ import org.apache.flink.api.common.state.ListState;
 import org.apache.flink.api.common.state.ListStateDescriptor;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.ExecutionOptions;
-import org.apache.flink.configuration.RestOptions;
 import org.apache.flink.configuration.StateChangelogOptions;
 import org.apache.flink.connector.file.sink.FileSink.DefaultRowFormatBuilder;
 import org.apache.flink.connector.file.sink.compactor.DecoderBasedReader;
@@ -178,13 +177,11 @@ public class FileSinkCompactionSwitchITCase extends 
TestLogger {
 JobGraph jobGraph = createJobGraph(cpPath, originFileSink, false, 
sendCountMap);
 JobGraph restoringJobGraph = createJobGraph(cpPath, restoredFileSink, 
true, sendCountMap);
 
-final Configuration config = new Configuration();
-config.setString(RestOptions.BIND_PORT, "18081-19000");
 final MiniClusterConfiguration cfg =
 new MiniClusterConfiguration.Builder()
+.withRandomPorts()
 .setNumTaskManagers(1)
 .setNumSlotsPerTaskManager(4)
-.setConfiguration(config)
 .build();
 
 try (MiniCluster miniCluster = new MiniCluster(cfg)) {
diff --git 
a/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/FileSinkITBase.java
 
b/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/FileSinkITBase.java
index de8de90eb22..7577df750d5 100644
--- 
a/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/FileSinkITBase.java
+++ 
b/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/FileSinkITBase.java
@@ -18,8 +18,6 @@
 
 package org.apache.flink.connector.file.sink;
 
-import org.apache.flink.configuration.Configuration;
-import org.apache.flink.configuration.RestOptions;
 import org.apache.flink.connector.file.sink.utils.IntegerFileSinkTestDataUtils;
 import 
org.apache.flink.connector.file.sink.utils.PartSizeAndCheckpointRollingPolicy;
 import org.apache.flink.core.fs.Path;
@@ -64,13 +62,11 @@ public abstract class FileSinkITBase extends TestLogger {
 
 JobGraph jobGraph = createJobGraph(path);
 
-final Configuration config = new Configuration();
-config.setString(RestOptions.BIND_PORT, "18081-19000");
 final MiniClusterConfiguration cfg =
 new MiniClusterConfiguration.Builder()
+.withRandomPorts()
 .setNumTaskManagers(1)
 .setNumSlotsPerTaskManager(4)
-.setConfiguration(config)
 .build();
 
 try (MiniCluster miniCluster = new MiniCluster(cfg)) {
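Both hunks above swap a hard-coded `RestOptions.BIND_PORT` range for random ports. A helper like `withRandomPorts()` typically boils down to binding port 0 so the operating system assigns a free ephemeral port; here is a minimal standalone sketch of that trick (helper name is mine, no Flink dependency, and note the port is only guaranteed free at the instant it is released):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

public class RandomPortDemo {

    /** Asks the OS for a free ephemeral port by binding port 0, then releasing it. */
    static int findFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            // getLocalPort() reports the concrete port the OS picked for the 0 request.
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        int port = findFreePort();
        // Ephemeral ports are strictly positive and fit in 16 bits.
        System.out.println(port > 0 && port <= 65535); // prints true
    }
}
```

Using OS-assigned ports instead of a fixed `18081-19000` range is what lets tests run concurrently without colliding on the same port.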
diff --git 
a/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/writer/FileSinkMigrationITCase.java
 

[flink] branch master updated: [FLINK-27325][build] Remove custom forkCount settings

2022-04-20 Thread chesnay

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new e77cd23dff7 [FLINK-27325][build] Remove custom forkCount settings
e77cd23dff7 is described below

commit e77cd23dff79c7ff93aa0d4a187b8bcd1f617341
Author: Chesnay Schepler 
AuthorDate: Tue Apr 19 14:48:27 2022 +0200

[FLINK-27325][build] Remove custom forkCount settings
---
 flink-connectors/flink-connector-elasticsearch6/pom.xml | 14 --
 flink-connectors/flink-connector-elasticsearch7/pom.xml | 14 --
 flink-connectors/flink-connector-hbase-1.4/pom.xml  | 13 -
 flink-connectors/flink-connector-hbase-2.2/pom.xml  | 13 -
 flink-connectors/flink-connector-kafka/pom.xml  |  8 
 flink-connectors/flink-connector-pulsar/pom.xml |  8 
 flink-kubernetes/pom.xml|  7 ---
 7 files changed, 77 deletions(-)

diff --git a/flink-connectors/flink-connector-elasticsearch6/pom.xml 
b/flink-connectors/flink-connector-elasticsearch6/pom.xml
index ad428def7f0..66fc32610b7 100644
--- a/flink-connectors/flink-connector-elasticsearch6/pom.xml
+++ b/flink-connectors/flink-connector-elasticsearch6/pom.xml
@@ -169,18 +169,4 @@ under the License.
 			<scope>test</scope>
 		</dependency>
 	</dependencies>
-
-	<build>
-		<plugins>
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-surefire-plugin</artifactId>
-				<configuration>
-					<!-- Enforce single fork execution due to heavy mini cluster use -->
-					<forkCount>1</forkCount>
-				</configuration>
-			</plugin>
-		</plugins>
-	</build>
-
 </project>
diff --git a/flink-connectors/flink-connector-elasticsearch7/pom.xml 
b/flink-connectors/flink-connector-elasticsearch7/pom.xml
index 67348d42566..e4c22827b2d 100644
--- a/flink-connectors/flink-connector-elasticsearch7/pom.xml
+++ b/flink-connectors/flink-connector-elasticsearch7/pom.xml
@@ -166,18 +166,4 @@ under the License.
 			<scope>test</scope>
 		</dependency>
 	</dependencies>
-
-	<build>
-		<plugins>
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-surefire-plugin</artifactId>
-				<configuration>
-					<!-- Enforce single fork execution due to heavy mini cluster use -->
-					<forkCount>1</forkCount>
-				</configuration>
-			</plugin>
-		</plugins>
-	</build>
-
 </project>
diff --git a/flink-connectors/flink-connector-hbase-1.4/pom.xml 
b/flink-connectors/flink-connector-hbase-1.4/pom.xml
index 9c0479cb661..a3d961b5774 100644
--- a/flink-connectors/flink-connector-hbase-1.4/pom.xml
+++ b/flink-connectors/flink-connector-hbase-1.4/pom.xml
@@ -37,19 +37,6 @@ under the License.
 		<hbase.version>1.4.3</hbase.version>
 	</properties>
 
-	<build>
-		<plugins>
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-surefire-plugin</artifactId>
-				<configuration>
-					<!-- Enforce single fork execution due to heavy mini cluster use -->
-					<forkCount>1</forkCount>
-				</configuration>
-			</plugin>
-		</plugins>
-	</build>
-

 

diff --git a/flink-connectors/flink-connector-hbase-2.2/pom.xml 
b/flink-connectors/flink-connector-hbase-2.2/pom.xml
index 328f68dc44b..957fb48e91f 100644
--- a/flink-connectors/flink-connector-hbase-2.2/pom.xml
+++ b/flink-connectors/flink-connector-hbase-2.2/pom.xml
@@ -38,19 +38,6 @@ under the License.
 		<guava.version>28.1-jre</guava.version>
 	</properties>
 
-	<build>
-		<plugins>
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-surefire-plugin</artifactId>
-				<configuration>
-					<!-- Enforce single fork execution due to heavy mini cluster use -->
-					<forkCount>1</forkCount>
-				</configuration>
-			</plugin>
-		</plugins>
-	</build>
-

 

diff --git a/flink-connectors/flink-connector-kafka/pom.xml 
b/flink-connectors/flink-connector-kafka/pom.xml
index 8a78cf6e8e4..03f7663d3c7 100644
--- a/flink-connectors/flink-connector-kafka/pom.xml
+++ b/flink-connectors/flink-connector-kafka/pom.xml
@@ -279,14 +279,6 @@ under the License.
 					</execution>
 				</executions>
 			</plugin>
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-surefire-plugin</artifactId>
-				<configuration>
-					<!-- Enforce single fork execution due to heavy mini cluster use -->
-					<forkCount>1</forkCount>
-				</configuration>
-			</plugin>
 		</plugins>
 	</build>
 
diff --git a/flink-connectors/flink-connector-pulsar/pom.xml 
b/flink-connectors/flink-connector-pulsar/pom.xml
index 0f1cc0ef2e0..d4600e92f94 100644
--- a/flink-connectors/flink-connector-pulsar/pom.xml
+++ 

[flink] 03/03: [FLINK-27317][build] Bump maven-source-plugin to 3.2.1

2022-04-20 Thread chesnay

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 6d265875185306f0f93a1a8ba6fa996654d893a9
Author: Chesnay Schepler 
AuthorDate: Wed Apr 20 09:57:47 2022 +0200

[FLINK-27317][build] Bump maven-source-plugin to 3.2.1
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index f314eceb093..1245dd5912b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1224,7 +1224,7 @@ under the License.
 				<plugin>
 					<groupId>org.apache.maven.plugins</groupId>
 					<artifactId>maven-source-plugin</artifactId>
-					<version>2.2.1</version>
+					<version>3.2.1</version>
 					<executions>
 						<execution>
 							<id>attach-sources</id>


[flink] 02/03: [FLINK-27317][build] Don't fork build for attaching sources

2022-04-20 Thread chesnay

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 07c50aa64965502716ca9f71116b0024f0558eb5
Author: Chesnay Schepler 
AuthorDate: Wed Apr 20 09:57:31 2022 +0200

[FLINK-27317][build] Don't fork build for attaching sources
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 80860465742..f314eceb093 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1229,7 +1229,7 @@ under the License.
 					<executions>
 						<execution>
 							<id>attach-sources</id>
 							<goals>
-								<goal>jar</goal>
+								<goal>jar-no-fork</goal>
 							</goals>
 						</execution>
 					</executions>



[flink] 01/03: [FLINK-27317][build] Use absolute paths to .scalafmt.conf

2022-04-20 Thread chesnay

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit fb82e94d60298ed9aa604ed97cd1f0933c70c51b
Author: Chesnay Schepler 
AuthorDate: Wed Apr 20 10:59:04 2022 +0200

[FLINK-27317][build] Use absolute paths to .scalafmt.conf
---
 flink-connectors/flink-hadoop-compatibility/pom.xml | 2 +-
 flink-connectors/flink-hcatalog/pom.xml | 2 +-
 flink-end-to-end-tests/flink-end-to-end-tests-scala/pom.xml | 2 +-
 flink-end-to-end-tests/flink-quickstart-test/pom.xml| 2 +-
 flink-examples/flink-examples-batch/pom.xml | 2 +-
 flink-examples/flink-examples-streaming/pom.xml | 2 +-
 flink-examples/flink-examples-table/pom.xml | 2 +-
 flink-libraries/flink-cep-scala/pom.xml | 2 +-
 flink-libraries/flink-gelly-examples/pom.xml| 2 +-
 flink-libraries/flink-gelly-scala/pom.xml   | 2 +-
 flink-scala/pom.xml | 2 +-
 flink-streaming-scala/pom.xml   | 2 +-
 flink-table/flink-table-api-scala-bridge/pom.xml| 2 +-
 flink-table/flink-table-api-scala/pom.xml   | 2 +-
 flink-table/flink-table-planner/pom.xml | 2 +-
 flink-tests/pom.xml | 2 +-
 16 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/flink-connectors/flink-hadoop-compatibility/pom.xml 
b/flink-connectors/flink-hadoop-compatibility/pom.xml
index 3b9fe8683e8..db4ac10b901 100644
--- a/flink-connectors/flink-hadoop-compatibility/pom.xml
+++ b/flink-connectors/flink-hadoop-compatibility/pom.xml
@@ -107,7 +107,7 @@ under the License.
 					<scalafmt>
 						<version>${spotless.scalafmt.version}</version>
-						<file>../../.scalafmt.conf</file>
+						<file>${project.basedir}/../../.scalafmt.conf</file>
 					</scalafmt>
 					<licenseHeader>
 						<content>${spotless.license.header}</content>
diff --git a/flink-connectors/flink-hcatalog/pom.xml 
b/flink-connectors/flink-hcatalog/pom.xml
index 103317223b7..9f45965c1ec 100644
--- a/flink-connectors/flink-hcatalog/pom.xml
+++ b/flink-connectors/flink-hcatalog/pom.xml
@@ -104,7 +104,7 @@ under the License.
 					<scalafmt>
 						<version>${spotless.scalafmt.version}</version>
-						<file>../../.scalafmt.conf</file>
+						<file>${project.basedir}/../../.scalafmt.conf</file>
 					</scalafmt>
 					<licenseHeader>
 						<content>${spotless.license.header}</content>
diff --git a/flink-end-to-end-tests/flink-end-to-end-tests-scala/pom.xml 
b/flink-end-to-end-tests/flink-end-to-end-tests-scala/pom.xml
index dbba17d0015..7f5bc1947d9 100644
--- a/flink-end-to-end-tests/flink-end-to-end-tests-scala/pom.xml
+++ b/flink-end-to-end-tests/flink-end-to-end-tests-scala/pom.xml
@@ -89,7 +89,7 @@ under the License.
 					<scalafmt>
 						<version>${spotless.scalafmt.version}</version>
-						<file>../../.scalafmt.conf</file>
+						<file>${project.basedir}/../../.scalafmt.conf</file>
 					</scalafmt>
 					<licenseHeader>
 						<content>${spotless.license.header}</content>
diff --git a/flink-end-to-end-tests/flink-quickstart-test/pom.xml 
b/flink-end-to-end-tests/flink-quickstart-test/pom.xml
index 65ed53a25a5..a213a60cbb2 100644
--- a/flink-end-to-end-tests/flink-quickstart-test/pom.xml
+++ b/flink-end-to-end-tests/flink-quickstart-test/pom.xml
@@ -70,7 +70,7 @@ under the License.
 					<scalafmt>
 						<version>${spotless.scalafmt.version}</version>
-						<file>../../.scalafmt.conf</file>
+						<file>${project.basedir}/../../.scalafmt.conf</file>
 					</scalafmt>
 					<licenseHeader>
 						<content>${spotless.license.header}</content>
diff --git a/flink-examples/flink-examples-batch/pom.xml 
b/flink-examples/flink-examples-batch/pom.xml
index 

[flink] branch master updated (d4d0e4a15fa -> 6d265875185)

2022-04-20 Thread chesnay

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from d4d0e4a15fa [FLINK-27287][tests] Use random ports
 new fb82e94d602 [FLINK-27317][build] Use absolute paths to .scalafmt.conf
 new 07c50aa6496 [FLINK-27317][build] Don't fork build for attaching sources
 new 6d265875185 [FLINK-27317][build] Bump maven-source-plugin to 3.2.1

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 flink-connectors/flink-hadoop-compatibility/pom.xml | 2 +-
 flink-connectors/flink-hcatalog/pom.xml | 2 +-
 flink-end-to-end-tests/flink-end-to-end-tests-scala/pom.xml | 2 +-
 flink-end-to-end-tests/flink-quickstart-test/pom.xml| 2 +-
 flink-examples/flink-examples-batch/pom.xml | 2 +-
 flink-examples/flink-examples-streaming/pom.xml | 2 +-
 flink-examples/flink-examples-table/pom.xml | 2 +-
 flink-libraries/flink-cep-scala/pom.xml | 2 +-
 flink-libraries/flink-gelly-examples/pom.xml| 2 +-
 flink-libraries/flink-gelly-scala/pom.xml   | 2 +-
 flink-scala/pom.xml | 2 +-
 flink-streaming-scala/pom.xml   | 2 +-
 flink-table/flink-table-api-scala-bridge/pom.xml| 2 +-
 flink-table/flink-table-api-scala/pom.xml   | 2 +-
 flink-table/flink-table-planner/pom.xml | 2 +-
 flink-tests/pom.xml | 2 +-
 pom.xml | 4 ++--
 17 files changed, 18 insertions(+), 18 deletions(-)



[flink] branch master updated (ed302be6566 -> d4d0e4a15fa)

2022-04-20 Thread chesnay

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from ed302be6566 [FLINK-27319] Duplicated '-t' option for savepoint format 
and deployment target
 add b22bd541f6f [FLINK-27287][tests] Migrate tests to MiniClusterResource
 add d4d0e4a15fa [FLINK-27287][tests] Use random ports

No new revisions were added by this update.

Summary of changes:
 .../file/sink/FileSinkCompactionSwitchITCase.java  |   5 +-
 .../flink/connector/file/sink/FileSinkITBase.java  |   6 +-
 .../file/sink/writer/FileSinkMigrationITCase.java  |   6 +-
 .../util/JobManagerWatermarkTrackerTest.java   |  49 ++---
 .../formats/avro/AvroExternalJarProgramITCase.java |   1 +
 .../minicluster/MiniClusterConfiguration.java  |  15 +++
 .../FileExecutionGraphInfoStoreTest.java   |   2 +-
 .../MemoryExecutionGraphInfoStoreITCase.java   |   1 +
 .../network/partition/FileBufferReaderITCase.java  |   3 +-
 .../runtime/minicluster/MiniClusterITCase.java |  41 
 .../CoordinatorEventsExactlyOnceITCase.java|  41 +++-
 .../flink/runtime/shuffle/ShuffleMasterTest.java   |   3 +-
 .../runtime/taskexecutor/TaskExecutorITCase.java   |  49 -
 .../OperatorEventSendingCheckpointITCase.java  |   1 +
 .../CheckpointRestoreWithUidHashITCase.java| 112 ++---
 .../DefaultSchedulerLocalRecoveryITCase.java   |   4 +-
 .../flink/test/runtime/JobGraphRunningUtil.java|   3 +-
 .../flink/test/runtime/SchedulingITCase.java   |   4 +-
 .../PipelinedRegionSchedulingITCase.java   |   3 +-
 19 files changed, 140 insertions(+), 209 deletions(-)



[flink] branch release-1.15 updated: [FLINK-27319] Duplicated '-t' option for savepoint format and deployment target

2022-04-20 Thread dwysakowicz

dwysakowicz pushed a commit to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.15 by this push:
 new bacce4e25ea [FLINK-27319] Duplicated '-t' option for savepoint format 
and deployment target
bacce4e25ea is described below

commit bacce4e25ea8293005f189591f1a7393f47110f2
Author: Dawid Wysakowicz 
AuthorDate: Wed Apr 20 11:55:12 2022 +0200

[FLINK-27319] Duplicated '-t' option for savepoint format and deployment 
target
---
 .../src/main/java/org/apache/flink/client/cli/CliFrontendParser.java| 2 +-
 .../test/java/org/apache/flink/client/cli/CliFrontendSavepointTest.java | 2 +-
 .../org/apache/flink/client/cli/CliFrontendStopWithSavepointTest.java   | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java
 
b/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java
index 8cc4ad4c61b..05ac0757c1b 100644
--- 
a/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java
+++ 
b/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java
@@ -149,7 +149,7 @@ public class CliFrontendParser {
 
 static final Option SAVEPOINT_FORMAT_OPTION =
 new Option(
-"t",
+"type",
 "type",
 true,
 "Describes the binary format in which a savepoint should 
be taken. Supported"
diff --git 
a/flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendSavepointTest.java
 
b/flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendSavepointTest.java
index da4590efc2d..2b4c6a79a11 100644
--- 
a/flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendSavepointTest.java
+++ 
b/flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendSavepointTest.java
@@ -184,7 +184,7 @@ public class CliFrontendSavepointTest extends 
CliFrontendTestBase {
 
 @Test
 public void testTriggerSavepointCustomFormatShortOption() throws Exception {
-testTriggerSavepointCustomFormat("-t", SavepointFormatType.NATIVE);
+testTriggerSavepointCustomFormat("-type", SavepointFormatType.NATIVE);
 }
 
 @Test
diff --git a/flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendStopWithSavepointTest.java b/flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendStopWithSavepointTest.java
index da4d40727d1..89716ca19f9 100644
--- a/flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendStopWithSavepointTest.java
+++ b/flink-clients/src/test/java/org/apache/flink/client/cli/CliFrontendStopWithSavepointTest.java
@@ -121,7 +121,7 @@ public class CliFrontendStopWithSavepointTest extends CliFrontendTestBase {
 
 @Test
 public void testStopWithExplicitSavepointTypeShortOption() throws Exception {
-testStopWithExplicitSavepointType("-t", SavepointFormatType.NATIVE);
+testStopWithExplicitSavepointType("-type", SavepointFormatType.NATIVE);
 }
 
 @Test

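The bug fixed above is a classic short-option collision: Flink's CLI already used `-t` for the deployment target, so also registering `-t` for the savepoint format made one meaning shadow the other. The following self-contained Java sketch is a toy registry, not Flink's actual `CliFrontendParser`; the names and messages are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

public class ShortOptionClash {

    // Toy registry keyed by short option name, mimicking how a CLI
    // parser stores its options. Not Flink's real parser.
    static final Map<String, String> SHORT_OPTS = new HashMap<>();

    static void register(String shortOpt, String meaning) {
        String previous = SHORT_OPTS.put(shortOpt, meaning);
        if (previous != null) {
            // The earlier registration is silently shadowed.
            System.out.println("'-" + shortOpt + "' was already bound to: " + previous);
        }
    }

    public static void main(String[] args) {
        register("t", "deployment target");
        register("t", "savepoint format"); // clash: overwrites the first binding
        System.out.println("'-t' now means: " + SHORT_OPTS.get("t"));
    }
}
```

The one-line change to `SAVEPOINT_FORMAT_OPTION` resolves this by giving the savepoint format the option name `type` and leaving `-t` exclusively to the deployment target, which is why the tests now pass `-type` instead of `-t`.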


[flink] branch master updated (13c45ec3866 -> ed302be6566)

2022-04-20 Thread dwysakowicz
This is an automated email from the ASF dual-hosted git repository.

dwysakowicz pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 13c45ec3866 [FLINK-27298][docs] Change the used table name in the SQL Client documentation to the correct name. This closes #19510
 add ed302be6566 [FLINK-27319] Duplicated '-t' option for savepoint format and deployment target

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/flink/client/cli/CliFrontendParser.java| 2 +-
 .../test/java/org/apache/flink/client/cli/CliFrontendSavepointTest.java | 2 +-
 .../org/apache/flink/client/cli/CliFrontendStopWithSavepointTest.java   | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)



[flink] branch master updated (361678a1f03 -> 13c45ec3866)

2022-04-20 Thread martijnvisser
This is an automated email from the ASF dual-hosted git repository.

martijnvisser pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 361678a1f03 [FLINK-27163][docs] Fix broken table layout for metrics due to incorrectly defined rowspan. This fixes #19420
 add 13c45ec3866 [FLINK-27298][docs] Change the used table name in the SQL Client documentation to the correct name. This closes #19510

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/dev/table/sqlClient.md | 2 +-
 docs/content/docs/dev/table/sqlClient.md| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)



[flink] branch master updated (9964c47082e -> 361678a1f03)

2022-04-20 Thread martijnvisser
This is an automated email from the ASF dual-hosted git repository.

martijnvisser pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 9964c47082e [hotfix][docs-zh] Improving Chinese translation for Table concepts overview documentation.
 add 361678a1f03 [FLINK-27163][docs] Fix broken table layout for metrics due to incorrectly defined rowspan. This fixes #19420

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/ops/metrics.md | 2 +-
 docs/content/docs/ops/metrics.md| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)



[flink] branch master updated: [hotfix][docs-zh] Improving Chinese translation for Table concepts overview documentation.

2022-04-20 Thread martijnvisser
This is an automated email from the ASF dual-hosted git repository.

martijnvisser pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 9964c47082e [hotfix][docs-zh] Improving Chinese translation for Table concepts overview documentation.
9964c47082e is described below

commit 9964c47082e32693aecc7947978bf94849ada8c4
Author: snailHumming 
AuthorDate: Wed Apr 20 10:16:19 2022 +0800

[hotfix][docs-zh] Improving Chinese translation for Table concepts overview documentation.
---
 .../content.zh/docs/dev/table/concepts/overview.md | 27 +++---
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/concepts/overview.md b/docs/content.zh/docs/dev/table/concepts/overview.md
index b9bd3be36d4..ed6561c8638 100644
--- a/docs/content.zh/docs/dev/table/concepts/overview.md
+++ b/docs/content.zh/docs/dev/table/concepts/overview.md
@@ -34,29 +34,33 @@ Flink 的 [Table API]({{< ref "docs/dev/table/tableApi" >}}) 和 [SQL]({{< ref "
 
 下面这些页面包含了概念、实际的限制,以及流式数据处理中的一些特定的配置。
 
+
+
 状态管理
 
-流模式下运行的表程序利用了Flink作为有状态流处理器的所有能力。
+流模式下运行的表程序利用了 Flink 作为有状态流处理器的所有能力。
 
 事实上,一个表程序(Table program)可以配置一个 [state backend]({{< ref 
"docs/ops/state/state_backends" >}})
 和多个不同的 [checkpoint 选项]({{< ref 
"docs/dev/datastream/fault-tolerance/checkpointing" >}})
 以处理对不同状态大小和容错需求。这可以对正在运行的 Table API & SQL 管道(pipeline)生成 
savepoint,并在这之后用其恢复应用程序的状态。
 
+
+
 ### 状态使用
 
-由于 Table API & SQL 程序是声明式的,管道内的状态会在哪以及如何被使用并不显然。 Planner 会确认是否需要状态来得到正确的计算结果,
-管道会被现有优化规则集优化成尽可能少地索要状态。
+由于 Table API & SQL 程序是声明式的,管道内的状态会在哪以及如何被使用并不明确。 Planner 会确认是否需要状态来得到正确的计算结果,
+管道会被现有优化规则集优化成尽可能少地使用状态。
 
 {{< hint info >}}
-从概念上讲, 源表从来不会在状态中被完全保存。 实现者在处理逻辑表(即[动态表]({{< ref 
"docs/dev/table/concepts/dynamic_tables" >}}))时,
+从概念上讲, 源表从来不会在状态中被完全保存。 实现者处理的是逻辑表(即[动态表]({{< ref 
"docs/dev/table/concepts/dynamic_tables" >}}))。
 它们的状态取决于用到的操作。
 {{< /hint >}}
 
-形如 `SELECT ... FROM ... WHERE` 这种只包含字段映射或过滤器的查询的查询语句通常是无状态的管道。 然而诸如 join 、
-聚合或去重操作需要在Flink抽象的容错存储内保持中间结果。
+形如 `SELECT ... FROM ... WHERE` 这种只包含字段映射或过滤器的查询的查询语句通常是无状态的管道。 然而诸如 join、
+聚合或去重操作需要在 Flink 抽象的容错存储内保存中间结果。
 
 {{< hint info >}}
-请参考独立的算子文档来获取更多关于状态需求量和限制潜在状态大小增长的信息。
+请参考独立的算子文档来获取更多关于状态需求量和限制潜在增长状态大小的信息。
 {{< /hint >}}
 
 例如对两个表进行 join 操作的普通 SQL 需要算子保存两个表的全部输入。基于正确的 SQL 语义,运行时假设两表会在任意时间点进行匹配。
@@ -73,6 +77,8 @@ SELECT sessionId, COUNT(*) FROM clicks GROUP BY sessionId;
 且 `sessionId` 的值只活跃到会话结束(即在有限的时间周期内)。然而连续查询无法得知sessionId的这个性质,
 并且预期每个 `sessionId` 值会在任何时间点上出现。这维护了每个可见的 `sessionId` 值。因此总状态量会随着 `sessionId` 
的发现不断地增长。
 
+
+
  空闲状态维持时间
 
 *空间状态位置时间*参数 [`table.exec.state.ttl`]({{< ref "docs/dev/table/config" 
>}}#table-exec-state-ttl) 
@@ -81,6 +87,8 @@ SELECT sessionId, COUNT(*) FROM clicks GROUP BY sessionId;
 通过移除状态的键,连续查询会完全忘记它曾经见过这个键。如果一个状态带有曾被移除状态的键被处理了,这条记录将被认为是
 对应键的第一条记录。上述例子中意味着 `sessionId` 会再次从 `0` 开始计数。
 
+
+
 ### 状态化更新与演化
 
 表程序在流模式下执行将被视为*标准查询*,这意味着它们被定义一次后将被一直视为静态的端到端 (end-to-end) 管道
@@ -92,7 +100,7 @@ SELECT sessionId, COUNT(*) FROM clicks GROUP BY sessionId;
 算子状态的列布局差异。
 
 查询实现者需要确保改变在优化计划前后是兼容的,在 SQL 中使用 `EXPLAIN` 或在 Table API 中使用 `table.explain()` 
-可[获取详情]({{< ref "docs/dev/table/common" >}}#explaining-a-table).
+可[获取详情]({{< ref "docs/dev/table/common" >}}#explaining-a-table)。
 
 由于新的优化器规则正不断地被添加,算子变得更加高效和专用,升级到更新的Flink版本可能造成不兼容的计划。
 
@@ -108,6 +116,9 @@ SELECT sessionId, COUNT(*) FROM clicks GROUP BY sessionId;
 由于这两个缺点(即修改查询语句和修改Flink版本),我们推荐实现调查升级后的表程序是否可以在切换到实时数据前,被历史数据"暖机"
 (即被初始化)。Flink社区正致力于 [混合源]({{< ref "docs/connectors/datastream/hybridsource" 
>}}) 来让切换变得尽可能方便。
 
+
+
+
 接下来?
 -
 

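The idle-state retention semantics discussed in the documentation above (`table.exec.state.ttl`) can be modeled in a few lines of plain Java: once a key's state has been idle longer than the TTL, the key is forgotten, and the next record for that key is treated as its first. This is a toy illustration of the contract, not Flink's state backend; the TTL value and key name are made up:

```java
import java.util.HashMap;
import java.util.Map;

public class IdleStateTtlDemo {

    static final long TTL_MS = 1_000; // made-up retention interval

    // key -> {count, lastAccessTimestamp}
    static final Map<String, long[]> STATE = new HashMap<>();

    /** Counts records per key; keys idle longer than TTL_MS are forgotten entirely. */
    static long countFor(String key, long nowMs) {
        long[] entry = STATE.get(key);
        if (entry == null || nowMs - entry[1] > TTL_MS) {
            entry = new long[] {0, nowMs}; // never seen, or state expired
        }
        entry[0]++;
        entry[1] = nowMs;
        STATE.put(key, entry);
        return entry[0];
    }

    public static void main(String[] args) {
        System.out.println(countFor("sessionA", 0));     // 1
        System.out.println(countFor("sessionA", 500));   // 2: state still live
        System.out.println(countFor("sessionA", 5_000)); // 1: state expired, counting restarts
    }
}
```

This mirrors the warning in the docs: after the TTL removes a key's state, a later record for that key counts as the first record again, so a `COUNT(*) ... GROUP BY sessionId` query restarts counting for that `sessionId`.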


[flink] branch master updated: [FLINK-25109][table-sql-client] Update JLine3 to 3.21.0. This closes #17961

2022-04-20 Thread martijnvisser
This is an automated email from the ASF dual-hosted git repository.

martijnvisser pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 0fed57b71ea [FLINK-25109][table-sql-client] Update JLine3 to 3.21.0. This closes #17961
0fed57b71ea is described below

commit 0fed57b71ea73348fa59479e2d5a1ce4e821ae5a
Author: Sergey 
AuthorDate: Tue Nov 30 12:03:24 2021 +0100

[FLINK-25109][table-sql-client] Update JLine3 to 3.21.0. This closes #17961
---
 flink-table/flink-sql-client/pom.xml| 4 ++--
 flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/flink-table/flink-sql-client/pom.xml b/flink-table/flink-sql-client/pom.xml
index bf93bc8b6af..e4b11598aa9 100644
--- a/flink-table/flink-sql-client/pom.xml
+++ b/flink-table/flink-sql-client/pom.xml
@@ -56,13 +56,13 @@ under the License.

		<dependency>
			<groupId>org.jline</groupId>
			<artifactId>jline-terminal</artifactId>
-			<version>3.9.0</version>
+			<version>3.21.0</version>
		</dependency>

		<dependency>
			<groupId>org.jline</groupId>
			<artifactId>jline-reader</artifactId>
-			<version>3.9.0</version>
+			<version>3.21.0</version>
		</dependency>

diff --git a/flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE b/flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE
index e9302d88c35..20dc0d46bad 100644
--- a/flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE
+++ b/flink-table/flink-sql-client/src/main/resources/META-INF/NOTICE
@@ -7,5 +7,5 @@ The Apache Software Foundation (http://www.apache.org/).
 This project bundles the following dependencies under the BSD license.
 See bundled license files for details.
 
-- org.jline:jline-terminal:3.9.0
-- org.jline:jline-reader:3.9.0
+- org.jline:jline-terminal:3.21.0
+- org.jline:jline-reader:3.21.0



[flink-training] branch master updated: fixup! [FLINK-25313] Enable flink runtime web-ui (#45)

2022-04-20 Thread nkruber
This is an automated email from the ASF dual-hosted git repository.

nkruber pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-training.git


The following commit(s) were added to refs/heads/master by this push:
 new 28aa5d8  fixup! [FLINK-25313] Enable flink runtime web-ui (#45)
28aa5d8 is described below

commit 28aa5d8114f7713097379ed02eade15dbe977e8e
Author: Nico Kruber 
AuthorDate: Wed Apr 20 12:51:17 2022 +0200

fixup! [FLINK-25313] Enable flink runtime web-ui (#45)
---
 build.gradle | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/build.gradle b/build.gradle
index dbce4f3..a9e4756 100644
--- a/build.gradle
+++ b/build.gradle
@@ -87,7 +87,7 @@ subprojects {
 shadow "org.apache.flink:flink-java:${flinkVersion}"
 shadow "org.apache.flink:flink-streaming-java_${scalaBinaryVersion}:${flinkVersion}"
 shadow "org.apache.flink:flink-streaming-scala_${scalaBinaryVersion}:${flinkVersion}"
-
+
 // allows using Flink's web UI when running in the IDE:
 shadow "org.apache.flink:flink-runtime-web_${scalaBinaryVersion}:${flinkVersion}"
 



[flink-training] branch master updated: [FLINK-25313] Enable flink runtime web-ui (#45)

2022-04-20 Thread nkruber
This is an automated email from the ASF dual-hosted git repository.

nkruber pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-training.git


The following commit(s) were added to refs/heads/master by this push:
 new 9c25c54  [FLINK-25313] Enable flink runtime web-ui (#45)
9c25c54 is described below

commit 9c25c54a73578b24b15e9c52e28740460a687f93
Author: Junfan Zhang 
AuthorDate: Wed Apr 20 18:47:07 2022 +0800

[FLINK-25313] Enable flink runtime web-ui (#45)

* [FLINK-25313] Enable flink runtime web-ui when running in the IDE

With this, when running Flink applications locally, you can browse Flink's web UI to see more details.

Co-authored-by: Junfan Zhang 
---
 build.gradle | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/build.gradle b/build.gradle
index 9395b08..dbce4f3 100644
--- a/build.gradle
+++ b/build.gradle
@@ -87,6 +87,9 @@ subprojects {
 shadow "org.apache.flink:flink-java:${flinkVersion}"
 shadow "org.apache.flink:flink-streaming-java_${scalaBinaryVersion}:${flinkVersion}"
 shadow "org.apache.flink:flink-streaming-scala_${scalaBinaryVersion}:${flinkVersion}"
+
+// allows using Flink's web UI when running in the IDE:
+shadow "org.apache.flink:flink-runtime-web_${scalaBinaryVersion}:${flinkVersion}"
 
 if (project != project(":common")) {
 implementation project(path: ':common')



[flink] branch master updated (1720dd668dd -> 288f766b81a)

2022-04-20 Thread fpaul
This is an automated email from the ASF dual-hosted git repository.

fpaul pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 1720dd668dd [FLINK-27267][contrib] Migrate tests to JUnit5
 add 288f766b81a [hotfix][docs] Misprint in types.md

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/dev/table/types.md | 2 +-
 docs/content/docs/dev/table/types.md| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)



[flink] branch master updated (8a46a36b9c3 -> 1720dd668dd)

2022-04-20 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 8a46a36b9c3 [FLINK-27263][table] Rename the metadata column to the user specified name in DDL
 add 1720dd668dd [FLINK-27267][contrib] Migrate tests to JUnit5

No new revisions were added by this update.

Summary of changes:
 .../wikiedits/WikipediaEditsSourceTest.java| 30 +++---
 .../org.junit.jupiter.api.extension.Extension  |  0
 2 files changed, 15 insertions(+), 15 deletions(-)
 copy {flink-container => flink-contrib/flink-connector-wikiedits}/src/test/resources/META-INF/services/org.junit.jupiter.api.extension.Extension (100%)



[flink-kubernetes-operator] branch main updated: [FLINK-27023] Unify flink and operator configuration

2022-04-20 Thread gyfora
This is an automated email from the ASF dual-hosted git repository.

gyfora pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 805fef5  [FLINK-27023] Unify flink and operator configuration
805fef5 is described below

commit 805fef5849479b2777ad09258e42a474a32ce058
Author: Gyula Fora 
AuthorDate: Tue Apr 19 12:47:21 2022 +0200

[FLINK-27023] Unify flink and operator configuration
---
 docs/content/docs/development/guide.md | 12 ++--
 docs/content/docs/operations/configuration.md  | 42 +++---
 docs/content/docs/operations/helm.md   |  4 +-
 docs/content/docs/operations/metrics-logging.md| 37 ++--
 examples/kustomize/values.yaml |  4 +-
 .../flink/kubernetes/operator/FlinkOperator.java   | 34 ---
 .../kubernetes/operator/config/DefaultConfig.java  | 41 --
 .../config/FlinkOperatorConfiguration.java | 23 +---
 ...s.java => KubernetesOperatorConfigOptions.java} | 18 +++---
 .../operator/metrics/OperatorMetricUtils.java  | 40 +++--
 .../operator/reconciler/ReconciliationUtils.java   |  6 +-
 .../flink/kubernetes/operator/utils/EnvUtils.java  |  2 -
 .../kubernetes/operator/utils/FlinkUtils.java  | 16 --
 .../operator/validation/DefaultValidator.java  |  4 +-
 .../kubernetes/operator/FlinkOperatorTest.java | 12 ++--
 .../flink/kubernetes/operator/TestUtils.java   | 13 ++---
 .../operator/controller/RollbackTest.java  |  8 ++-
 .../operator/metrics/OperatorMetricUtilsTest.java  | 48 
 .../sessionjob/SessionJobObserverTest.java | 15 ++---
 .../{flink-default-config => }/flink-conf.yaml | 14 +
 .../conf/flink-operator-config/flink-conf.yaml | 29 --
 .../log4j-console.properties   |  0
 ...log4j2.properties => log4j-operator.properties} |  0
 .../templates/flink-operator.yaml  | 66 +++---
 helm/flink-kubernetes-operator/values.yaml | 28 -
 25 files changed, 253 insertions(+), 263 deletions(-)

diff --git a/docs/content/docs/development/guide.md b/docs/content/docs/development/guide.md
index d819425..b5b579d 100644
--- a/docs/content/docs/development/guide.md
+++ b/docs/content/docs/development/guide.md
@@ -31,7 +31,7 @@ We gathered a set of best practices here to aid development.
 ## Local environment setup
 
 We recommend you install [Docker Desktop](https://www.docker.com/products/docker-desktop), [minikube](https://minikube.sigs.k8s.io/docs/start/)
-and [helm](https://helm.sh/docs/intro/quickstart/) on your local machine. For the setup please refer to our 
+and [helm](https://helm.sh/docs/intro/quickstart/) on your local machine. For the setup please refer to our
 [quickstart]({{< ref "docs/try-flink-kubernetes-operator/quick-start" >}}#prerequisites).
 
 ### Building docker images
@@ -58,7 +58,7 @@ When you want to reset your environment to the defaults you can do the following
 eval $(minikube docker-env --unset)
 ```
 
-The most useful insight when it comes to minikube that it is just a docker container on your local machine and you can 
+The most useful insight when it comes to minikube that it is just a docker container on your local machine and you can
 ssh to it with the following command in case you needed to hack something there (like adding a hostpath mount or modifying docker images).
 
 ```bash
@@ -101,7 +101,7 @@ helm uninstall flink-kubernetes-operator
 ### Generating and Upgrading the CRD
 
 By default, the CRD is generated by the [Fabric8 CRDGenerator](https://github.com/fabric8io/kubernetes-client/blob/master/doc/CRD-generator.md), when building from source.
-When installing flink-kubernetes-operator for the first time, the CRD will be applied to the kubernetes cluster automatically. But it will not be removed or upgraded when re-installing the flink-kubernetes-operator, as described in the relevant helm [documentation](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/). 
+When installing flink-kubernetes-operator for the first time, the CRD will be applied to the kubernetes cluster automatically. But it will not be removed or upgraded when re-installing the flink-kubernetes-operator, as described in the relevant helm [documentation](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/).
 So if the CRD is changed, you have to delete the CRD resource manually, and re-install the flink-kubernetes-operator.
 
 ```bash
@@ -110,10 +110,10 @@ kubectl delete crd flinkdeployments.flink.apache.org
 
 ### Mounts
 
-The operator supports to specify the volume mounts. The default mounts to hostPath can be activated by the following command. You can change the default mounts in the `helm/flink-operator/values.yaml`
+The operator supports to specify the volume 

[flink] branch release-1.14 updated: [FLINK-25694][Filesystem][S3] Upgrade Presto to resolve GSON/Alluxio Vulnerability. This closes #19478

2022-04-20 Thread martijnvisser
This is an automated email from the ASF dual-hosted git repository.

martijnvisser pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.14 by this push:
 new 9a5d8b090f7 [FLINK-25694][Filesystem][S3] Upgrade Presto to resolve GSON/Alluxio Vulnerability. This closes #19478
9a5d8b090f7 is described below

commit 9a5d8b090f75262f216acbf89f240151327f9ecb
Author: David N Perkins 
AuthorDate: Thu Apr 14 15:15:42 2022 -0400

[FLINK-25694][Filesystem][S3] Upgrade Presto to resolve GSON/Alluxio Vulnerability. This closes #19478

Signed-off-by: David N Perkins 
---
 flink-filesystems/flink-s3-fs-presto/pom.xml  |  2 +-
 .../flink-s3-fs-presto/src/main/resources/META-INF/NOTICE | 11 ++-
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/flink-filesystems/flink-s3-fs-presto/pom.xml b/flink-filesystems/flink-s3-fs-presto/pom.xml
index e8d4fef8175..8e1fafe0773 100644
--- a/flink-filesystems/flink-s3-fs-presto/pom.xml
+++ b/flink-filesystems/flink-s3-fs-presto/pom.xml
@@ -33,7 +33,7 @@ under the License.
jar
 

-   0.257
+   0.272

 

diff --git a/flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE b/flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE
index 124b863dcb2..d8464cfbedf 100644
--- a/flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE
+++ b/flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE
@@ -17,10 +17,10 @@ This project bundles the following dependencies under the Apache Software Licens
 - com.amazonaws:aws-java-sdk-s3:1.11.951
 - com.amazonaws:aws-java-sdk-sts:1.11.951
 - com.amazonaws:jmespath-java:1.11.951
-- com.facebook.presto:presto-common:0.257
-- com.facebook.presto:presto-hive:0.257
-- com.facebook.presto:presto-hive-common:0.257
-- com.facebook.presto:presto-hive-metastore:0.257
+- com.facebook.presto:presto-common:0.272
+- com.facebook.presto:presto-hive:0.272
+- com.facebook.presto:presto-hive-common:0.272
+- com.facebook.presto:presto-hive-metastore:0.272
 - com.facebook.presto.hadoop:hadoop-apache2:2.7.3-1
 - com.fasterxml.jackson.core:jackson-annotations:2.13.2
 - com.fasterxml.jackson.core:jackson-core:2.13.2
@@ -35,7 +35,7 @@ This project bundles the following dependencies under the Apache Software Licens
 - io.airlift:units:1.3
 - io.airlift:slice:0.38
 - joda-time:joda-time:2.5
-- org.alluxio:alluxio-shaded-client:2.5.0-3
+- org.alluxio:alluxio-shaded-client:2.7.3
 - org.apache.commons:commons-configuration2:2.1.1
 - org.apache.commons:commons-lang3:3.3.2
 - org.apache.commons:commons-text:1.4
@@ -46,6 +46,7 @@ This project bundles the following dependencies under the Apache Software Licens
 - org.apache.htrace:htrace-core4:4.1.0-incubating
 - org.apache.httpcomponents:httpclient:4.5.13
 - org.apache.httpcomponents:httpcore:4.4.14
+- org.apache.hudi:hudi-presto-bundle:0.10.1
 - org.weakref:jmxutils:1.19
 - software.amazon.ion:ion-java:1.0.2
 



[flink-kubernetes-operator] branch main updated (dc1cca8 -> 6e1f0e8)

2022-04-20 Thread wangyang0918
This is an automated email from the ASF dual-hosted git repository.

wangyang0918 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


from dc1cca8  [FLINK-27289] Avoid calling waitForClusterShutdown twice when stopping session cluster with deleting HA data
 new e0e34cb  [FLINK-27310] Fix FlinkOperatorITCase
 new 6e1f0e8  [FLINK-27310] Improve github CI for integration tests

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .github/workflows/ci.yml  | 4 ++--
 .gitignore| 1 +
 .../org/apache/flink/kubernetes/operator/FlinkOperatorITCase.java | 1 +
 flink-kubernetes-operator/src/test/resources/log4j2-test.properties   | 2 +-
 pom.xml   | 1 +
 5 files changed, 6 insertions(+), 3 deletions(-)



[flink-kubernetes-operator] 02/02: [FLINK-27310] Improve github CI for integration tests

2022-04-20 Thread wangyang0918
This is an automated email from the ASF dual-hosted git repository.

wangyang0918 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git

commit 6e1f0e8c28986b515f7becb9968b9af25c8f4b6a
Author: bgeng777 
AuthorDate: Tue Apr 19 22:16:07 2022 +0800

[FLINK-27310] Improve github CI for integration tests
---
 .github/workflows/ci.yml | 4 ++--
 .gitignore   | 1 +
 pom.xml  | 1 +
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index e9acdd3..388d04d 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -73,12 +73,12 @@ jobs:
   - name: Tests in flink-kubernetes-operator
 run: |
   cd flink-kubernetes-operator
-  mvn integration-test -Dit.skip=false
+  mvn verify -Dit.skip=false
   cd ..
   - name: Tests in flink-kubernetes-webhook
 run: |
   cd flink-kubernetes-webhook
-  mvn integration-test -Dit.skip=false
+  mvn verify -Dit.skip=false
   cd ..
   - name: Stop the operator
 run: |
diff --git a/.gitignore b/.gitignore
index 5e96d2c..bb85f4b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -35,3 +35,4 @@ buildNumber.properties
 
 .idea
 *.iml
+*.DS_Store
diff --git a/pom.xml b/pom.xml
index 55c0808..380e0ee 100644
--- a/pom.xml
+++ b/pom.xml
@@ -137,6 +137,7 @@ under the License.
 
 
 integration-test
+verify
 
 
 


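The change from `mvn integration-test` to `mvn verify` is not cosmetic. The Maven Failsafe plugin deliberately does not fail the build during the `integration-test` phase, so that `post-integration-test` teardown can still run; recorded failures are only checked by the `verify` goal. Stopping the build at `integration-test` can therefore report success even when integration tests failed. A typical binding looks like the following sketch (the actual `executions` block in this repository's `pom.xml` may differ; only the added `verify` goal is taken from the commit above):

```
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- runs the ITs and records results without failing the build -->
        <goal>integration-test</goal>
        <!-- reads the recorded results and fails the build on IT failures -->
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```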

[flink-kubernetes-operator] 01/02: [FLINK-27310] Fix FlinkOperatorITCase

2022-04-20 Thread wangyang0918
This is an automated email from the ASF dual-hosted git repository.

wangyang0918 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git

commit e0e34cb481af597c2126760a0f4c81bacb7ea284
Author: bgeng777 
AuthorDate: Tue Apr 19 21:14:12 2022 +0800

[FLINK-27310] Fix FlinkOperatorITCase
---
 .../java/org/apache/flink/kubernetes/operator/FlinkOperatorITCase.java  | 1 +
 flink-kubernetes-operator/src/test/resources/log4j2-test.properties | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/FlinkOperatorITCase.java b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/FlinkOperatorITCase.java
index fa0e13d..3298da6 100644
--- a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/FlinkOperatorITCase.java
+++ b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/FlinkOperatorITCase.java
@@ -114,6 +114,7 @@ public class FlinkOperatorITCase {
 resource.setCpu(1);
 JobManagerSpec jm = new JobManagerSpec();
 jm.setResource(resource);
+jm.setReplicas(1);
 spec.setJobManager(jm);
 TaskManagerSpec tm = new TaskManagerSpec();
 tm.setResource(resource);
diff --git a/flink-kubernetes-operator/src/test/resources/log4j2-test.properties b/flink-kubernetes-operator/src/test/resources/log4j2-test.properties
index 3ab9c68..d6f4c40 100644
--- a/flink-kubernetes-operator/src/test/resources/log4j2-test.properties
+++ b/flink-kubernetes-operator/src/test/resources/log4j2-test.properties
@@ -34,7 +34,7 @@
 # limitations under the License.
 

 
-rootLogger.level = OFF
+rootLogger.level = INFO
 rootLogger.appenderRef.console.ref = ConsoleAppender
 
 # Log all infos to the console



[flink-kubernetes-operator] branch main updated: [FLINK-27289] Avoid calling waitForClusterShutdown twice when stopping session cluster with deleting HA data

2022-04-20 Thread wangyang0918
This is an automated email from the ASF dual-hosted git repository.

wangyang0918 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new dc1cca8  [FLINK-27289] Avoid calling waitForClusterShutdown twice when stopping session cluster with deleting HA data
dc1cca8 is described below

commit dc1cca82c31feb97ea960a919b90d711f2a1de92
Author: lz <971066...@qq.com>
AuthorDate: Mon Apr 18 18:11:30 2022 +0800

[FLINK-27289] Avoid calling waitForClusterShutdown twice when stopping session cluster with deleting HA data

This closes #171.
---
 .../kubernetes/operator/reconciler/deployment/SessionReconciler.java  | 4 
 .../org/apache/flink/kubernetes/operator/service/FlinkService.java| 1 -
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/SessionReconciler.java b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/SessionReconciler.java
index b904a74..4f83bc7 100644
--- a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/SessionReconciler.java
+++ b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/SessionReconciler.java
@@ -94,6 +94,10 @@ public class SessionReconciler extends AbstractDeploymentReconciler {
 effectiveConfig,
 false,
 operatorConfiguration.getFlinkShutdownClusterTimeout().toSeconds());
+FlinkUtils.waitForClusterShutdown(
+kubernetesClient,
+effectiveConfig,
+operatorConfiguration.getFlinkShutdownClusterTimeout().toSeconds());
 flinkService.submitSessionCluster(effectiveConfig);
 status.setJobManagerDeploymentStatus(JobManagerDeploymentStatus.DEPLOYING);
 IngressUtils.updateIngressRules(objectMeta, deploySpec, effectiveConfig, kubernetesClient);
diff --git a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/FlinkService.java b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/FlinkService.java
index dfd5084..48bf185 100644
--- a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/FlinkService.java
+++ b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/FlinkService.java
@@ -404,7 +404,6 @@ public class FlinkService {
 public void stopSessionCluster(
 ObjectMeta objectMeta, Configuration conf, boolean deleteHaData, long shutdownTimeout) {
 FlinkUtils.deleteCluster(objectMeta, kubernetesClient, deleteHaData, shutdownTimeout);
-FlinkUtils.waitForClusterShutdown(kubernetesClient, conf, shutdownTimeout);
 }
 
 public void triggerSavepoint(



[flink-table-store] branch release-0.1 updated: [FLINK-27283] Add end to end tests for table store

2022-04-20 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.1
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.1 by this push:
 new e54e4bc  [FLINK-27283] Add end to end tests for table store
e54e4bc is described below

commit e54e4bc310cbcf72bbec564d3e51556c11f69605
Author: tsreaper 
AuthorDate: Wed Apr 20 15:17:22 2022 +0800

[FLINK-27283] Add end to end tests for table store

This closes #95
---
 .../workflows/{java8-build.yml => e2e-tests.yml}   |   7 +-
 .github/workflows/java8-build.yml  |   3 +-
 flink-table-store-e2e-tests/pom.xml| 123 
 .../flink/table/store/tests/E2eTestBase.java   | 313 +
 .../table/store/tests/FileStoreBatchE2eTest.java   | 171 +++
 .../table/store/tests/FileStoreStreamE2eTest.java  |  99 +++
 .../flink/table/store/tests/LogStoreE2eTest.java   | 159 +++
 .../table/store/tests/utils/ParameterProperty.java |  43 +++
 .../flink/table/store/tests/utils/TestUtils.java   |  82 ++
 .../src/test/resources-filtered/project.properties |  19 ++
 .../src/test/resources/log4j2-test.properties  |  28 ++
 pom.xml|   1 +
 12 files changed, 1043 insertions(+), 5 deletions(-)

diff --git a/.github/workflows/java8-build.yml b/.github/workflows/e2e-tests.yml
similarity index 66%
copy from .github/workflows/java8-build.yml
copy to .github/workflows/e2e-tests.yml
index d68d465..c7cd479 100644
--- a/.github/workflows/java8-build.yml
+++ b/.github/workflows/e2e-tests.yml
@@ -1,4 +1,4 @@
-name: Java 8 Build
+name: End to End Tests
 
 on: [push, pull_request]
 
@@ -14,5 +14,6 @@ jobs:
 with:
   java-version: 1.8
   - name: Build
-run: mvn clean install
-
+run: mvn clean install -DskipTests
+  - name: Test
+run: mvn test -pl flink-table-store-e2e-tests
diff --git a/.github/workflows/java8-build.yml b/.github/workflows/java8-build.yml
index d68d465..26977d5 100644
--- a/.github/workflows/java8-build.yml
+++ b/.github/workflows/java8-build.yml
@@ -14,5 +14,4 @@ jobs:
 with:
   java-version: 1.8
   - name: Build
-run: mvn clean install
-
+run: mvn clean install -pl '!flink-table-store-e2e-tests'
diff --git a/flink-table-store-e2e-tests/pom.xml b/flink-table-store-e2e-tests/pom.xml
new file mode 100644
index 000..763442c
--- /dev/null
+++ b/flink-table-store-e2e-tests/pom.xml
@@ -0,0 +1,123 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <artifactId>flink-table-store-parent</artifactId>
+        <groupId>org.apache.flink</groupId>
+        <version>0.1-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>flink-table-store-e2e-tests</artifactId>
+    <name>Flink Table Store : End to End Tests</name>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-table-store-dist</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-core</artifactId>
+            <version>${flink.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-test-utils-junit</artifactId>
+            <version>${flink.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.testcontainers</groupId>
+            <artifactId>testcontainers</artifactId>
+            <version>${testcontainers.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.testcontainers</groupId>
+            <artifactId>kafka</artifactId>
+            <version>${testcontainers.version}</version>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-dependency-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <id>copy-jars</id>
+                        <phase>process-resources</phase>
+                        <goals>
+                            <goal>copy</goal>
+                        </goals>
+                    </execution>
+                </executions>
+                <configuration>
+                    <artifactItems>
+                        <artifactItem>
+                            <groupId>org.apache.flink</groupId>
+                            <artifactId>flink-table-store-dist</artifactId>
+                            <version>${project.version}</version>
+                            <destFileName>flink-table-store.jar</destFileName>
+                            <type>jar</type>
+                            <outputDirectory>${project.build.directory}/dependencies</outputDirectory>
+                        </artifactItem>
+                        <artifactItem>
+                            <groupId>org.apache.flink</groupId>
+                            <artifactId>flink-shaded-hadoop-2-uber</artifactId>
+                            <version>2.8.3-10.0</version>
+                            <destFileName>bundled-hadoop.jar</destFileName>
+                            <type>jar</type>
+                            <outputDirectory>${project.build.directory}/dependencies</outputDirectory>
+                        </artifactItem>
+                    </artifactItems>
+                </configuration>
+            </plugin>
+        </plugins>
+
+        <testResources>
+            <testResource>
+                <directory>src/test/resources</directory>
+            </testResource>

[flink] branch master updated: [FLINK-27263][table] Rename the metadata column to the user specified name in DDL

2022-04-20 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 8a46a36b9c3 [FLINK-27263][table] Rename the metadata column to the user specified name in DDL
8a46a36b9c3 is described below

commit 8a46a36b9c3eaf88da7e139e52e5c9d987f98a3d
Author: Shengkai <1059623...@qq.com>
AuthorDate: Fri Apr 15 16:58:57 2022 +0800

[FLINK-27263][table] Rename the metadata column to the user specified name in DDL

This closes #19521.
---
 .../flink/table/catalog/DefaultSchemaResolver.java | 28 
 .../flink/table/catalog/SchemaResolutionTest.java  | 20 +
 .../sink/abilities/SupportsWritingMetadata.java| 16 ---
 .../source/abilities/SupportsReadingMetadata.java  | 11 +++--
 .../table/planner/connectors/DynamicSinkUtils.java | 47 
 .../planner/connectors/DynamicSourceUtils.java | 47 ++--
 .../PushProjectIntoTableSourceScanRule.java| 30 ++---
 .../PushProjectIntoTableSourceScanRuleTest.java| 14 --
 .../file/table/FileSystemTableSourceTest.xml   |  4 +-
 .../planner/plan/batch/sql/TableSourceTest.xml |  4 +-
 .../testWritingMetadata.out|  2 +-
 .../testReadingMetadata.out| 10 ++---
 .../PushProjectIntoTableSourceScanRuleTest.xml | 17 
 .../PushWatermarkIntoTableSourceScanRuleTest.xml   |  4 +-
 .../PushLocalAggIntoTableSourceScanRuleTest.xml|  2 +-
 .../plan/stream/sql/SourceWatermarkTest.xml|  4 +-
 .../planner/plan/stream/sql/TableScanTest.xml  | 12 ++---
 .../planner/plan/stream/sql/TableSinkTest.xml  | 40 -
 .../planner/plan/stream/sql/TableSourceTest.xml|  4 +-
 .../planner/plan/stream/sql/TableScanTest.scala| 51 ++
 .../runtime/stream/sql/TableSourceITCase.scala | 14 ++
 21 files changed, 278 insertions(+), 103 deletions(-)

diff --git a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/DefaultSchemaResolver.java b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/DefaultSchemaResolver.java
index 0b68aeff79b..f2f011e22c2 100644
--- a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/DefaultSchemaResolver.java
+++ b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/DefaultSchemaResolver.java
@@ -43,6 +43,7 @@ import javax.annotation.Nullable;
 
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Objects;
@@ -95,6 +96,7 @@ class DefaultSchemaResolver implements SchemaResolver {
     private List<Column> resolveColumns(List<Schema.UnresolvedColumn> unresolvedColumns) {
 
         validateDuplicateColumns(unresolvedColumns);
+        validateDuplicateMetadataKeys(unresolvedColumns);
 
         final Column[] resolvedColumns = new Column[unresolvedColumns.size()];
         // process source columns first before computed columns
@@ -175,6 +177,32 @@ class DefaultSchemaResolver implements SchemaResolver {
         }
     }
 
+    private void validateDuplicateMetadataKeys(List<Schema.UnresolvedColumn> columns) {
+        Map<String, String> metadataKeyToColumnNames = new HashMap<>();
+        for (Schema.UnresolvedColumn column : columns) {
+            if (!(column instanceof UnresolvedMetadataColumn)) {
+                continue;
+            }
+
+            UnresolvedMetadataColumn metadataColumn = (UnresolvedMetadataColumn) column;
+            String metadataKey =
+                    metadataColumn.getMetadataKey() == null
+                            ? metadataColumn.getName()
+                            : metadataColumn.getMetadataKey();
+            if (metadataKeyToColumnNames.containsKey(metadataKey)) {
+                throw new ValidationException(
+                        String.format(
+                                "The column `%s` and `%s` in the table are both from the same metadata key '%s'. "
+                                        + "Please specify one of the columns as the metadata column and use the "
+                                        + "computed column syntax to specify the others.",
+                                metadataKeyToColumnNames.get(metadataKey),
+                                metadataColumn.getName(),
+                                metadataKey));
+            }
+            metadataKeyToColumnNames.put(metadataKey, metadataColumn.getName());
+        }
+    }
+
     private List<WatermarkSpec> resolveWatermarkSpecs(
             List<Schema.UnresolvedWatermarkSpec> unresolvedWatermarkSpecs, List<Column> inputColumns) {
         if (unresolvedWatermarkSpecs.size() == 0) {
diff --git a/flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/SchemaResolutionTest.java
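The `validateDuplicateMetadataKeys` change above rejects a schema in which two columns resolve to the same metadata key (the declared key, or the column name when no key is given). A standalone sketch of that check, using plain collections instead of Flink's `Schema.UnresolvedColumn` API — class and method names here are illustrative, not the actual Flink classes:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MetadataKeyCheck {

    // Input maps column name -> declared metadata key; a null value means the
    // key defaults to the column name, mirroring the patch above.
    // Returns the first duplicated metadata key, or null if none.
    static String findDuplicateKey(Map<String, String> metadataColumns) {
        Map<String, String> keyToColumn = new HashMap<>();
        for (Map.Entry<String, String> e : metadataColumns.entrySet()) {
            String key = e.getValue() == null ? e.getKey() : e.getValue();
            if (keyToColumn.containsKey(key)) {
                return key; // two columns are backed by the same metadata key
            }
            keyToColumn.put(key, e.getKey());
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> cols = new LinkedHashMap<>();
        cols.put("ts", "timestamp");         // e.g. ts METADATA FROM 'timestamp'
        cols.put("event_time", "timestamp"); // second column on the same key -> invalid
        System.out.println(findDuplicateKey(cols)); // prints "timestamp"
    }
}
```

In the real resolver this situation raises a `ValidationException` that tells the user to keep one metadata column and derive the rest as computed columns.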
 

[flink-table-store] branch master updated: [FLINK-27283] Add end to end tests for table store

2022-04-20 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 7dc3225  [FLINK-27283] Add end to end tests for table store
7dc3225 is described below

commit 7dc3225c5390b8c00651f0272764268ea08f128c
Author: tsreaper 
AuthorDate: Wed Apr 20 14:19:10 2022 +0800

[FLINK-27283] Add end to end tests for table store

This closes #93
---
 .../workflows/{java8-build.yml => e2e-tests.yml}   |   7 +-
 .github/workflows/java8-build.yml  |   3 +-
 flink-table-store-e2e-tests/pom.xml| 123 
 .../flink/table/store/tests/E2eTestBase.java   | 313 +
 .../table/store/tests/FileStoreBatchE2eTest.java   | 171 +++
 .../store/tests/FileStoreFlinkFormatE2eTest.java   |  63 +
 .../table/store/tests/FileStoreStreamE2eTest.java  |  99 +++
 .../flink/table/store/tests/LogStoreE2eTest.java   | 159 +++
 .../table/store/tests/utils/ParameterProperty.java |  43 +++
 .../flink/table/store/tests/utils/TestUtils.java   |  82 ++
 .../src/test/resources-filtered/project.properties |  19 ++
 .../src/test/resources/log4j2-test.properties  |  28 ++
 pom.xml|   1 +
 13 files changed, 1106 insertions(+), 5 deletions(-)
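The diffstat lists a `ParameterProperty` utility among the new test sources. A hedged sketch of what such a system-property-backed test parameter might look like — the actual API in `flink-table-store-e2e-tests` may differ, and the names below are assumptions based only on the file name:

```java
import java.util.function.Function;

// Sketch of a ParameterProperty-style helper: resolves a typed value from a
// JVM system property (e.g. passed via mvn -D...), falling back to a
// supplied default when the property is unset.
public class ParameterPropertySketch<T> {
    private final String name;
    private final Function<String, T> parser;

    public ParameterPropertySketch(String name, Function<String, T> parser) {
        this.name = name;
        this.parser = parser;
    }

    public T get(T defaultValue) {
        String raw = System.getProperty(name);
        return raw == null ? defaultValue : parser.apply(raw);
    }
}
```

A test would then declare, for example, `new ParameterPropertySketch<>("e2e.flink.version", Function.identity())` and read it with a default, so the same suite can run locally and in CI with different settings.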

diff --git a/.github/workflows/java8-build.yml b/.github/workflows/e2e-tests.yml
similarity index 66%
copy from .github/workflows/java8-build.yml
copy to .github/workflows/e2e-tests.yml
index d68d465..c7cd479 100644
--- a/.github/workflows/java8-build.yml
+++ b/.github/workflows/e2e-tests.yml
@@ -1,4 +1,4 @@
-name: Java 8 Build
+name: End to End Tests
 
 on: [push, pull_request]
 
@@ -14,5 +14,6 @@ jobs:
         with:
           java-version: 1.8
       - name: Build
-        run: mvn clean install
-
+        run: mvn clean install -DskipTests
+      - name: Test
+        run: mvn test -pl flink-table-store-e2e-tests
diff --git a/.github/workflows/java8-build.yml b/.github/workflows/java8-build.yml
index d68d465..26977d5 100644
--- a/.github/workflows/java8-build.yml
+++ b/.github/workflows/java8-build.yml
@@ -14,5 +14,4 @@ jobs:
         with:
           java-version: 1.8
       - name: Build
-        run: mvn clean install
-
+        run: mvn clean install -pl '!flink-table-store-e2e-tests'
diff --git a/flink-table-store-e2e-tests/pom.xml b/flink-table-store-e2e-tests/pom.xml
new file mode 100644
index 000..92c6124
--- /dev/null
+++ b/flink-table-store-e2e-tests/pom.xml
@@ -0,0 +1,123 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <artifactId>flink-table-store-parent</artifactId>
+        <groupId>org.apache.flink</groupId>
+        <version>0.2-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>flink-table-store-e2e-tests</artifactId>
+    <name>Flink Table Store : End to End Tests</name>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-table-store-dist</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-core</artifactId>
+            <version>${flink.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-test-utils-junit</artifactId>
+            <version>${flink.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.testcontainers</groupId>
+            <artifactId>testcontainers</artifactId>
+            <version>${testcontainers.version}</version>
+            <scope>test</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.testcontainers</groupId>
+            <artifactId>kafka</artifactId>
+            <version>${testcontainers.version}</version>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-dependency-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <id>copy-jars</id>
+                        <phase>process-resources</phase>
+                        <goals>
+                            <goal>copy</goal>
+                        </goals>
+                    </execution>
+                </executions>
+                <configuration>
+                    <artifactItems>
+                        <artifactItem>
+                            <groupId>org.apache.flink</groupId>
+                            <artifactId>flink-table-store-dist</artifactId>
+                            <version>${project.version}</version>
+                            <destFileName>flink-table-store.jar</destFileName>
+                            <type>jar</type>
+                            <outputDirectory>${project.build.directory}/dependencies</outputDirectory>
+                        </artifactItem>
+                        <artifactItem>
+                            <groupId>org.apache.flink</groupId>
+                            <artifactId>flink-shaded-hadoop-2-uber</artifactId>
+                            <version>2.8.3-10.0</version>
+                            <destFileName>bundled-hadoop.jar</destFileName>
+                            <type>jar</type>
+                            <outputDirectory>${project.build.directory}/dependencies</outputDirectory>
+                        </artifactItem>
+                    </artifactItems>
+                </configuration>
+            </plugin>
+        </plugins>
+