[flink-table-store] branch release-0.2 updated: [hotfix] Fix Q1 to 150 bytes

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.2
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.2 by this push:
 new d5ee14c4 [hotfix] Fix Q1 to 150 bytes
d5ee14c4 is described below

commit d5ee14c4bfd690f728eb45fead18e055a7070b81
Author: JingsongLi 
AuthorDate: Thu Aug 25 12:37:35 2022 +0800

[hotfix] Fix Q1 to 150 bytes
---
 flink-table-store-benchmark/README.md | 2 +-
 flink-table-store-benchmark/src/main/resources/queries/q1.sql | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/flink-table-store-benchmark/README.md 
b/flink-table-store-benchmark/README.md
index a9b9416d..5c8467c4 100644
--- a/flink-table-store-benchmark/README.md
+++ b/flink-table-store-benchmark/README.md
@@ -37,7 +37,7 @@ This is the benchmark module for Flink Table Store. Inspired 
by [Nexmark](https:
 
 |#|Description|
 |---|---|
-|q1|Test insert and update random primary keys with normal record size (100 
bytes per record). Mimics the update of uv and pv of items in an E-commercial 
website.|
+|q1|Test insert and update random primary keys with normal record size (150 
bytes per record). Mimics the update of uv and pv of items in an E-commercial 
website.|
 
 ## Benchmark Results
 
diff --git a/flink-table-store-benchmark/src/main/resources/queries/q1.sql 
b/flink-table-store-benchmark/src/main/resources/queries/q1.sql
index 3c9ff963..f756b252 100644
--- a/flink-table-store-benchmark/src/main/resources/queries/q1.sql
+++ b/flink-table-store-benchmark/src/main/resources/queries/q1.sql
@@ -15,7 +15,7 @@
 -- limitations under the License.
 
 -- Mimics the update of uv and pv of items in an E-commercial website.
--- Primary keys ranges from 0 to 10^8; Each record is about 100 bytes.
+-- Primary keys ranges from 0 to 10^8; Each record is about 150 bytes.
 
 CREATE TABLE item_uv_pv_1d_source (
 `item_id` BIGINT,



[flink-table-store] branch master updated: [hotfix] Fix Q1 to 150 bytes

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 58182ba7 [hotfix] Fix Q1 to 150 bytes
58182ba7 is described below

commit 58182ba7be9785eece09fc42dd2363ce0dfc2ff8
Author: JingsongLi 
AuthorDate: Thu Aug 25 12:37:35 2022 +0800

[hotfix] Fix Q1 to 150 bytes
---
 flink-table-store-benchmark/README.md | 2 +-
 flink-table-store-benchmark/src/main/resources/queries/q1.sql | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/flink-table-store-benchmark/README.md 
b/flink-table-store-benchmark/README.md
index a9b9416d..5c8467c4 100644
--- a/flink-table-store-benchmark/README.md
+++ b/flink-table-store-benchmark/README.md
@@ -37,7 +37,7 @@ This is the benchmark module for Flink Table Store. Inspired 
by [Nexmark](https:
 
 |#|Description|
 |---|---|
-|q1|Test insert and update random primary keys with normal record size (100 
bytes per record). Mimics the update of uv and pv of items in an E-commercial 
website.|
+|q1|Test insert and update random primary keys with normal record size (150 
bytes per record). Mimics the update of uv and pv of items in an E-commercial 
website.|
 
 ## Benchmark Results
 
diff --git a/flink-table-store-benchmark/src/main/resources/queries/q1.sql 
b/flink-table-store-benchmark/src/main/resources/queries/q1.sql
index 3c9ff963..f756b252 100644
--- a/flink-table-store-benchmark/src/main/resources/queries/q1.sql
+++ b/flink-table-store-benchmark/src/main/resources/queries/q1.sql
@@ -15,7 +15,7 @@
 -- limitations under the License.
 
 -- Mimics the update of uv and pv of items in an E-commercial website.
--- Primary keys ranges from 0 to 10^8; Each record is about 100 bytes.
+-- Primary keys ranges from 0 to 10^8; Each record is about 150 bytes.
 
 CREATE TABLE item_uv_pv_1d_source (
 `item_id` BIGINT,



[flink-table-store] branch master updated: [FLINK-27739] Update table store benchmark usage

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 8815608e [FLINK-27739] Update table store benchmark usage
8815608e is described below

commit 8815608e0a695985aff1de7c68601407272256b0
Author: tsreaper 
AuthorDate: Thu Aug 25 12:32:56 2022 +0800

[FLINK-27739] Update table store benchmark usage

This closes #274
---
 flink-table-store-benchmark/README.md  |  18 +-
 .../flink/table/store/benchmark/Benchmark.java |  87 +-
 .../table/store/benchmark/BenchmarkOptions.java|   6 -
 .../apache/flink/table/store/benchmark/Query.java  |  96 ++-
 .../flink/table/store/benchmark/QueryRunner.java   |  84 ++
 .../store/benchmark/metric/BenchmarkMetric.java|  50 ++
 .../store/benchmark/metric/FlinkRestClient.java|  83 +++---
 .../store/benchmark/metric/JobBenchmarkMetric.java |  71 +++-
 .../store/benchmark/metric/MetricReporter.java |  65 
 .../store/benchmark/metric/bytes/BpsMetric.java| 165 ---
 .../benchmark/metric/bytes/TotalBytesMetric.java   | 162 --
 .../utils/BenchmarkGlobalConfiguration.java|   9 +-
 .../src/main/resources/conf/benchmark.yaml |   3 +-
 .../src/main/resources/queries/q1.sql  |   4 +-
 .../src/main/resources/queries/q2.sql  | 179 
 .../src/main/resources/queries/q3.sql  |  92 ---
 .../src/main/resources/queries/q4.sql  | 181 -
 .../src/main/resources/queries/queries.yaml|   9 +-
 18 files changed, 304 insertions(+), 1060 deletions(-)

diff --git a/flink-table-store-benchmark/README.md 
b/flink-table-store-benchmark/README.md
index d9dda83c..a9b9416d 100644
--- a/flink-table-store-benchmark/README.md
+++ b/flink-table-store-benchmark/README.md
@@ -9,12 +9,13 @@ This is the benchmark module for Flink Table Store. Inspired 
by [Nexmark](https:
   * Two worker nodes with 16 cores and 64GB RAM.
 * This benchmark runs on a standalone Flink cluster. Download Flink >= 1.15 
from the [Apache Flink's 
website](https://flink.apache.org/downloads.html#apache-flink-1150) and setup a 
standalone cluster. Flink's job manager must be on the master node of your EMR 
cluster. We recommend the following Flink configurations:
 ```yaml
-jobmanager.memory.process.size: 8192m
+jobmanager.memory.process.size: 4096m
 taskmanager.memory.process.size: 4096m
 taskmanager.numberOfTaskSlots: 1
 parallelism.default: 16
 execution.checkpointing.interval: 3min
 state.backend: rocksdb
+state.backend.incremental: true
 ```
 With this Flink configuration, you'll need 16 task manager instances in 
total, 8 on each EMR worker.
 * This benchmark needs the `FLINK_HOME` environment variable. Set `FLINK_HOME` 
to your Flink directory.
@@ -29,33 +30,30 @@ This is the benchmark module for Flink Table Store. 
Inspired by [Nexmark](https:
 * Run `flink-table-store-benchmark/bin/setup_cluster.sh` in master node. This 
activates the CPU metrics collector in worker nodes. Note that if you restart 
your Flink cluster, you must also restart the CPU metrics collectors. To stop 
CPU metrics collectors, run 
`flink-table-store-benchmark/bin/shutdown_cluster.sh` in master node.
 
 ### Run Benchmark
-* Run `flink-table-store-benchmark/bin/run_benchmark.sh <query> <sink>` to run `<query>` for `<sink>`. Currently `<query>` can be `q1`, `q2` or `all`, and sink can only be `table_store`.
+* Run `flink-table-store-benchmark/bin/run_benchmark.sh <query> <sink>` to run `<query>` for `<sink>`. Currently `<query>` can be `q1` or `all`, and sink can only be `table_store`.
 * By default, each query writes for 30 minutes and then reads all records back 
from the sink to measure read throughput. 
 
 ## Queries
 
 |#|Description|
 |---|---|
-|q1|Test insert and update random primary keys with small record size (100 
bytes per record).|
-|q2|Test insert and update random primary keys with large record size (1500 
bytes per record).|
-|q3|Test insert and update primary keys related with time with small record 
size (100 bytes per record).|
-|q4|Test insert and update primary keys related with time with large record 
size (1500 bytes per record).|
+|q1|Test insert and update random primary keys with normal record size (100 
bytes per record). Mimics the update of uv and pv of items in an E-commercial 
website.|
 
 ## Benchmark Results
 
 Results of each query consist of the following aspects:
-* Throughput (byte/s): Average number of bytes inserted into the sink per 
second.
-* Total Bytes: Total number of bytes written during the given time.
+* Throughput (rows/s): Average number of rows inserted into the sink per 
second.
+* Total Rows: Total number of rows written.
 * Cores: Average CPU cost.
 * Throughput/Cores: Number of 

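For intuition, a minimal sketch of how the aggregate metrics listed above relate, using hypothetical numbers; in the real benchmark the values come from the Flink REST client and the CPU metrics collectors:

```java
/**
 * Minimal sketch of how the benchmark's aggregate metrics relate.
 * The numbers below are hypothetical; in the real benchmark they come
 * from the Flink REST client and the CPU metrics collectors.
 */
public class BenchmarkMetricSketch {
    public static void main(String[] args) {
        long totalRows = 90_000_000L;   // Total Rows: rows written during the run
        double runSeconds = 30 * 60;    // default 30-minute write phase
        double avgCores = 16.0;         // Cores: average CPU cost across the cluster

        double rowsPerSecond = totalRows / runSeconds;        // Throughput (rows/s)
        double throughputPerCore = rowsPerSecond / avgCores;  // Throughput/Cores

        System.out.printf("Throughput: %.0f rows/s%n", rowsPerSecond);
        System.out.printf("Throughput/Cores: %.0f rows/s per core%n", throughputPerCore);
    }
}
```
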
[flink-table-store] branch release-0.2 updated: [FLINK-27739] Update table store benchmark usage

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.2
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.2 by this push:
 new 2030e273 [FLINK-27739] Update table store benchmark usage
2030e273 is described below

commit 2030e273cc685d6786d55af787b3b4020f3b8914
Author: tsreaper 
AuthorDate: Thu Aug 25 12:32:56 2022 +0800

[FLINK-27739] Update table store benchmark usage

This closes #274
---
 flink-table-store-benchmark/README.md  |  18 +-
 .../flink/table/store/benchmark/Benchmark.java |  87 +-
 .../table/store/benchmark/BenchmarkOptions.java|   6 -
 .../apache/flink/table/store/benchmark/Query.java  |  96 ++-
 .../flink/table/store/benchmark/QueryRunner.java   |  84 ++
 .../store/benchmark/metric/BenchmarkMetric.java|  50 ++
 .../store/benchmark/metric/FlinkRestClient.java|  83 +++---
 .../store/benchmark/metric/JobBenchmarkMetric.java |  71 +++-
 .../store/benchmark/metric/MetricReporter.java |  65 
 .../store/benchmark/metric/bytes/BpsMetric.java| 165 ---
 .../benchmark/metric/bytes/TotalBytesMetric.java   | 162 --
 .../utils/BenchmarkGlobalConfiguration.java|   9 +-
 .../src/main/resources/conf/benchmark.yaml |   3 +-
 .../src/main/resources/queries/q1.sql  |   4 +-
 .../src/main/resources/queries/q2.sql  | 179 
 .../src/main/resources/queries/q3.sql  |  92 ---
 .../src/main/resources/queries/q4.sql  | 181 -
 .../src/main/resources/queries/queries.yaml|   9 +-
 18 files changed, 304 insertions(+), 1060 deletions(-)

diff --git a/flink-table-store-benchmark/README.md 
b/flink-table-store-benchmark/README.md
index d9dda83c..a9b9416d 100644
--- a/flink-table-store-benchmark/README.md
+++ b/flink-table-store-benchmark/README.md
@@ -9,12 +9,13 @@ This is the benchmark module for Flink Table Store. Inspired 
by [Nexmark](https:
   * Two worker nodes with 16 cores and 64GB RAM.
 * This benchmark runs on a standalone Flink cluster. Download Flink >= 1.15 
from the [Apache Flink's 
website](https://flink.apache.org/downloads.html#apache-flink-1150) and setup a 
standalone cluster. Flink's job manager must be on the master node of your EMR 
cluster. We recommend the following Flink configurations:
 ```yaml
-jobmanager.memory.process.size: 8192m
+jobmanager.memory.process.size: 4096m
 taskmanager.memory.process.size: 4096m
 taskmanager.numberOfTaskSlots: 1
 parallelism.default: 16
 execution.checkpointing.interval: 3min
 state.backend: rocksdb
+state.backend.incremental: true
 ```
 With this Flink configuration, you'll need 16 task manager instances in 
total, 8 on each EMR worker.
 * This benchmark needs the `FLINK_HOME` environment variable. Set `FLINK_HOME` 
to your Flink directory.
@@ -29,33 +30,30 @@ This is the benchmark module for Flink Table Store. 
Inspired by [Nexmark](https:
 * Run `flink-table-store-benchmark/bin/setup_cluster.sh` in master node. This 
activates the CPU metrics collector in worker nodes. Note that if you restart 
your Flink cluster, you must also restart the CPU metrics collectors. To stop 
CPU metrics collectors, run 
`flink-table-store-benchmark/bin/shutdown_cluster.sh` in master node.
 
 ### Run Benchmark
-* Run `flink-table-store-benchmark/bin/run_benchmark.sh <query> <sink>` to run `<query>` for `<sink>`. Currently `<query>` can be `q1`, `q2` or `all`, and sink can only be `table_store`.
+* Run `flink-table-store-benchmark/bin/run_benchmark.sh <query> <sink>` to run `<query>` for `<sink>`. Currently `<query>` can be `q1` or `all`, and sink can only be `table_store`.
 * By default, each query writes for 30 minutes and then reads all records back 
from the sink to measure read throughput. 
 
 ## Queries
 
 |#|Description|
 |---|---|
-|q1|Test insert and update random primary keys with small record size (100 
bytes per record).|
-|q2|Test insert and update random primary keys with large record size (1500 
bytes per record).|
-|q3|Test insert and update primary keys related with time with small record 
size (100 bytes per record).|
-|q4|Test insert and update primary keys related with time with large record 
size (1500 bytes per record).|
+|q1|Test insert and update random primary keys with normal record size (100 
bytes per record). Mimics the update of uv and pv of items in an E-commercial 
website.|
 
 ## Benchmark Results
 
 Results of each query consist of the following aspects:
-* Throughput (byte/s): Average number of bytes inserted into the sink per 
second.
-* Total Bytes: Total number of bytes written during the given time.
+* Throughput (rows/s): Average number of rows inserted into the sink per 
second.
+* Total Rows: Total number of rows written.
 * Cores: Average CPU cost.
 * Throughput/Cores: 

[flink] branch master updated (221d70d9930 -> 64f11ee9549)

2022-08-24 Thread shengkai
This is an automated email from the ASF dual-hosted git repository.

shengkai pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 221d70d9930 [FLINK-28815][docs] Translate the Real Time Reporting with 
the Table API page into Chinese
 add 64f11ee9549 [FLINK-28936][sql-gateway] Fix REST endpoint can not 
serialize uncompacted LogicalType (#20617)

No new revisions were added by this update.

Summary of changes:
 .../table/gateway/api/results/ColumnInfo.java  |  39 ++--
 .../flink/table/gateway/api/results/ResultSet.java |   2 +
 .../{ => serde}/JsonResultSetDeserializer.java |  22 +-
 .../{ => serde}/JsonResultSetSerializer.java   |  18 +-
 .../results/serde/LogicalTypeJsonDeserializer.java | 251 +
 .../results/serde/LogicalTypeJsonSerializer.java   | 243 
 .../{ => serde}/JsonResultSetSerDeTest.java|  24 +-
 .../results/serde/LogicalTypeJsonSerDeTest.java| 233 +++
 .../table/gateway/cli/SqlGatewayOptionsParser.java |   2 +-
 .../table/gateway/rest/StatementCaseITTest.java|  78 +++
 .../rest/util/SqlGatewayRestEndpointExtension.java |  89 
 11 files changed, 899 insertions(+), 102 deletions(-)
 rename 
flink-table/flink-sql-gateway-api/src/main/java/org/apache/flink/table/gateway/api/results/{
 => serde}/JsonResultSetDeserializer.java (88%)
 rename 
flink-table/flink-sql-gateway-api/src/main/java/org/apache/flink/table/gateway/api/results/{
 => serde}/JsonResultSetSerializer.java (89%)
 create mode 100644 
flink-table/flink-sql-gateway-api/src/main/java/org/apache/flink/table/gateway/api/results/serde/LogicalTypeJsonDeserializer.java
 create mode 100644 
flink-table/flink-sql-gateway-api/src/main/java/org/apache/flink/table/gateway/api/results/serde/LogicalTypeJsonSerializer.java
 rename 
flink-table/flink-sql-gateway-api/src/test/java/org/apache/flink/table/gateway/api/results/{
 => serde}/JsonResultSetSerDeTest.java (91%)
 create mode 100644 
flink-table/flink-sql-gateway-api/src/test/java/org/apache/flink/table/gateway/api/results/serde/LogicalTypeJsonSerDeTest.java
 create mode 100644 
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/rest/util/SqlGatewayRestEndpointExtension.java



[flink-web] branch asf-site updated: Rebuild website

2022-08-24 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 50aacc30d Rebuild website
50aacc30d is described below

commit 50aacc30d0030a11f358117e2288654231bd292e
Author: Danny Cranmer 
AuthorDate: Wed Aug 24 21:29:17 2022 +

Rebuild website
---
 content/blog/feed.xml  | 377 +++
 content/blog/index.html|  36 +-
 content/blog/page10/index.html |  36 +-
 content/blog/page11/index.html |  36 +-
 content/blog/page12/index.html |  36 +-
 content/blog/page13/index.html |  38 +-
 content/blog/page14/index.html |  38 +-
 content/blog/page15/index.html |  38 +-
 content/blog/page16/index.html |  40 +-
 content/blog/page17/index.html |  40 +-
 content/blog/page18/index.html |  40 +-
 content/blog/page19/index.html |  40 +-
 content/blog/page2/index.html  |  36 +-
 content/blog/page20/index.html |  25 ++
 content/blog/page3/index.html  |  36 +-
 content/blog/page4/index.html  |  38 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  38 +-
 content/blog/page7/index.html  |  38 +-
 content/blog/page8/index.html  |  41 ++-
 content/blog/page9/index.html  |  39 +-
 content/index.html |   6 +-
 .../2022/08/25/release-1.15.2.html}| 408 -
 content/zh/index.html  |   6 +-
 24 files changed, 782 insertions(+), 764 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 36529c5f5..d1ff5f8c8 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -6,6 +6,144 @@
 <link>https://flink.apache.org/blog</link>
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
+<item>
+<title>Apache Flink 1.15.2 Release Announcement</title>
+<description>
+<p>The Apache Flink Community is pleased to announce the second bug fix release of the Flink 1.15 series.</p>
+
+<p>This release includes 30 bug fixes, vulnerability fixes, and minor improvements for Flink 1.15.
+Below you will find a list of all bugfixes and improvements (excluding improvements to the build infrastructure and build stability). For a complete list of all changes see:
+<a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;version=12351829">JIRA</a>.</p>
+
+<p>We highly recommend all users upgrade to Flink 1.15.2.</p>
+
+<h1 id="release-artifacts">Release Artifacts</h1>
+
+<h2 id="maven-dependencies">Maven Dependencies</h2>
+
+<pre><code class="language-xml">
+&lt;dependency&gt;
+  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
+  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
+  &lt;version&gt;1.15.2&lt;/version&gt;
+&lt;/dependency&gt;
+&lt;dependency&gt;
+  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
+  &lt;artifactId&gt;flink-streaming-java&lt;/artifactId&gt;
+  &lt;version&gt;1.15.2&lt;/version&gt;
+&lt;/dependency&gt;
+&lt;dependency&gt;
+  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
+  &lt;artifactId&gt;flink-clients&lt;/artifactId&gt;
+  &lt;version&gt;1.15.2&lt;/version&gt;
+&lt;/dependency&gt;
+</code></pre>
+
+<h2 id="binaries">Binaries</h2>
+
+<p>You can find the binaries on the updated <a href="/downloads.html">Downloads page</a>.</p>
+
+<h2 id="docker-images">Docker Images</h2>
+
+<ul>
+  <li><a href="https://hub.docker.com/_/flink?tab=tags&amp;page=1&amp;name=1.15.2">library/flink</a> (official images)</li>
+  <li><a href="https://hub.docker.com/r/apache/flink/tags?page=1&amp;name=1.15.2">apache/flink</a> (ASF repository)</li>
+</ul>
+
+<h2 id="pypi">PyPi</h2>
+
+<ul>
+  <li><a href="https://pypi.org/project/apache-flink/1.15.2/">apache-flink==1.15.2</a></li>
+</ul>
+
+<h1 id="upgrade-notes">Upgrade Notes</h1>
+
+<p>For Table API: 1.15.0 and 1.15.1 generated non-deterministic UIDs for operators that
+make it difficult/impossible to restore state or upgrade to next patch version. A new
+table.exec.uid.generation config option (with correct default behavior) disables setting
+a UID for new pipelines from non-compiled plans. Existing pipelines can set
+table.exec.uid.generation=ALWAYS if the 1.15.0/1 behavior was
[flink-web] 02/02: Rebuild website

2022-08-24 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit b7cebf670affd57a84c7fe548ec94d8fc29a53a4
Author: Danny Cranmer 
AuthorDate: Wed Aug 24 22:16:55 2022 +0100

Rebuild website
---
 content/downloads.html | 29 -
 content/q/gradle-quickstart.sh |  2 +-
 content/q/quickstart-scala.sh  |  2 +-
 content/q/quickstart.sh|  2 +-
 content/q/sbt-quickstart.sh|  2 +-
 content/zh/downloads.html  | 31 +--
 6 files changed, 45 insertions(+), 23 deletions(-)

diff --git a/content/downloads.html b/content/downloads.html
index ba9dbd680..b7b13dd3b 100644
--- a/content/downloads.html
+++ b/content/downloads.html
@@ -240,7 +240,7 @@
 
 
 
-  Apache Flink 1.15.1
+  Apache Flink 1.15.2
   Apache Flink 1.14.5
   Apache Flink 1.13.6
   Apache Flink 1.12.7
@@ -274,17 +274,17 @@
 
 
 
-Apache Flink® 1.15.1 is our latest stable release.
+Apache Flink® 1.15.2 is our latest stable release.
 
-Apache Flink 1.15.1
+Apache Flink 1.15.2
 
 
-<a href="https://www.apache.org/dyn/closer.lua/flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz" id="1151-download_212">Apache Flink 1.15.1 for Scala 2.12</a> (<a href="https://downloads.apache.org/flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz.asc">asc</a>, <a href="https://downloads.apache.org/flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz.sha512">sha512</a>)
+<a href="https://www.apache.org/dyn/closer.lua/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz" id="1152-download_212">Apache Flink 1.15.2 for Scala 2.12</a> (<a href="https://downloads.apache.org/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz.asc">asc</a>, <a href="https://downloads.apache.org/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz.sha512">sha512</a>)
 
 
 
-<a href="https://www.apache.org/dyn/closer.lua/flink/flink-1.15.1/flink-1.15.1-src.tgz" id="1151-download-source">Apache Flink 1.15.1 Source Release</a>
-(<a href="https://downloads.apache.org/flink/flink-1.15.1/flink-1.15.1-src.tgz.asc">asc</a>, <a href="https://downloads.apache.org/flink/flink-1.15.1/flink-1.15.1-src.tgz.sha512">sha512</a>)
+<a href="https://www.apache.org/dyn/closer.lua/flink/flink-1.15.2/flink-1.15.2-src.tgz" id="1152-download-source">Apache Flink 1.15.2 Source Release</a>
+(<a href="https://downloads.apache.org/flink/flink-1.15.2/flink-1.15.2-src.tgz.asc">asc</a>, <a href="https://downloads.apache.org/flink/flink-1.15.2/flink-1.15.2-src.tgz.sha512">sha512</a>)
 
 
 Release Notes
@@ -510,17 +510,17 @@ main Flink release:
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-java</artifactId>
-  <version>1.15.1</version>
+  <version>1.15.2</version>
 </dependency>
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-streaming-java_2.11</artifactId>
-  <version>1.15.1</version>
+  <version>1.15.2</version>
 </dependency>
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-clients_2.11</artifactId>
-  <version>1.15.1</version>
+  <version>1.15.2</version>
 </dependency>
 
 Apache Flink Stateful Functions
@@ -595,6 +595,17 @@ The statefun-flink-harness dependency includes a local execution en
 
 
 
+Flink 1.15.2 - 2022-08-24
+(<a href="https://archive.apache.org/dist/flink/flink-1.15.2/flink-1.15.2-src.tgz">Source</a>,
+<a href="https://archive.apache.org/dist/flink/flink-1.15.2">Binaries</a>,
+<a href="https://nightlies.apache.org/flink/flink-docs-release-1.15">Docs</a>,
+<a href="https://nightlies.apache.org/flink/flink-docs-release-1.15/api/java">Javadocs</a>,
+<a href="https://nightlies.apache.org/flink/flink-docs-release-1.15/api/scala/index.html">Scaladocs</a>)
+
+
+
+
+
 Flink 1.15.1 - 2022-07-06
 (<a href="https://archive.apache.org/dist/flink/flink-1.15.1/flink-1.15.1-src.tgz">Source</a>,
 <a href="https://archive.apache.org/dist/flink/flink-1.15.1">Binaries</a>,
diff --git a/content/q/gradle-quickstart.sh b/content/q/gradle-quickstart.sh
index d98c655e8..2bd860919 100755
--- a/content/q/gradle-quickstart.sh
+++ b/content/q/gradle-quickstart.sh
@@ -41,7 +41,7 @@ function mkPackage() {
 defaultProjectName="quickstart"
 defaultOrganization="org.myorg.quickstart"
 defaultVersion="0.1-SNAPSHOT"
-defaultFlinkVersion="${1:-1.15.1}"
+defaultFlinkVersion="${1:-1.15.2}"
 # flink-docs-master/docs/dev/datastream/project-configuration/#gradle
 # passes the scala version prefixed with a _, e.g.: _2.12
 scalaBinaryVersionFromCmdArg="${2/_/}"
diff --git a/content/q/quickstart-scala.sh b/content/q/quickstart-scala.sh
index aabe9f276..7be5b47b5 100755
--- a/content/q/quickstart-scala.sh
+++ b/content/q/quickstart-scala.sh
@@ -24,7 +24,7 @@ PACKAGE=quickstart
 mvn archetype:generate 
\
   -DarchetypeGroupId=org.apache.flink  \
   -DarchetypeArtifactId=flink-quickstart-scala \
-  -DarchetypeVersion=${1:-1.15.1}  
\
+  -DarchetypeVersion=${1:-1.15.2}  
\
   -DgroupId=org.myorg.quickstart   \
   -DartifactId=$PACKAGE   

[flink-web] 01/02: Release flink 1.15.2

2022-08-24 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 725e68203a2611336aa9e95bf22a2b7d3a9446ff
Author: Danny Cranmer 
AuthorDate: Wed Aug 17 09:18:15 2022 +0100

Release flink 1.15.2
---
 _config.yml |  29 +---
 _posts/2022-08-24-release-1.15.2.md | 141 
 q/gradle-quickstart.sh  |   2 +-
 q/quickstart-scala.sh   |   2 +-
 q/quickstart.sh |   2 +-
 q/sbt-quickstart.sh |   2 +-
 6 files changed, 162 insertions(+), 16 deletions(-)

diff --git a/_config.yml b/_config.yml
index aa020..dec87cba0 100644
--- a/_config.yml
+++ b/_config.yml
@@ -9,7 +9,7 @@ url: https://flink.apache.org
 
 DOCS_BASE_URL: https://nightlies.apache.org/flink/
 
-FLINK_VERSION_STABLE: 1.15.1
+FLINK_VERSION_STABLE: 1.15.2
 FLINK_VERSION_STABLE_SHORT: "1.15"
 
 FLINK_ISSUES_URL: https://issues.apache.org/jira/browse/FLINK
@@ -75,18 +75,18 @@ FLINK_TABLE_STORE_GITHUB_REPO_NAME: flink-table-store
 flink_releases:
   - version_short: "1.15"
 binary_release:
-  name: "Apache Flink 1.15.1"
+  name: "Apache Flink 1.15.2"
   scala_212:
-id: "1151-download_212"
-url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz;
-asc_url: 
"https://downloads.apache.org/flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz.asc;
-sha512_url: 
"https://downloads.apache.org/flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz.sha512;
+id: "1152-download_212"
+url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz;
+asc_url: 
"https://downloads.apache.org/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz.asc;
+sha512_url: 
"https://downloads.apache.org/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz.sha512;
 source_release:
-  name: "Apache Flink 1.15.1"
-  id: "1151-download-source"
-  url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-1.15.1/flink-1.15.1-src.tgz;
-  asc_url: 
"https://downloads.apache.org/flink/flink-1.15.1/flink-1.15.1-src.tgz.asc;
-  sha512_url: 
"https://downloads.apache.org/flink/flink-1.15.1/flink-1.15.1-src.tgz.sha512;
+  name: "Apache Flink 1.15.2"
+  id: "1152-download-source"
+  url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-1.15.2/flink-1.15.2-src.tgz;
+  asc_url: 
"https://downloads.apache.org/flink/flink-1.15.2/flink-1.15.2-src.tgz.asc;
+  sha512_url: 
"https://downloads.apache.org/flink/flink-1.15.2/flink-1.15.2-src.tgz.sha512;
 release_notes_url: 
"https://nightlies.apache.org/flink/flink-docs-release-1.15/release-notes/flink-1.15;
   -
 version_short: "1.14"
@@ -291,7 +291,12 @@ component_releases:
 
 release_archive:
 flink:
-  - version_short: "1.15"
+  - 
+version_short: "1.15"
+version_long: 1.15.2
+release_date: 2022-08-24
+  - 
+version_short: "1.15"
 version_long: 1.15.1
 release_date: 2022-07-06
   -
diff --git a/_posts/2022-08-24-release-1.15.2.md 
b/_posts/2022-08-24-release-1.15.2.md
new file mode 100644
index 0..62c0a5c47
--- /dev/null
+++ b/_posts/2022-08-24-release-1.15.2.md
@@ -0,0 +1,141 @@
+---
+layout: post
+title:  "Apache Flink 1.15.2 Release Announcement"
+date: 2022-08-24T22:00:00.000Z
+categories: news
+authors:
+- danny:
+  name: "Danny Cranmer"
+
+excerpt: The Apache Flink Community is pleased to announce a bug fix release 
for Flink 1.15.
+
+---
+
+The Apache Flink Community is pleased to announce the second bug fix release 
of the Flink 1.15 series.
+
+This release includes 30 bug fixes, vulnerability fixes, and minor 
improvements for Flink 1.15.
+Below you will find a list of all bugfixes and improvements (excluding 
improvements to the build infrastructure and build stability). For a complete 
list of all changes see:
+[JIRA](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351829).
+
+We highly recommend all users upgrade to Flink 1.15.2.
+
+# Release Artifacts
+
+## Maven Dependencies
+
+```xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-java</artifactId>
+  <version>1.15.2</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java</artifactId>
+  <version>1.15.2</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients</artifactId>
+  <version>1.15.2</version>
+</dependency>
+```
+
+## Binaries
+
+You can find the binaries on the updated [Downloads page]({{ site.baseurl 
}}/downloads.html).
+
+## Docker Images
+
+* [library/flink](https://hub.docker.com/_/flink?tab=tags&page=1&name=1.15.2) (official images)
+* [apache/flink](https://hub.docker.com/r/apache/flink/tags?page=1&name=1.15.2) (ASF repository)
+
+## PyPi
+
+* [apache-flink==1.15.2](https://pypi.org/project/apache-flink/1.15.2/)
+
+# Upgrade Notes
+
+For Table API: 1.15.0 and 1.15.1 generated non-deterministic UIDs for 
operators that 
+make it difficult/impossible to restore state or 

[flink-web] branch asf-site updated (19aecd736 -> b7cebf670)

2022-08-24 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


from 19aecd736 Rebuild website
 new 725e68203 Release flink 1.15.2
 new b7cebf670 Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _config.yml |  29 +---
 _posts/2022-08-24-release-1.15.2.md | 141 
 content/downloads.html  |  29 +---
 content/q/gradle-quickstart.sh  |   2 +-
 content/q/quickstart-scala.sh   |   2 +-
 content/q/quickstart.sh |   2 +-
 content/q/sbt-quickstart.sh |   2 +-
 content/zh/downloads.html   |  31 +---
 q/gradle-quickstart.sh  |   2 +-
 q/quickstart-scala.sh   |   2 +-
 q/quickstart.sh |   2 +-
 q/sbt-quickstart.sh |   2 +-
 12 files changed, 207 insertions(+), 39 deletions(-)
 create mode 100644 _posts/2022-08-24-release-1.15.2.md



[flink] branch release-1.15 updated: Update japicmp configuration for 1.15.2

2022-08-24 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.15 by this push:
 new 62d7cc0ee4b Update japicmp configuration for 1.15.2
62d7cc0ee4b is described below

commit 62d7cc0ee4b722e44a6d04796e0633fd5e79121b
Author: Danny Cranmer 
AuthorDate: Wed Aug 24 22:00:45 2022 +0100

Update japicmp configuration for 1.15.2
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 29cdefc0176..3c77559f71f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -168,7 +168,7 @@ under the License.
For Hadoop 2.7, the minor Hadoop version supported for 
flink-shaded-hadoop-2-uber is 2.7.5
-->

2.7.5
-	<japicmp.referenceVersion>1.15.1</japicmp.referenceVersion>
+	<japicmp.referenceVersion>1.15.2</japicmp.referenceVersion>
tools/japicmp-output
2.13.0
3.4.3



[flink-docker] branch master updated: Update Dockerfiles for 1.15.2 release (#128)

2022-08-24 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/master by this push:
 new 678f871  Update Dockerfiles for 1.15.2 release (#128)
678f871 is described below

commit 678f87145c86a0f7962bb05b21176a99c986eb02
Author: Danny Cranmer 
AuthorDate: Wed Aug 24 15:08:09 2022 +0100

Update Dockerfiles for 1.15.2 release (#128)
---
 1.15/scala_2.12-java11-debian/Dockerfile   |  6 +++---
 1.15/scala_2.12-java11-debian/docker-entrypoint.sh | 15 ++-
 1.15/scala_2.12-java11-debian/release.metadata |  2 +-
 1.15/scala_2.12-java8-debian/Dockerfile|  6 +++---
 1.15/scala_2.12-java8-debian/docker-entrypoint.sh  | 15 ++-
 1.15/scala_2.12-java8-debian/release.metadata  |  2 +-
 6 files changed, 36 insertions(+), 10 deletions(-)

diff --git a/1.15/scala_2.12-java11-debian/Dockerfile 
b/1.15/scala_2.12-java11-debian/Dockerfile
index 1ba473e..681bc77 100644
--- a/1.15/scala_2.12-java11-debian/Dockerfile
+++ b/1.15/scala_2.12-java11-debian/Dockerfile
@@ -44,9 +44,9 @@ RUN set -ex; \
   gosu nobody true
 
 # Configure Flink version
-ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz \
-    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz.asc \
-    GPG_KEY=7D660377995CA7A5AFEBA79A3EE012FEE982F098 \
+ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz \
+    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz.asc \
+    GPG_KEY=0F79F2AFB2351BC29678544591F9C1EC125FD8DB \
     CHECK_GPG=true
 
 # Prepare environment
diff --git a/1.15/scala_2.12-java11-debian/docker-entrypoint.sh 
b/1.15/scala_2.12-java11-debian/docker-entrypoint.sh
index 84fca0c..8b0350e 100755
--- a/1.15/scala_2.12-java11-debian/docker-entrypoint.sh
+++ b/1.15/scala_2.12-java11-debian/docker-entrypoint.sh
@@ -91,7 +91,20 @@ prepare_configuration() {
 
 maybe_enable_jemalloc() {
 if [ "${DISABLE_JEMALLOC:-false}" == "false" ]; then
-export LD_PRELOAD=$LD_PRELOAD:/usr/lib/x86_64-linux-gnu/libjemalloc.so
+JEMALLOC_PATH="/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so"
+JEMALLOC_FALLBACK="/usr/lib/x86_64-linux-gnu/libjemalloc.so"
+if [ -f "$JEMALLOC_PATH" ]; then
+export LD_PRELOAD=$LD_PRELOAD:$JEMALLOC_PATH
+elif [ -f "$JEMALLOC_FALLBACK" ]; then
+export LD_PRELOAD=$LD_PRELOAD:$JEMALLOC_FALLBACK
+else
+if [ "$JEMALLOC_PATH" = "$JEMALLOC_FALLBACK" ]; then
+MSG_PATH=$JEMALLOC_PATH
+else
+MSG_PATH="$JEMALLOC_PATH and $JEMALLOC_FALLBACK"
+fi
+echo "WARNING: attempted to load jemalloc from $MSG_PATH but the 
library couldn't be found. glibc will be used instead."
+fi
 fi
 }
 
diff --git a/1.15/scala_2.12-java11-debian/release.metadata 
b/1.15/scala_2.12-java11-debian/release.metadata
index b95ca8a..756ff61 100644
--- a/1.15/scala_2.12-java11-debian/release.metadata
+++ b/1.15/scala_2.12-java11-debian/release.metadata
@@ -1,2 +1,2 @@
-Tags: 1.15.1-scala_2.12-java11, 1.15-scala_2.12-java11, scala_2.12-java11, 
1.15.1-scala_2.12, 1.15-scala_2.12, scala_2.12, 1.15.1-java11, 1.15-java11, 
java11, 1.15.1, 1.15, latest
+Tags: 1.15.2-scala_2.12-java11, 1.15-scala_2.12-java11, scala_2.12-java11, 
1.15.2-scala_2.12, 1.15-scala_2.12, scala_2.12, 1.15.2-java11, 1.15-java11, 
java11, 1.15.2, 1.15, latest
 Architectures: amd64,arm64v8
diff --git a/1.15/scala_2.12-java8-debian/Dockerfile 
b/1.15/scala_2.12-java8-debian/Dockerfile
index 5fcb32c..8bc18cb 100644
--- a/1.15/scala_2.12-java8-debian/Dockerfile
+++ b/1.15/scala_2.12-java8-debian/Dockerfile
@@ -44,9 +44,9 @@ RUN set -ex; \
   gosu nobody true
 
 # Configure Flink version
-ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz \
-    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.15.1/flink-1.15.1-bin-scala_2.12.tgz.asc \
-    GPG_KEY=7D660377995CA7A5AFEBA79A3EE012FEE982F098 \
+ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz \
+    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.15.2/flink-1.15.2-bin-scala_2.12.tgz.asc \
+    GPG_KEY=0F79F2AFB2351BC29678544591F9C1EC125FD8DB \
     CHECK_GPG=true
 
 # Prepare environment
diff --git a/1.15/scala_2.12-java8-debian/docker-entrypoint.sh 
b/1.15/scala_2.12-java8-debian/docker-entrypoint.sh
index 84fca0c..8b0350e 100755
--- a/1.15/scala_2.12-java8-debian/docker-entrypoint.sh
+++ b/1.15/scala_2.12-java8-debian/docker-entrypoint.sh
@@ -91,7 +91,20 @@ prepare_configuration() {
 
 

[flink-docker] branch dev-1.15 updated: Add GPG key for 1.15.2 release (#127)

2022-08-24 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch dev-1.15
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-1.15 by this push:
 new 43d0c32  Add GPG key for 1.15.2 release (#127)
43d0c32 is described below

commit 43d0c328e6c6125f6ec226550e095820fdb6e2ce
Author: Danny Cranmer 
AuthorDate: Wed Aug 24 15:07:47 2022 +0100

Add GPG key for 1.15.2 release (#127)
---
 add-version.sh | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/add-version.sh b/add-version.sh
index f6cc802..8c8bd37 100755
--- a/add-version.sh
+++ b/add-version.sh
@@ -98,6 +98,8 @@ elif [ "$flink_version" = "1.15.0" ]; then
 gpg_key="CBE82BEFD827B08AFA843977EDBF922A7BC84897"
 elif [ "$flink_version" = "1.15.1" ]; then
 gpg_key="7D660377995CA7A5AFEBA79A3EE012FEE982F098"
+elif [ "$flink_version" = "1.15.2" ]; then
+gpg_key="0F79F2AFB2351BC29678544591F9C1EC125FD8DB"
 else
 error "Missing GPG key ID for this release"
 fi



[flink-ml] branch master updated: [FLINK-29044] Add Transformer for DCT

2022-08-24 Thread zhangzp
This is an automated email from the ASF dual-hosted git repository.

zhangzp pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-ml.git


The following commit(s) were added to refs/heads/master by this push:
 new 6825291  [FLINK-29044] Add Transformer for DCT
6825291 is described below

commit 6825291e0a808474c89a0bf65ed43f0cf1034e79
Author: yunfengzhou-hub 
AuthorDate: Wed Aug 24 21:15:51 2022 +0800

[FLINK-29044] Add Transformer for DCT

This closes #145.
---
 .../flink/ml/examples/feature/DCTExample.java  |  61 +++
 flink-ml-lib/pom.xml   |  27 +++
 .../java/org/apache/flink/ml/feature/dct/DCT.java  | 139 +++
 .../org/apache/flink/ml/feature/dct/DCTParams.java |  45 +
 .../java/org/apache/flink/ml/feature/DCTTest.java  | 187 +
 .../pyflink/examples/ml/feature/dct_example.py |  61 +++
 flink-ml-python/pyflink/ml/lib/feature/dct.py  |  74 
 .../pyflink/ml/lib/feature/tests/test_dct.py   |  89 ++
 flink-ml-uber/src/main/resources/META-INF/NOTICE   |   6 +
 .../META-INF/licenses/LICENSE.JLargeArrays |  23 +++
 .../META-INF/licenses/LICENSE.JTransforms  |  23 +++
 11 files changed, 735 insertions(+)

diff --git 
a/flink-ml-examples/src/main/java/org/apache/flink/ml/examples/feature/DCTExample.java
 
b/flink-ml-examples/src/main/java/org/apache/flink/ml/examples/feature/DCTExample.java
new file mode 100644
index 000..2b3d683
--- /dev/null
+++ 
b/flink-ml-examples/src/main/java/org/apache/flink/ml/examples/feature/DCTExample.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.examples.feature;
+
+import org.apache.flink.ml.feature.dct.DCT;
+import org.apache.flink.ml.linalg.Vector;
+import org.apache.flink.ml.linalg.Vectors;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.CloseableIterator;
+
+import java.util.Arrays;
+import java.util.List;
+
+/** Simple program that creates a DCT instance and uses it for feature 
engineering. */
+public class DCTExample {
+public static void main(String[] args) {
+StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+
+// Generates input data.
+List<Vector> inputData =
+Arrays.asList(
+Vectors.dense(1.0, 1.0, 1.0, 1.0), Vectors.dense(1.0, 
0.0, -1.0, 0.0));
+Table inputTable = 
tEnv.fromDataStream(env.fromCollection(inputData)).as("input");
+
+// Creates a DCT object and initializes its parameters.
+DCT dct = new DCT();
+
+// Uses the DCT object for feature transformations.
+Table outputTable = dct.transform(inputTable)[0];
+
+// Extracts and displays the results.
+for (CloseableIterator<Row> it = outputTable.execute().collect(); it.hasNext(); ) {
+Row row = it.next();
+
+Vector inputValue = row.getFieldAs(dct.getInputCol());
+Vector outputValue = row.getFieldAs(dct.getOutputCol());
+
+System.out.printf("Input Value: %s\tOutput Value: %s\n", 
inputValue, outputValue);
+}
+}
+}
diff --git a/flink-ml-lib/pom.xml b/flink-ml-lib/pom.xml
index 9a0fc0b..42eff45 100644
--- a/flink-ml-lib/pom.xml
+++ b/flink-ml-lib/pom.xml
@@ -58,6 +58,12 @@ under the License.
   <scope>provided</scope>
 </dependency>
 
+<dependency>
+  <groupId>com.github.wendykierp</groupId>
+  <artifactId>JTransforms</artifactId>
+  <version>3.1</version>
+</dependency>
+
 <dependency>
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-table-planner-loader</artifactId>
@@ -101,6 +107,27 @@ under the License.
 
   
 
+  <plugin>
+    <groupId>org.apache.maven.plugins</groupId>
+    <artifactId>maven-shade-plugin</artifactId>
+    <executions>
+      <execution>
+        <id>shade-flink</id>
+        <phase>package</phase>
+        <goals>
+          <goal>shade</goal>
+        </goals>
+        <configuration>
+          <artifactSet>
+            <includes>
+              <include>com.github.wendykierp:JTransforms</include>
+              <include>pl.edu.icm:JLargeArrays</include>
+            </includes>
+          </artifactSet>

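For intuition about the transform behind the new stage, a minimal, dependency-free sketch of the type-II DCT; the real operator delegates to JTransforms (see the pom change above), and the orthonormal scaling used here is an assumption for illustration:

```java
/**
 * Standalone sketch of the type-II DCT for intuition about what the new
 * Transformer computes. The real operator delegates to JTransforms; the
 * orthonormal scaling below is an assumption for illustration only.
 */
public class DctSketch {
    static double[] dct2(double[] x) {
        int n = x.length;
        double[] out = new double[n];
        for (int k = 0; k < n; k++) {
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                sum += x[i] * Math.cos(Math.PI / n * (i + 0.5) * k);
            }
            // Orthonormal scaling: sqrt(1/n) for k == 0, sqrt(2/n) otherwise.
            out[k] = Math.sqrt((k == 0 ? 1.0 : 2.0) / n) * sum;
        }
        return out;
    }

    public static void main(String[] args) {
        // Same inputs as DCTExample above.
        System.out.println(java.util.Arrays.toString(dct2(new double[] {1.0, 1.0, 1.0, 1.0})));
        System.out.println(java.util.Arrays.toString(dct2(new double[] {1.0, 0.0, -1.0, 0.0})));
    }
}
```
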
[flink-table-store] branch master updated: [hotfix] Update architecture png

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 454d44fe [hotfix] Update architecture png
454d44fe is described below

commit 454d44fe8cadd202e8d3a18587d1ab890786dafd
Author: JingsongLi 
AuthorDate: Wed Aug 24 16:20:47 2022 +0800

[hotfix] Update architecture png
---
 docs/static/img/architecture.png | Bin 145608 -> 261342 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)

diff --git a/docs/static/img/architecture.png b/docs/static/img/architecture.png
index 749af219..a102dd51 100644
Binary files a/docs/static/img/architecture.png and 
b/docs/static/img/architecture.png differ



[flink-table-store] branch release-0.2 updated: [hotfix] Update architecture png

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.2
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.2 by this push:
 new 767b53ea [hotfix] Update architecture png
767b53ea is described below

commit 767b53ea2b5dfd93ba18bc6c42132da5546211e8
Author: JingsongLi 
AuthorDate: Wed Aug 24 16:20:47 2022 +0800

[hotfix] Update architecture png
---
 docs/static/img/architecture.png | Bin 145608 -> 261342 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)

diff --git a/docs/static/img/architecture.png b/docs/static/img/architecture.png
index 749af219..a102dd51 100644
Binary files a/docs/static/img/architecture.png and 
b/docs/static/img/architecture.png differ



[flink] branch master updated: [FLINK-28815][docs] Translate the Real Time Reporting with the Table API page into Chinese

2022-08-24 Thread hxb
This is an automated email from the ASF dual-hosted git repository.

hxb pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 221d70d9930 [FLINK-28815][docs] Translate the Real Time Reporting with 
the Table API page into Chinese
221d70d9930 is described below

commit 221d70d9930f72147422ea24b399f006ebbfb8d7
Author: zhenyu xing 
AuthorDate: Tue Aug 9 15:40:59 2022 +0800

[FLINK-28815][docs] Translate the Real Time Reporting with the Table API 
page into Chinese

This closes #20510.
---
 docs/content.zh/docs/try-flink/datastream.md |   6 +-
 docs/content.zh/docs/try-flink/table_api.md  | 161 +--
 2 files changed, 81 insertions(+), 86 deletions(-)

diff --git a/docs/content.zh/docs/try-flink/datastream.md 
b/docs/content.zh/docs/try-flink/datastream.md
index af84f58e8e0..387d88a46a8 100644
--- a/docs/content.zh/docs/try-flink/datastream.md
+++ b/docs/content.zh/docs/try-flink/datastream.md
@@ -111,9 +111,9 @@ $ mvn archetype:generate \
 
 {{< unstable >}}
 {{< hint warning >}}
-Maven 3.0 及更高版本,不再支持通过命令行指定仓库(-DarchetypeCatalog)。有关这个改动的详细信息,
-请参阅 [Maven 
官方文档](http://maven.apache.org/archetype/maven-archetype-plugin/archetype-repository.html)
-如果你希望使用快照仓库,则需要在 settings.xml 文件中添加一个仓库条目。例如:
+Maven 3.0 及更高版本,不再支持通过命令行指定仓库(-DarchetypeCatalog)。有关这个改动的详细信息,
+请参阅 [Maven 
官方文档](http://maven.apache.org/archetype/maven-archetype-plugin/archetype-repository.html)
+如果你希望使用快照仓库,则需要在 settings.xml 文件中添加一个仓库条目。例如:
 ```xml
 
   
diff --git a/docs/content.zh/docs/try-flink/table_api.md 
b/docs/content.zh/docs/try-flink/table_api.md
index 5153cb468d5..9456de8c4db 100644
--- a/docs/content.zh/docs/try-flink/table_api.md
+++ b/docs/content.zh/docs/try-flink/table_api.md
@@ -28,34 +28,34 @@ under the License.
 
 # 基于 Table API 实现实时报表
 
-Apache Flink offers a Table API as a unified, relational API for batch and 
stream processing, i.e., queries are executed with the same semantics on 
unbounded, real-time streams or bounded, batch data sets and produce the same 
results.
-The Table API in Flink is commonly used to ease the definition of data 
analytics, data pipelining, and ETL applications.
+Apache Flink 提供了 Table API 作为批流统一的关系型 
API。也就是说,在无界的实时流数据或者有界的批数据集上进行查询具有相同的语义,得到的结果一致。
+Flink 的 Table API 可以简化数据分析、构建数据流水线以及 ETL 应用的定义。
 
-## What Will You Be Building? 
+## 你接下来要搭建的是什么系统?
 
-In this tutorial, you will learn how to build a real-time dashboard to track 
financial transactions by account.
-The pipeline will read data from Kafka and write the results to MySQL 
visualized via Grafana.
+在本教程中,你将学习构建一个通过账户来追踪金融交易的实时看板。
+数据流水线为:先从 Kafka 中读取数据,再将结果写入到 MySQL 中,最后通过 Grafana 展示。
 
-## Prerequisites
+## 准备条件
 
-This walkthrough assumes that you have some familiarity with Java or Scala, 
but you should be able to follow along even if you come from a different 
programming language.
-It also assumes that you are familiar with basic relational concepts such as 
`SELECT` and `GROUP BY` clauses.
+我们默认你对 Java 或者 Scala 有一定了解,当然如果你使用的是其他编程语言,也可以继续学习。 
+同时也默认你了解基本的关系型概念,例如 SELECT 、GROUP BY 等语句。
 
-## Help, I’m Stuck! 
+## 困难求助
 
-If you get stuck, check out the [community support 
resources](https://flink.apache.org/community.html).
-In particular, Apache Flink's [user mailing 
list](https://flink.apache.org/community.html#mailing-lists) consistently ranks 
as one of the most active of any Apache project and a great way to get help 
quickly. 
+如果遇到问题,可以参考 [社区支持资源](https://flink.apache.org/community.html)。
+Flink 的 [用户邮件列表](https://flink.apache.org/community.html#mailing-lists) 是 
Apahe 项目中最活跃的一个,这也是快速寻求帮助的重要途径。
 
 {{< hint info >}}
-If running docker on Windows and your data generator container is failing to 
start, then please ensure that you're using the right shell.
-For example **docker-entrypoint.sh** for 
**table-walkthrough_data-generator_1** container requires bash.
-If unavailable, it will throw an error **standard_init_linux.go:211: exec user 
process caused "no such file or directory"**.
-A workaround is to switch the shell to **sh** on the first line of 
**docker-entrypoint.sh**.
+在 Windows 环境下,如果用来生成数据的 docker 容器启动失败,请检查使用的脚本是否正确。
+例如 **docker-entrypoint.sh** 是容器 **table-walkthrough_data-generator_1** 所需的 
bash 脚本。
+如果不可用,会报 **standard_init_linux.go:211: exec user process caused "no such file 
or directory"** 的错误。
+一种解决办法是在 **docker-entrypoint.sh** 的第一行将脚本执行器切换到 **sh**
 {{< /hint >}}
 
-## How To Follow Along
+## 如何跟着教程练习
 
-If you want to follow along, you will require a computer with: 
+本教程依赖如下运行环境: 
 
 * Java 11
 * Maven 
@@ -63,16 +63,14 @@ If you want to follow along, you will require a computer 
with:
 
 {{< unstable >}}
 {{< hint warning >}}
-**Attention:** The Apache Flink Docker images used for this playground are 
only available for released versions of Apache Flink.
+**注意:** 本文中使用的 Apache Flink Docker 镜像仅适用于 Apache Flink 发行版。
 
-Since you are currently 

[flink-table-store] branch master updated: [hotfix] Update Flink

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 048aab8d [hotfix] Update Flink
048aab8d is described below

commit 048aab8d92a441bfe63fa2707b7907db8b86aa87
Author: Jingsong Lee 
AuthorDate: Wed Aug 24 15:51:48 2022 +0800

[hotfix] Update Flink
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 2e2698ad..538b6ce9 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # Flink Table Store
 
-Flink Table Store is a unified streaming and batch store for building dynamic 
tables on Apache Flink.
+Flink Table Store is a data lake storage for streaming updates/deletes 
changelog ingestion and high-performance queries in real time.
 
 Flink Table Store is developed under the umbrella of [Apache 
Flink](https://flink.apache.org/).
 



[flink] branch master updated: [FLINK-28265][k8s] Make KubernetesStateHandleStore#addEntry idempotent

2022-08-24 Thread wangyang0918
This is an automated email from the ASF dual-hosted git repository.

wangyang0918 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new aae96d0c9d1 [FLINK-28265][k8s] Make 
KubernetesStateHandleStore#addEntry idempotent
aae96d0c9d1 is described below

commit aae96d0c9d1768c396bdf2ee6510677fbb8f317a
Author: wangyang0918 
AuthorDate: Mon Aug 15 23:03:19 2022 +0800

[FLINK-28265][k8s] Make KubernetesStateHandleStore#addEntry idempotent

This closes #20590.
---
 .../KubernetesStateHandleStore.java| 16 +++--
 .../KubernetesStateHandleStoreTest.java| 75 ++
 2 files changed, 87 insertions(+), 4 deletions(-)

diff --git 
a/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/highavailability/KubernetesStateHandleStore.java
 
b/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/highavailability/KubernetesStateHandleStore.java
index 0716b58ec34..42f07003207 100644
--- 
a/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/highavailability/KubernetesStateHandleStore.java
+++ 
b/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/highavailability/KubernetesStateHandleStore.java
@@ -648,10 +648,12 @@ public class KubernetesStateHandleStore
 private Optional<KubernetesConfigMap> addEntry(
 KubernetesConfigMap configMap, String key, byte[] 
serializedStateHandle)
 throws Exception {
-final String content = configMap.getData().get(key);
-if (content != null) {
+final String oldBase64Content = configMap.getData().get(key);
+final String newBase64Content = toBase64(serializedStateHandle);
+if (oldBase64Content != null) {
 try {
-final StateHandleWithDeleteMarker stateHandle = 
deserializeStateHandle(content);
+final StateHandleWithDeleteMarker stateHandle =
+deserializeStateHandle(oldBase64Content);
 if (stateHandle.isMarkedForDeletion()) {
 // This might be a left-over after the fail-over. As the 
remove operation is
 // idempotent let's try to finish it.
@@ -660,6 +662,12 @@ public class KubernetesStateHandleStore
 "Unable to remove the marked as deleting 
entry.");
 }
 } else {
+// It could happen that the kubernetes client retries a 
transaction that has
+// already succeeded due to network issues. So we simply 
ignore when the
+// new content is same as the existing one.
+if (oldBase64Content.equals(newBase64Content)) {
+return Optional.of(configMap);
+}
 throw getKeyAlreadyExistException(key);
 }
 } catch (IOException e) {
@@ -668,7 +676,7 @@ public class KubernetesStateHandleStore
 logInvalidEntry(key, configMapName, e);
 }
 }
-configMap.getData().put(key, toBase64(serializedStateHandle));
+configMap.getData().put(key, newBase64Content);
 return Optional.of(configMap);
 }
 
diff --git 
a/flink-kubernetes/src/test/java/org/apache/flink/kubernetes/highavailability/KubernetesStateHandleStoreTest.java
 
b/flink-kubernetes/src/test/java/org/apache/flink/kubernetes/highavailability/KubernetesStateHandleStoreTest.java
index e246dda45a9..2d58f93a8fe 100644
--- 
a/flink-kubernetes/src/test/java/org/apache/flink/kubernetes/highavailability/KubernetesStateHandleStoreTest.java
+++ 
b/flink-kubernetes/src/test/java/org/apache/flink/kubernetes/highavailability/KubernetesStateHandleStoreTest.java
@@ -19,6 +19,7 @@
 package org.apache.flink.kubernetes.highavailability;
 
 import org.apache.flink.api.common.JobID;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
 import 
org.apache.flink.kubernetes.highavailability.KubernetesStateHandleStore.StateHandleWithDeleteMarker;
 import org.apache.flink.kubernetes.kubeclient.FlinkKubeClient;
 import org.apache.flink.kubernetes.kubeclient.resources.KubernetesConfigMap;
@@ -27,10 +28,13 @@ import 
org.apache.flink.runtime.persistence.PossibleInconsistentStateException;
 import org.apache.flink.runtime.persistence.StateHandleStore;
 import org.apache.flink.runtime.persistence.StringResourceVersion;
 import org.apache.flink.runtime.persistence.TestingLongStateHandleHelper;
+import org.apache.flink.util.ExceptionUtils;
 import org.apache.flink.util.FlinkRuntimeException;
+import org.apache.flink.util.concurrent.Executors;
 import org.apache.flink.util.concurrent.FutureUtils;
 import org.apache.flink.util.function.FunctionUtils;
 
+import io.fabric8.kubernetes.client.KubernetesClientException;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
 
@@ -39,6 +43,10 @@ import 

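To make the idempotency rule concrete, a minimal standalone sketch; it uses a plain `Map` in place of a Kubernetes ConfigMap, hypothetical names, and omits the delete-marker handling present in the real store:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

/**
 * Standalone sketch of the idempotency rule from the addEntry change:
 * re-adding identical content under an existing key is treated as a
 * retried, already-successful transaction and ignored, while different
 * content still fails. A plain Map stands in for the ConfigMap, and the
 * real store additionally handles entries marked for deletion.
 */
public class IdempotentAddSketch {
    private final Map<String, String> data = new HashMap<>();

    void addEntry(String key, byte[] serializedStateHandle) {
        String newBase64 = Base64.getEncoder().encodeToString(serializedStateHandle);
        String oldBase64 = data.get(key);
        if (oldBase64 != null) {
            if (oldBase64.equals(newBase64)) {
                return; // Same content: a client retry after a network glitch.
            }
            throw new IllegalStateException("Key " + key + " already exists.");
        }
        data.put(key, newBase64);
    }

    public static void main(String[] args) {
        IdempotentAddSketch store = new IdempotentAddSketch();
        byte[] state = "checkpoint-17".getBytes(StandardCharsets.UTF_8);
        store.addEntry("jobgraph", state);
        store.addEntry("jobgraph", state); // no exception: idempotent retry
        try {
            store.addEntry("jobgraph", "other".getBytes(StandardCharsets.UTF_8));
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```
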
[flink-kubernetes-operator] branch main updated: [docs] Add an example about how to enable Prometheus for FlinkDeployment

2022-08-24 Thread gyfora
This is an automated email from the ASF dual-hosted git repository.

gyfora pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new cecde2c4 [docs] Add an example about how to enable Prometheus for 
FlinkDeployment
cecde2c4 is described below

commit cecde2c4254964e843889f5a025f8105e00507bc
Author: Xin Hao 
AuthorDate: Wed Aug 24 15:40:58 2022 +0800

[docs] Add an example about how to enable Prometheus for FlinkDeployment
---
 docs/content/docs/operations/metrics-logging.md | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/docs/content/docs/operations/metrics-logging.md 
b/docs/content/docs/operations/metrics-logging.md
index 89cfcab0..d9dabeed 100644
--- a/docs/content/docs/operations/metrics-logging.md
+++ b/docs/content/docs/operations/metrics-logging.md
@@ -113,7 +113,7 @@ defaultConfiguration:
 kubernetes.operator.metrics.reporter.prom.class: 
org.apache.flink.metrics.prometheus.PrometheusReporter
 kubernetes.operator.metrics.reporter.prom.port: 
 ```
-Some metric reporters, including the Prometheus, needs a port to be exposed on the container. This can be achieved be defining a value for the otherwise empty `metrics.port` variable.
+Some metric reporters, including Prometheus, need a port to be exposed on the container. This can be achieved by defining a value for the otherwise empty `metrics.port` variable.
 Either in the `values.yaml` file:
 ```yaml
 metrics:
@@ -194,3 +194,15 @@ spec:
   rootLogger.appenderRef.file.ref = LogFile
   ...
 ```
+
+### FlinkDeployment Prometheus Configuration
+
+The following example shows how to enable the Prometheus metric reporter for the FlinkDeployment:
+
+```yaml
+spec:
+  ...
+  flinkConfiguration:
+    metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
+    metrics.reporter.prom.port: 9249-9250
+```



[flink-table-store] annotated tag release-0.2.0-rc3 updated (6970f8f8 -> 9051587e)

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a change to annotated tag release-0.2.0-rc3
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


*** WARNING: tag release-0.2.0-rc3 was modified! ***

from 6970f8f8 (commit)
  to 9051587e (tag)
 tagging 6970f8f8d1e21aa67a81d44de11e47ca6a4f7a8c (commit)
 replaces release-0.2.0-rc2
  by JingsongLi
  on Wed Aug 24 15:26:35 2022 +0800

- Log -----------------------------------------------------------------
release-0.2.0-rc3
-----BEGIN PGP SIGNATURE-----

iHUEABYKAB0WIQQsK2plOwcIa2XkNp98diReCjGBUAUCYwXSqwAKCRB8diReCjGB
UKXeAP97UHKyWOgDGMThGtScO0kdcPZm3r3jsQD3btpPquqTrgEAqGAFTITOp7hu
/Cdov1sEAtK+bMBiQ7JzwTEx5nhxggY=
=Pnin
-----END PGP SIGNATURE-----
-----------------------------------------------------------------------


No new revisions were added by this update.

Summary of changes:



[flink] branch master updated (cb507651368 -> 4409d96514b)

2022-08-24 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from cb507651368 [FLINK-28735][scripts] Deprecate jobmanager.sh host/port parameters
 add 90d8de66b29 [FLINK-29030][core] Add constant for generic type doc reference
 add bbf74f2e8ad [FLINK-29030][core] Note that generic types can affect schema evolution
 add 01fb742d094 [FLINK-29030][core] Log a message if any tuple/pojo field is handled as generic type
 add 4409d96514b [FLINK-29016][docs] Clarify Kryo limitations w.r.t. data-structures

No new revisions were added by this update.

Summary of changes:
 .../serialization/schema_evolution.md  | 27 ++
 .../serialization/schema_evolution.md  |  6 +++
 .../flink/api/java/typeutils/TypeExtractor.java| 43 --
 3 files changed, 57 insertions(+), 19 deletions(-)
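
A practical way to surface the fallback these commits now log about: `ExecutionConfig#disableGenericTypes()` makes serializer creation fail instead of silently routing a field through Kryo. A minimal sketch, where the `Event` POJO and its `Locale` field are illustrative examples rather than code from these commits:

```java
import java.util.Locale;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class GenericTypeFallbackCheck {

    /** A valid POJO overall, but Locale has no dedicated Flink serializer. */
    public static class Event {
        public long id;
        public Locale locale; // analyzed as a generic type, i.e. handled by Kryo
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Turn the silent Kryo fallback into a hard error: creating a
        // serializer for the generic field now throws an exception.
        env.getConfig().disableGenericTypes();

        env.fromElements(new Event()).print(); // fails fast here instead of at runtime
        env.execute("generic-type fallback check");
    }
}
```
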



svn commit: r56476 - /dev/flink/flink-table-store-0.2.0-rc3/

2022-08-24 Thread lzljs3620320
Author: lzljs3620320
Date: Wed Aug 24 07:39:19 2022
New Revision: 56476

Log:
Apache Flink Table Store, version 0.2.0, release candidate 3

Added:
dev/flink/flink-table-store-0.2.0-rc3/
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-dist-0.2.0.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-dist-0.2.0.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-dist-0.2.0.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-dist-0.2.0_1.14.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-dist-0.2.0_1.14.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-dist-0.2.0_1.14.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.1.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.1.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.1.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.2.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.2.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.2.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.3.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.3.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-catalog-0.2.0_2.3.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.1.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.1.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.1.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.2.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.2.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.2.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.3.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.3.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-hive-connector-0.2.0_2.3.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-spark-0.2.0.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-spark-0.2.0.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-spark-0.2.0.jar.sha512
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-spark2-0.2.0.jar   (with props)
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-spark2-0.2.0.jar.asc
dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-spark2-0.2.0.jar.sha512

Added: dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz
==============================================================================
Binary file - no diff available.

Propchange: dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz.asc
==============================================================================
--- dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz.asc (added)
+++ dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz.asc Wed Aug 24 07:39:19 2022
@@ -0,0 +1,7 @@
+-----BEGIN PGP SIGNATURE-----
+
+iHUEABYKAB0WIQQsK2plOwcIa2XkNp98diReCjGBUAUCYwXSvQAKCRB8diReCjGB
+UHjAAQCVJBa78NvzwLLnNsaYZHQammFYVFlWFBEj6eLTx9BcFAD/QyYppklwFba7
+MYln5NhVbkabjDK0S7HftkGdFshdJws=
+=U6P+
+-----END PGP SIGNATURE-----

Added: dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz.sha512
==============================================================================
--- dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz.sha512 (added)
+++ dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-0.2.0-src.tgz.sha512 Wed Aug 24 07:39:19 2022
@@ -0,0 +1 @@
+213e2346e13126d7a2e03f3c430b1c1fec345baa966d5ea24a42ca55e33d751c604c5cd0c7c76c45477ed5a9c9660350b5eb04a2fb4fd22fa0ea8c19d9093f72  flink-table-store-0.2.0-src.tgz
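
A side note for anyone validating these staged artifacts: each `.sha512` file holds `<hex digest>  <file name>`, so verification is just recomputing the digest and comparing. A rough JDK-only sketch, assuming the artifact and its checksum file were downloaded into the working directory:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha512Verify {
    public static void main(String[] args) throws Exception {
        String artifact = "flink-table-store-0.2.0-src.tgz"; // downloaded release file
        // Checksum files are "<hex digest>  <file name>"; keep the digest only.
        String expected =
                Files.readString(Paths.get(artifact + ".sha512")).trim().split("\\s+")[0];

        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] digest = md.digest(Files.readAllBytes(Paths.get(artifact)));

        StringBuilder actual = new StringBuilder();
        for (byte b : digest) {
            actual.append(String.format("%02x", b)); // hex-encode each byte
        }
        System.out.println(expected.equalsIgnoreCase(actual.toString()) ? "OK" : "MISMATCH");
    }
}
```
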

Added: dev/flink/flink-table-store-0.2.0-rc3/flink-table-store-dist-0.2.0.jar
==============================================================================
Binary file - no diff available.

Propchange: 

[flink] branch master updated (254b276c79a -> cb507651368)

2022-08-24 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 254b276c79a [FLINK-29041][tests] Add utility to test POJO compliance without any Kryo usage
 add cb507651368 [FLINK-28735][scripts] Deprecate jobmanager.sh host/port parameters

No new revisions were added by this update.

Summary of changes:
 .../docs/deployment/resource-providers/standalone/overview.md   | 2 +-
 flink-dist/src/main/flink-bin/bin/jobmanager.sh | 6 +++---
 .../java/org/apache/flink/runtime/entrypoint/ClusterEntrypoint.java | 6 ++
 .../apache/flink/runtime/entrypoint/parser/CommandLineOptions.java  | 4 
 4 files changed, 14 insertions(+), 4 deletions(-)



[flink-kubernetes-operator] branch main updated: [FLINK-28554] Add subPaths for conf files to allow readOnlyRootFilesystem operation

2022-08-24 Thread gyfora
This is an automated email from the ASF dual-hosted git repository.

gyfora pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new def69e4e [FLINK-28554] Add subPaths for conf files to allow readOnlyRootFilesystem operation
def69e4e is described below

commit def69e4e11bd95ee6cff813deb962afa70282e11
Author: Tim 
AuthorDate: Wed Aug 24 09:35:36 2022 +0200

[FLINK-28554] Add subPaths for conf files to allow readOnlyRootFilesystem operation
---
 .../templates/flink-operator.yaml  | 18 --
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/helm/flink-kubernetes-operator/templates/flink-operator.yaml b/helm/flink-kubernetes-operator/templates/flink-operator.yaml
index ff8b8c9b..2f4fc51c 100644
--- a/helm/flink-kubernetes-operator/templates/flink-operator.yaml
+++ b/helm/flink-kubernetes-operator/templates/flink-operator.yaml
@@ -91,7 +91,14 @@ spec:
 {{- toYaml .Values.operatorSecurityContext | nindent 12 }}
   volumeMounts:
 - name: flink-operator-config-volume
-  mountPath: /opt/flink/conf
+  mountPath: /opt/flink/conf/flink-conf.yaml
+  subPath: flink-conf.yaml
+- name: flink-operator-config-volume
+  mountPath: /opt/flink/conf/log4j-operator.properties
+  subPath: log4j-operator.properties
+- name: flink-operator-config-volume
+  mountPath: /opt/flink/conf/log4j-console.properties
+  subPath: log4j-console.properties
 {{- if .Values.operatorVolumeMounts.create }}
 {{- toYaml .Values.operatorVolumeMounts.data | nindent 12 }}
 {{- end }}
@@ -149,7 +156,14 @@ spec:
 mountPath: "/certs"
 readOnly: true
   - name: flink-operator-config-volume
-mountPath: /opt/flink/conf
+mountPath: /opt/flink/conf/flink-conf.yaml
+subPath: flink-conf.yaml
+  - name: flink-operator-config-volume
+mountPath: /opt/flink/conf/log4j-operator.properties
+subPath: log4j-operator.properties
+  - name: flink-operator-config-volume
+mountPath: /opt/flink/conf/log4j-console.properties
+subPath: log4j-console.properties
 {{- end }}
   volumes:
 - name: flink-operator-config-volume



[flink] branch master updated: [FLINK-29041][tests] Add utility to test POJO compliance without any Kryo usage

2022-08-24 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 254b276c79a [FLINK-29041][tests] Add utility to test POJO compliance without any Kryo usage
254b276c79a is described below

commit 254b276c79a5adce21269fae6722e3bf3ac15b78
Author: Chesnay Schepler 
AuthorDate: Fri Aug 19 10:27:13 2022 +0200

[FLINK-29041][tests] Add utility to test POJO compliance without any Kryo usage
---
 .../serialization/types_serialization.md   |  1 +
 .../serialization/types_serialization.md   |  1 +
 .../java/org/apache/flink/types/PojoTestUtils.java | 36 ++
 .../org/apache/flink/types/PojoTestUtilsTest.java  | 25 +++
 4 files changed, 63 insertions(+)

diff --git a/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md b/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
index 072e7ebaf97..4d4f1456c2b 100644
--- a/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
+++ b/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
@@ -116,6 +116,7 @@ You can also register your own custom serializer if required; see [Serialization
 Flink analyzes the structure of POJO types, i.e., it learns about the fields of a POJO. As a result POJO types are easier to use than general types. Moreover, Flink can process POJOs more efficiently than general types.
 
 You can test whether your class adheres to the POJO requirements via `org.apache.flink.types.PojoTestUtils#assertSerializedAsPojo()` from the `flink-test-utils`.
+If you additionally want to ensure that no field of the POJO will be serialized with Kryo, use `assertSerializedAsPojoWithoutKryo()` instead.
 
 The following example shows a simple POJO with two public fields.
 
diff --git a/docs/content/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md b/docs/content/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
index d8563c53ac3..34d21b5e211 100644
--- a/docs/content/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
+++ b/docs/content/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
@@ -117,6 +117,7 @@ You can also register your own custom serializer if required; see [Serialization
 Flink analyzes the structure of POJO types, i.e., it learns about the fields of a POJO. As a result POJO types are easier to use than general types. Moreover, Flink can process POJOs more efficiently than general types.
 
 You can test whether your class adheres to the POJO requirements via `org.apache.flink.types.PojoTestUtils#assertSerializedAsPojo()` from the `flink-test-utils`.
+If you additionally want to ensure that no field of the POJO will be serialized with Kryo, use `assertSerializedAsPojoWithoutKryo()` instead.
 
 The following example shows a simple POJO with two public fields.
 
diff --git a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/types/PojoTestUtils.java b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/types/PojoTestUtils.java
index dbae4ed3a22..2cb0830311f 100644
--- a/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/types/PojoTestUtils.java
+++ b/flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/types/PojoTestUtils.java
@@ -34,6 +34,9 @@ public class PojoTestUtils {
      * {@link PojoSerializer}, as documented <a href="https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/fault-tolerance/serialization/types_serialization/#pojos">here</a>.
      *
+     * <p>Note that this check will succeed even if the Pojo is partially serialized with Kryo. If
+     * this is not desired, use {@link #assertSerializedAsPojoWithoutKryo(Class)} instead.
+     *
      * @param clazz class to analyze
      * @param <T> class type
      * @throws AssertionError if instances of the class cannot be serialized as a POJO
@@ -52,4 +55,37 @@ public class PojoTestUtils {
                         TypeExtractor.class.getCanonicalName())
                 .isInstanceOf(PojoSerializer.class);
     }
+
+    /**
+     * Verifies that instances of the given class fulfill all conditions to be serialized with the
+     * {@link PojoSerializer}, as documented <a href="https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/fault-tolerance/serialization/types_serialization/#pojos">here</a>,
+     * without any field being serialized with Kryo.
+     *
+     * @param clazz class to analyze
+     * @param <T> class type
+     * @throws AssertionError if instances of the class cannot be serialized as a POJO or required
+     *     Kryo for one or more fields
+     */
+    public static <T> void 
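
The rest of the method body is cut off by the digest above, but the two assertions it introduces can be shown in use. A minimal test sketch, where the `WordCount` POJO is a made-up example and the two assertions are the `flink-test-utils` methods this commit adds:

```java
import org.apache.flink.types.PojoTestUtils;

import org.junit.jupiter.api.Test;

class WordCountPojoTest {

    /** Public fields plus the implicit no-arg constructor satisfy Flink's POJO rules. */
    public static class WordCount {
        public String word;
        public long count;
    }

    @Test
    void serializesAsPojo() {
        // Fails if Flink would not pick the PojoSerializer for this class.
        PojoTestUtils.assertSerializedAsPojo(WordCount.class);

        // Stricter variant from this commit: also fails if any individual
        // field would still fall back to Kryo.
        PojoTestUtils.assertSerializedAsPojoWithoutKryo(WordCount.class);
    }
}
```
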

[flink-table-store] branch master updated: [hotfix] Remove hive catalog in spark3

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 100a5050 [hotfix] Remove hive catalog in spark3
100a5050 is described below

commit 100a50502e68b15d4eaaa9465279efad4026360c
Author: JingsongLi 
AuthorDate: Wed Aug 24 15:25:24 2022 +0800

[hotfix] Remove hive catalog in spark3
---
 flink-table-store-spark/pom.xml | 13 -
 1 file changed, 13 deletions(-)

diff --git a/flink-table-store-spark/pom.xml b/flink-table-store-spark/pom.xml
index 61877397..06cf79d3 100644
--- a/flink-table-store-spark/pom.xml
+++ b/flink-table-store-spark/pom.xml
@@ -45,18 +45,6 @@ under the License.
             <version>${project.version}</version>
         </dependency>
 
-        <dependency>
-            <groupId>org.apache.flink</groupId>
-            <artifactId>flink-table-store-hive-catalog</artifactId>
-            <version>${project.version}</version>
-            <exclusions>
-                <exclusion>
-                    <groupId>*</groupId>
-                    <artifactId>*</artifactId>
-                </exclusion>
-            </exclusions>
-        </dependency>
-
         <dependency>
             <groupId>org.apache.spark</groupId>
             <artifactId>spark-sql_2.12</artifactId>
@@ -137,7 +125,6 @@ under the License.
                             <artifactSet>
                                 <includes>
                                     <include>org.apache.flink:flink-table-store-shade</include>
-                                    <include>org.apache.flink:flink-table-store-hive-catalog</include>
 
 
 



[flink-table-store] branch release-0.2 updated: [hotfix] Remove hive catalog in spark3

2022-08-24 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.2
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.2 by this push:
 new 6970f8f8 [hotfix] Remove hive catalog in spark3
6970f8f8 is described below

commit 6970f8f8d1e21aa67a81d44de11e47ca6a4f7a8c
Author: JingsongLi 
AuthorDate: Wed Aug 24 15:25:24 2022 +0800

[hotfix] Remove hive catalog in spark3
---
 flink-table-store-spark/pom.xml | 13 -
 1 file changed, 13 deletions(-)

diff --git a/flink-table-store-spark/pom.xml b/flink-table-store-spark/pom.xml
index 2c357887..0461e22b 100644
--- a/flink-table-store-spark/pom.xml
+++ b/flink-table-store-spark/pom.xml
@@ -45,18 +45,6 @@ under the License.
             <version>${project.version}</version>
         </dependency>
 
-        <dependency>
-            <groupId>org.apache.flink</groupId>
-            <artifactId>flink-table-store-hive-catalog</artifactId>
-            <version>${project.version}</version>
-            <exclusions>
-                <exclusion>
-                    <groupId>*</groupId>
-                    <artifactId>*</artifactId>
-                </exclusion>
-            </exclusions>
-        </dependency>
-
         <dependency>
             <groupId>org.apache.spark</groupId>
             <artifactId>spark-sql_2.12</artifactId>
@@ -137,7 +125,6 @@ under the License.
                             <artifactSet>
                                 <includes>
                                     <include>org.apache.flink:flink-table-store-shade</include>
-                                    <include>org.apache.flink:flink-table-store-hive-catalog</include>
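
One way to confirm the effect of dropping the include above is to list the shaded jar and check that no hive-catalog classes remain. A hedged sketch using the JDK's JarFile; the jar path and the `org/apache/flink/table/store/hive/` package prefix are assumptions, not taken from this commit:

```java
import java.util.jar.JarFile;

public class ShadedJarCheck {
    public static void main(String[] args) throws Exception {
        // Assumed local build output path; adjust to the actual artifact.
        try (JarFile jar =
                new JarFile("flink-table-store-spark/target/flink-table-store-spark-0.2.0.jar")) {
            // Scan every entry for the (assumed) hive-catalog package prefix.
            boolean bundled = jar.stream()
                    .anyMatch(e -> e.getName().startsWith("org/apache/flink/table/store/hive/"));
            System.out.println(bundled
                    ? "hive catalog classes are still bundled"
                    : "hive catalog classes are no longer bundled");
        }
    }
}
```
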