[flink] branch master updated: [FLINK-28060][kafka] Bump Kafka to 3.2.1

2022-08-10 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new bc9b401ed1f [FLINK-28060][kafka] Bump Kafka to 3.2.1
bc9b401ed1f is described below

commit bc9b401ed1f2e7257c7b44c9838e34ede9c52ed5
Author: Chesnay Schepler 
AuthorDate: Wed Aug 10 12:13:55 2022 +0200

[FLINK-28060][kafka] Bump Kafka to 3.2.1
---
 flink-connectors/flink-connector-kafka/pom.xml| 2 +-
 .../flink-sql-connector-kafka/src/main/resources/META-INF/NOTICE  | 2 +-
 flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml| 4 ++--
 .../java/org/apache/flink/tests/util/kafka/SQLClientKafkaITCase.java  | 2 +-
 flink-end-to-end-tests/test-scripts/test_confluent_schema_registry.sh | 2 +-
 flink-end-to-end-tests/test-scripts/test_pyflink.sh   | 2 +-
 flink-end-to-end-tests/test-scripts/test_sql_client.sh| 2 +-
 .../src/main/resources/META-INF/NOTICE| 2 +-
 flink-formats/flink-avro-confluent-registry/pom.xml   | 2 +-
 9 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/flink-connectors/flink-connector-kafka/pom.xml 
b/flink-connectors/flink-connector-kafka/pom.xml
index c95bbdc45e4..1336e3e6230 100644
--- a/flink-connectors/flink-connector-kafka/pom.xml
+++ b/flink-connectors/flink-connector-kafka/pom.xml
@@ -35,7 +35,7 @@ under the License.
 	<packaging>jar</packaging>
 
 	<properties>
-		<kafka.version>3.1.1</kafka.version>
+		<kafka.version>3.2.1</kafka.version>
 	</properties>
 

diff --git 
a/flink-connectors/flink-sql-connector-kafka/src/main/resources/META-INF/NOTICE 
b/flink-connectors/flink-sql-connector-kafka/src/main/resources/META-INF/NOTICE
index bfbf75b85db..e760f56bf50 100644
--- 
a/flink-connectors/flink-sql-connector-kafka/src/main/resources/META-INF/NOTICE
+++ 
b/flink-connectors/flink-sql-connector-kafka/src/main/resources/META-INF/NOTICE
@@ -6,4 +6,4 @@ The Apache Software Foundation (http://www.apache.org/).
 
 This project bundles the following dependencies under the Apache Software 
License 2.0. (http://www.apache.org/licenses/LICENSE-2.0.txt)
 
-- org.apache.kafka:kafka-clients:3.1.1
+- org.apache.kafka:kafka-clients:3.2.1
diff --git a/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml 
b/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml
index efb7c72c86c..cf2c70e5fea 100644
--- a/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml
+++ b/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml
@@ -76,7 +76,7 @@ under the License.
 		<dependency>
 			<groupId>org.apache.kafka</groupId>
 			<artifactId>kafka-clients</artifactId>
-			<version>3.1.1</version>
+			<version>3.2.1</version>
 		</dependency>
 

[flink-kubernetes-operator] branch main updated: [FLINK-28869] Emit a warning event for ClusterDeploymentException

2022-08-10 Thread gyfora
This is an automated email from the ASF dual-hosted git repository.

gyfora pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 0267c916 [FLINK-28869] Emit a warning event for 
ClusterDeploymentException
0267c916 is described below

commit 0267c9166b3523edbb209112012af9f92a94fb16
Author: Xin Hao 
AuthorDate: Wed Aug 10 18:31:43 2022 +0800

[FLINK-28869] Emit a warning event for ClusterDeploymentException
---
 .../controller/FlinkDeploymentController.java|  6 ++
 .../controller/FlinkDeploymentControllerTest.java| 20 
 2 files changed, 26 insertions(+)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java
index a43d9ccd..48fad9d7 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java
@@ -116,6 +116,12 @@ public class FlinkDeploymentController
 } catch (DeploymentFailedException dfe) {
 handleDeploymentFailed(flinkApp, dfe);
 } catch (Exception e) {
+eventRecorder.triggerEvent(
+flinkApp,
+EventRecorder.Type.Warning,
+"ClusterDeploymentException",
+e.getMessage(),
+EventRecorder.Component.JobManagerDeployment);
 throw new ReconciliationException(e);
 }
 
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentControllerTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentControllerTest.java
index d80f8cb7..298da8fd 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentControllerTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentControllerTest.java
@@ -890,6 +890,26 @@ public class FlinkDeploymentControllerTest {
 assertTrue(event.getMessage().startsWith("Job parallelism "));
 }
 
+@Test
+public void testEventOfNonDeploymentFailedException() throws Exception {
+assertTrue(testController.events().isEmpty());
+var flinkDeployment = TestUtils.buildApplicationCluster();
+
+flinkService.setDeployFailure(true);
+try {
+testController.reconcile(flinkDeployment, context);
+fail();
+} catch (Exception expected) {
+}
+assertEquals(2, testController.events().size());
+
+var event = testController.events().remove();
+assertEquals("Submit", event.getReason());
+event = testController.events().remove();
+assertEquals("ClusterDeploymentException", event.getReason());
+assertEquals("Deployment failure", event.getMessage());
+}
+
 @Test
 public void cleanUpNewDeployment() {
 FlinkDeployment flinkDeployment = TestUtils.buildApplicationCluster();
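
Once this change is released, the warning surfaces as an ordinary Kubernetes event on the JobManager deployment. A minimal sketch of how one might watch for it with the official Kubernetes Python client (the deployment name `basic-example` and namespace `default` are illustrative assumptions, not taken from the commit):

```python
# Sketch: list Warning events emitted for a Flink JobManager deployment.
# Assumes the `kubernetes` Python client is installed and a kubeconfig
# with access to the cluster; "basic-example" is a made-up resource name.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

events = core_v1.list_namespaced_event(
    namespace="default",
    field_selector="involvedObject.name=basic-example,type=Warning",
)
for ev in events.items:
    # With this commit, a failed cluster deployment shows up here with
    # reason "ClusterDeploymentException" and the exception message.
    print(ev.reason, "-", ev.message)
```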



[flink] branch master updated: [hotfix][python] Move json/avro/csv SerializationSchema implementations into the corresponding files

2022-08-10 Thread dianfu
This is an automated email from the ASF dual-hosted git repository.

dianfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 07858933aac [hotfix][python] Move json/avro/csv SerializationSchema 
implementations into the corresponding files
07858933aac is described below

commit 07858933aac71715f9f39901cab7716d298dd28f
Author: Dian Fu 
AuthorDate: Wed Aug 10 11:29:52 2022 +0800

[hotfix][python] Move json/avro/csv SerializationSchema implementations 
into the corresponding files
---
 .../docs/connectors/datastream/filesystem.md   |   2 +-
 .../docs/connectors/datastream/formats/csv.md  |   4 +-
 .../python/datastream/intro_to_datastream_api.md   |   4 +-
 .../docs/connectors/datastream/filesystem.md   |   2 +-
 .../docs/connectors/datastream/formats/csv.md  |   4 +-
 .../python/datastream/intro_to_datastream_api.md   |   4 +-
 .../python/datastream/data_stream_job.py   |   3 +-
 flink-python/pyflink/common/__init__.py|  50 ++-
 flink-python/pyflink/common/io.py  |  32 ++
 flink-python/pyflink/common/serialization.py   | 345 ++---
 .../common/tests/test_serialization_schemas.py |  86 +
 flink-python/pyflink/common/utils.py   |  25 ++
 flink-python/pyflink/datastream/__init__.py|  17 +-
 .../pyflink/datastream/connectors/file_system.py   |  42 +--
 .../datastream/connectors/tests/test_kafka.py  |   7 +-
 .../datastream/connectors/tests/test_rabbitmq.py   |   3 +-
 flink-python/pyflink/datastream/formats/avro.py|  99 +-
 flink-python/pyflink/datastream/formats/csv.py | 170 --
 flink-python/pyflink/datastream/formats/json.py| 150 +
 flink-python/pyflink/datastream/formats/orc.py |  24 +-
 flink-python/pyflink/datastream/formats/parquet.py |  37 ++-
 .../pyflink/datastream/formats/tests/test_avro.py  |   6 +-
 .../pyflink/datastream/formats/tests/test_csv.py   |  56 +++-
 .../pyflink/datastream/formats/tests/test_json.py  |  57 
 .../datastream/formats/tests/test_parquet.py   |   4 +-
 .../datastream/stream_execution_environment.py |   7 +-
 .../tests/test_stream_execution_environment.py |   2 +-
 flink-python/pyflink/datastream/utils.py   |   9 -
 .../datastream/connectors/kafka_avro_format.py |   3 +-
 .../datastream/connectors/kafka_csv_format.py  |   4 +-
 .../datastream/connectors/kafka_json_format.py |   3 +-
 31 files changed, 709 insertions(+), 552 deletions(-)

diff --git a/docs/content.zh/docs/connectors/datastream/filesystem.md 
b/docs/content.zh/docs/connectors/datastream/filesystem.md
index d48e84f52d8..20d7a0bee38 100644
--- a/docs/content.zh/docs/connectors/datastream/filesystem.md
+++ b/docs/content.zh/docs/connectors/datastream/filesystem.md
@@ -570,7 +570,7 @@ data_stream = ...
 
 avro_type_info = GenericRecordAvroTypeInfo(schema)
 sink = FileSink \
-.for_bulk_format(OUTPUT_BASE_PATH, AvroWriters.for_generic_record(schema)) 
\
+.for_bulk_format(OUTPUT_BASE_PATH, 
AvroBulkWriters.for_generic_record(schema)) \
 .build()
 
 # 必须通过 map 操作来指定其 Avro 类型信息,用于数据的序列化
diff --git a/docs/content.zh/docs/connectors/datastream/formats/csv.md 
b/docs/content.zh/docs/connectors/datastream/formats/csv.md
index 568757f7137..2d3ef10455f 100644
--- a/docs/content.zh/docs/connectors/datastream/formats/csv.md
+++ b/docs/content.zh/docs/connectors/datastream/formats/csv.md
@@ -138,7 +138,7 @@ The corresponding CSV file:
 
 Similarly to the `TextLineInputFormat`, `CsvReaderFormat` can be used in both 
continuous and batch modes (see [TextLineInputFormat]({{< ref 
"docs/connectors/datastream/formats/text_files" >}}) for examples).
 
-For PyFlink users, `CsvBulkWriter` could be used to create `BulkWriterFactory` 
to write `Row` records to files in CSV format.
+For PyFlink users, `CsvBulkWriters` could be used to create 
`BulkWriterFactory` to write `Row` records to files in CSV format.
It should be noted that if the operator preceding the sink produces `RowData` 
records, e.g. a CSV source, they need to be converted to `Row` records before 
being written to the sink.
 ```python
 schema = CsvSchema.builder()
@@ -148,7 +148,7 @@ schema = CsvSchema.builder()
 .build()
 
 sink = FileSink.for_bulk_format(
-OUTPUT_DIR, CsvBulkWriter.for_schema(schema)).build()
+OUTPUT_DIR, CsvBulkWriters.for_schema(schema)).build()
 
 # If ds is a source stream producing RowData records, a map can be added to 
help convert the RowData records into Row records.
 ds.map(lambda e: e, output_type=schema.get_type_info()).sink_to(sink)
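
Pieced together, the fragment above corresponds to roughly the following self-contained job (a sketch only, not part of the commit; it assumes local execution and `/tmp/csv-out` as the output directory):

```python
# Minimal runnable sketch of the CsvBulkWriters usage described above.
from pyflink.common import Row
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.file_system import FileSink
from pyflink.datastream.formats.csv import CsvBulkWriters, CsvSchema
from pyflink.table import DataTypes

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

schema = CsvSchema.builder() \
    .add_number_column('id', number_type=DataTypes.BIGINT()) \
    .add_array_column('array', separator='#', element_type=DataTypes.INT()) \
    .set_column_separator(',') \
    .build()

sink = FileSink.for_bulk_format(
    '/tmp/csv-out', CsvBulkWriters.for_schema(schema)).build()

ds = env.from_collection(
    [Row(1, [1, 2]), Row(2, [3, 4])],
    type_info=schema.get_type_info())

# The map keeps the record type as Row, as recommended in the docs above.
ds.map(lambda e: e, output_type=schema.get_type_info()).sink_to(sink)

env.execute('csv_bulk_writers_example')
```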
diff --git 
a/docs/content.zh/docs/dev/python/datastream/intro_to_datastream_api.md 
b/docs/content.zh/docs/dev/python/datastream/intro_to_datastream_api.md
index 5eefc0376b2..d87e1f4ee2e 100644
--- a/docs/content.zh/docs/dev/python/datastream/intro_to_datastream_api.md
+++ b/docs/conten

[flink] branch master updated: [FLINK-28895][python] Perform RowRowConverter automatically when writing RowData into sink

2022-08-10 Thread dianfu
This is an automated email from the ASF dual-hosted git repository.

dianfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 6844be2128d [FLINK-28895][python] Perform RowRowConverter automatically 
when writing RowData into sink
6844be2128d is described below

commit 6844be2128dd4c6ec62fe6e0b5230d885393e1ce
Author: Juntao Hu 
AuthorDate: Wed Aug 10 10:21:38 2022 +0800

[FLINK-28895][python] Perform RowRowConverter automatically when writing 
RowData into sink

This closes #20525.
---
 .../docs/connectors/datastream/filesystem.md   | 23 +++---
 .../docs/connectors/datastream/formats/csv.md  | 15 +++
 .../docs/connectors/datastream/filesystem.md   | 23 +++---
 .../docs/connectors/datastream/formats/csv.md  |  7 +---
 flink-python/pom.xml   |  4 ++
 flink-python/pyflink/datastream/__init__.py|  2 +-
 .../pyflink/datastream/connectors/file_system.py   | 18 +++-
 flink-python/pyflink/datastream/formats/csv.py | 14 +--
 flink-python/pyflink/datastream/formats/orc.py | 46 +---
 flink-python/pyflink/datastream/formats/parquet.py | 49 ++
 .../pyflink/datastream/formats/tests/test_csv.py   |  2 +-
 .../pyflink/datastream/formats/tests/test_orc.py   | 32 --
 .../datastream/formats/tests/test_parquet.py   | 42 +++
 13 files changed, 106 insertions(+), 171 deletions(-)

diff --git a/docs/content.zh/docs/connectors/datastream/filesystem.md 
b/docs/content.zh/docs/connectors/datastream/filesystem.md
index 20d7a0bee38..80f49f4cb02 100644
--- a/docs/content.zh/docs/connectors/datastream/filesystem.md
+++ b/docs/content.zh/docs/connectors/datastream/filesystem.md
@@ -488,27 +488,22 @@ input.sinkTo(sink)
 {{< /tab >}}
 {{< /tabs >}}
 
-PyFlink 用户可以使用 `ParquetBulkWriter` 来创建一个将 `Row` 数据写入 Parquet 文件的 
`BulkWriterFactory` 。
+PyFlink 用户可以使用 `ParquetBulkWriters` 来创建一个将 `Row` 数据写入 Parquet 文件的 
`BulkWriterFactory` 。
 
 ```python
 row_type = DataTypes.ROW([
 DataTypes.FIELD('string', DataTypes.STRING()),
 DataTypes.FIELD('int_array', DataTypes.ARRAY(DataTypes.INT()))
 ])
-row_type_info = Types.ROW_NAMED(
-['string', 'int_array'],
-[Types.STRING(), Types.LIST(Types.INT())]
-)
+
 sink = FileSink.for_bulk_format(
-OUTPUT_DIR, ParquetBulkWriter.for_row_type(
+OUTPUT_DIR, ParquetBulkWriters.for_row_type(
 row_type,
 hadoop_config=Configuration(),
 utc_timestamp=True,
 )
 ).build()
-# 如果 ds 是一个输出类型为 RowData 的源数据源,可以使用一个 map 来转换为 Row 类型
-ds.map(lambda e: e, output_type=row_type_info).sink_to(sink)
-# 否则
+
 ds.sink_to(sink)
 ```
 
@@ -811,8 +806,7 @@ class PersonVectorizer(schema: String) extends 
Vectorizer[Person](schema) {
 {{< /tab >}}
 {{< /tabs >}}
 
-PyFlink 用户可以使用 `OrcBulkWriters.for_row_type` 来创建将 `Row` 数据写入 Orc 文件的 
`BulkWriterFactory` 。
-注意如果 sink 的前置算子的输出类型为 `RowData` ,例如 CSV source ,则需要先转换为 `Row` 类型。
+PyFlink 用户可以使用 `OrcBulkWriters` 来创建将数据写入 Orc 文件的 `BulkWriterFactory` 。
 
 {{< py_download_link "orc" >}}
 
@@ -821,10 +815,6 @@ row_type = DataTypes.ROW([
 DataTypes.FIELD('name', DataTypes.STRING()),
 DataTypes.FIELD('age', DataTypes.INT()),
 ])
-row_type_info = Types.ROW_NAMED(
-['name', 'age'],
-[Types.STRING(), Types.INT()]
-)
 
 sink = FileSink.for_bulk_format(
 OUTPUT_DIR,
@@ -835,9 +825,6 @@ sink = FileSink.for_bulk_format(
 )
 ).build()
 
-# 如果 ds 是产生 RowData 的数据源,可以使用一个 map 函数来指定其对应的 Row 类型。
-ds.map(lambda e: e, output_type=row_type_info).sink_to(sink)
-# 否则
 ds.sink_to(sink)
 ```
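
Condensed into one runnable sketch, the effect of this change for PyFlink users (assumptions: local execution, `/tmp/orc-out` as output path, and the Orc format jars on the classpath; none of this is verbatim from the commit) is that the explicit `ds.map(...)` conversion shown as removed above is no longer required, because `Row`/`RowData` conversion now happens automatically in the sink:

```python
# Sketch: writing Rows to Orc without a manual Row/RowData conversion step.
from pyflink.common import Configuration, Row, Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.file_system import FileSink
from pyflink.datastream.formats.orc import OrcBulkWriters
from pyflink.table import DataTypes

env = StreamExecutionEnvironment.get_execution_environment()

row_type = DataTypes.ROW([
    DataTypes.FIELD('name', DataTypes.STRING()),
    DataTypes.FIELD('age', DataTypes.INT()),
])

sink = FileSink.for_bulk_format(
    '/tmp/orc-out',
    OrcBulkWriters.for_row_type(row_type=row_type, writer_properties=Configuration())
).build()

ds = env.from_collection(
    [Row('ada', 36), Row('bob', 42)],
    type_info=Types.ROW_NAMED(['name', 'age'], [Types.STRING(), Types.INT()]))

# No ds.map(...) needed: the RowRowConverter is applied automatically
# even when the upstream operator produces RowData.
ds.sink_to(sink)

env.execute('orc_bulk_writers_example')
```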
 
diff --git a/docs/content.zh/docs/connectors/datastream/formats/csv.md 
b/docs/content.zh/docs/connectors/datastream/formats/csv.md
index 2d3ef10455f..94aedd824af 100644
--- a/docs/content.zh/docs/connectors/datastream/formats/csv.md
+++ b/docs/content.zh/docs/connectors/datastream/formats/csv.md
@@ -138,20 +138,17 @@ The corresponding CSV file:
 
 Similarly to the `TextLineInputFormat`, `CsvReaderFormat` can be used in both 
continuous and batch modes (see [TextLineInputFormat]({{< ref 
"docs/connectors/datastream/formats/text_files" >}}) for examples).
 
-For PyFlink users, `CsvBulkWriters` could be used to create 
`BulkWriterFactory` to write `Row` records to files in CSV format.
-It should be noted that if the operator preceding the sink produces `RowData` 
records, e.g. a CSV source, they need to be converted to `Row` records before 
being written to the sink.
+For PyFlink users, `CsvBulkWriters` could be used to create 
`BulkWriterFactory` to write records to files in CSV format.
+
 ```python
-schema = CsvSchema.builder()
-.add_number_column('id', number_type=DataTypes.BIGINT())
-.add_array_column('array', separator='#', element_type=DataTypes.INT())
-.set_column_separator(',')
+schema = CsvSchema.builder() \
+.add_number_column('id', number_type=DataTypes.BIGINT(

[flink] branch master updated: [FLINK-28904][python][docs] Add missing connector/format documentation

2022-08-10 Thread dianfu
This is an automated email from the ASF dual-hosted git repository.

dianfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 465db25502e [FLINK-28904][python][docs] Add missing connector/format 
documentation
465db25502e is described below

commit 465db25502e6e2c59e466b05dfd3bdb28919a765
Author: Juntao Hu 
AuthorDate: Wed Aug 10 16:43:01 2022 +0800

[FLINK-28904][python][docs] Add missing connector/format documentation

This closes #20535.
---
 .../docs/connectors/datastream/filesystem.md   | 70 ++
 .../docs/connectors/datastream/firehose.md |  2 +
 .../docs/connectors/datastream/formats/json.md | 32 +-
 .../connectors/datastream/formats/text_files.md| 24 
 .../docs/connectors/datastream/kinesis.md  |  2 +
 .../docs/connectors/datastream/pulsar.md   |  2 +
 .../docs/connectors/datastream/rabbitmq.md |  2 +
 .../docs/connectors/datastream/filesystem.md   | 70 ++
 .../content/docs/connectors/datastream/firehose.md |  2 +
 .../docs/connectors/datastream/formats/json.md | 32 +-
 .../connectors/datastream/formats/text_files.md| 24 
 docs/content/docs/connectors/datastream/kinesis.md |  2 +
 docs/content/docs/connectors/datastream/pulsar.md  |  2 +
 .../content/docs/connectors/datastream/rabbitmq.md |  2 +
 docs/data/sql_connectors.yml   | 17 ++
 .../pyflink/datastream/connectors/file_system.py   |  8 +--
 16 files changed, 287 insertions(+), 6 deletions(-)

diff --git a/docs/content.zh/docs/connectors/datastream/filesystem.md 
b/docs/content.zh/docs/connectors/datastream/filesystem.md
index 80f49f4cb02..8db49e928b0 100644
--- a/docs/content.zh/docs/connectors/datastream/filesystem.md
+++ b/docs/content.zh/docs/connectors/datastream/filesystem.md
@@ -74,6 +74,15 @@ FileSource.forRecordStreamFormat(StreamFormat,Path...);
 FileSource.forBulkFileFormat(BulkFormat,Path...);
 ```
 {{< /tab >}}
+{{< tab "Python" >}}
+```python
+# 从文件流中读取文件内容
+FileSource.for_record_stream_format(stream_format, *path)
+
+# 从文件中一次读取一批记录
+FileSource.for_bulk_file_format(bulk_format, *path)
+```
+{{< /tab >}}
 {{< /tabs >}}
 
 可以通过创建 `FileSource.FileSourceBuilder` 设置 File Source 的所有参数。
@@ -93,6 +102,13 @@ final FileSource source =
 .build();
 ```
 {{< /tab >}}
+{{< tab "Python" >}}
+```python
+source = FileSource.for_record_stream_format(...) \
+.monitor_continuously(Duration.of_millis(5)) \
+.build()
+```
+{{< /tab >}}
 {{< /tabs >}}
 
 
@@ -351,6 +367,19 @@ val sink: FileSink[String] = FileSink
 
 input.sinkTo(sink)
 
+```
+{{< /tab >}}
+{{< tab "Python" >}}
+```python
+data_stream = ...
+
+sink = FileSink \
+.for_row_format(OUTPUT_PATH, Encoder.simple_string_encoder("UTF-8")) \
+.with_rolling_policy(RollingPolicy.default_rolling_policy(
+part_size=1024 ** 3, rollover_interval=15 * 60 * 1000, 
inactivity_interval=5 * 60 * 1000)) \
+.build()
+
+data_stream.sink_to(sink)
 ```
 {{< /tab >}}
 {{< /tabs >}}
@@ -902,6 +931,10 @@ Flink 内置了两种 BucketAssigners:
 - `DateTimeBucketAssigner` :默认的基于时间的分配器
 - `BasePathBucketAssigner` :分配所有文件存储在基础路径上(单个全局桶)
 
+{{< hint info >}}
+PyFlink 只支持 `DateTimeBucketAssigner` 和 `BasePathBucketAssigner` 。
+{{< /hint >}}
+
 
 
 ### 滚动策略
@@ -915,6 +948,10 @@ Flink 内置了两种 RollingPolicies:
 - `DefaultRollingPolicy`
 - `OnCheckpointRollingPolicy`
 
+{{< hint info >}}
+PyFlink 只支持 `DefaultRollingPolicy` 和 `OnCheckpointRollingPolicy` 。
+{{< /hint >}}
+
 
 
 ### Part 文件生命周期
@@ -1033,6 +1070,22 @@ val sink = FileSink

 ```
 {{< /tab >}}
+{{< tab "Python" >}}
+```python
+config = OutputFileConfig \
+.builder() \
+.with_part_prefix("prefix") \
+.with_part_suffix(".ext") \
+.build()
+
+sink = FileSink \
+.for_row_format(OUTPUT_PATH, Encoder.simple_string_encoder("UTF-8")) \
+.with_bucket_assigner(BucketAssigner.base_path_bucket_assigner()) \
+.with_rolling_policy(RollingPolicy.on_checkpoint_rolling_policy()) \
+.with_output_file_config(config) \
+.build()
+```
+{{< /tab >}}
 {{< /tabs >}}
 
 
@@ -1078,6 +1131,19 @@ val fileSink: FileSink[Integer] =
 
 ```
 {{< /tab >}}
+{{< tab "Python" >}}
+```python
+file_sink = FileSink \
+.for_row_format(PATH, Encoder.simple_string_encoder()) \
+.enable_compact(
+FileCompactStrategy.builder()
+.set_size_threshold(1024)
+.enable_compaction_on_checkpoint(5)
+.build(),
+FileCompactor.concat_file_compactor()) \
+.build()
+```
+{{< /tab >}}
 {{< /tabs >}}
 
 这一功能开启后,在文件转为 `pending` 状态与文件最终提交之间会进行文件合并。这些 `pending` 状态的文件将首先被提交为一个以 `.` 开头的
@@ -1105,6 +1171,10 @@ val fileSink: FileSink[Integer] =
 **注意事项2** 如果启用了文件合并功能,文件可见的时间会被延长。
 {{< /hint >}}
 
+{{< hint info >}}
+PyFlink 只支持 `ConcatFileCompactor` 和 `IdenticalFileCompactor` 。
+{{< /hint >}}
+
 
 
 ### 重要注意
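
The Python tabs added throughout this commit compose into a single job. A consolidated sketch (paths, sizes, and intervals are illustrative assumptions, not values from the docs):

```python
# Sketch combining the FileSink options from the Python tabs above:
# rolling policy, output-file config, bucket assigner, and compaction.
from pyflink.common.serialization import Encoder
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.file_system import (
    BucketAssigner, FileCompactor, FileCompactStrategy, FileSink,
    OutputFileConfig, RollingPolicy)

env = StreamExecutionEnvironment.get_execution_environment()
env.enable_checkpointing(30 * 1000)  # compaction runs on pending files at checkpoints
ds = env.from_collection(['a', 'b', 'c'])

config = OutputFileConfig.builder() \
    .with_part_prefix('prefix') \
    .with_part_suffix('.ext') \
    .build()

sink = FileSink \
    .for_row_format('/tmp/file-sink-out', Encoder.simple_string_encoder('UTF-8')) \
    .with_bucket_assigner(BucketAssigner.base_path_bucket_assigner()) \
    .with_rolling_policy(RollingPolicy.on_checkpoint_rolling_policy()) \
    .with_output_file_config(config) \
    .enable_compact(
        FileCompactStrategy.builder()
            .set_size_threshold(1024)
            .enable_compaction_on_checkpoint(5)
            .build(),
        FileCompactor.concat_file_compactor()) \
    .build()

ds.sink_to(sink)
env.execute('file_sink_example')
```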

[flink-connector-elasticsearch] branch main updated: [FLINK-28904][python][docs] Add fat jar link for ES (#27)

2022-08-10 Thread dianfu
This is an automated email from the ASF dual-hosted git repository.

dianfu pushed a commit to branch main
in repository 
https://gitbox.apache.org/repos/asf/flink-connector-elasticsearch.git


The following commit(s) were added to refs/heads/main by this push:
 new 9bc0c89  [FLINK-28904][python][docs] Add fat jar link for ES (#27)
9bc0c89 is described below

commit 9bc0c896d0a662094efc4a20853f79eb90eb3116
Author: Juntao Hu 
AuthorDate: Wed Aug 10 19:47:53 2022 +0800

[FLINK-28904][python][docs] Add fat jar link for ES (#27)
---
 docs/content.zh/docs/connectors/datastream/elasticsearch.md | 2 ++
 docs/content/docs/connectors/datastream/elasticsearch.md| 2 ++
 2 files changed, 4 insertions(+)

diff --git a/docs/content.zh/docs/connectors/datastream/elasticsearch.md 
b/docs/content.zh/docs/connectors/datastream/elasticsearch.md
index 1f7106d..80dd9d0 100644
--- a/docs/content.zh/docs/connectors/datastream/elasticsearch.md
+++ b/docs/content.zh/docs/connectors/datastream/elasticsearch.md
@@ -51,6 +51,8 @@ under the License.
   
 
 
+{{< py_download_link "elastic" >}}
+
 请注意,流连接器目前不是二进制发行版的一部分。
 有关如何将程序和用于集群执行的库一起打包,参考[此文档]({{< ref "docs/dev/configuration/overview" >}})。
 
diff --git a/docs/content/docs/connectors/datastream/elasticsearch.md 
b/docs/content/docs/connectors/datastream/elasticsearch.md
index 34bdfb3..77b85e5 100644
--- a/docs/content/docs/connectors/datastream/elasticsearch.md
+++ b/docs/content/docs/connectors/datastream/elasticsearch.md
@@ -53,6 +53,8 @@ of the Elasticsearch installation:
   
 
 
+{{< py_download_link "elastic" >}}
+
 Note that the streaming connectors are currently not part of the binary
 distribution. See [here]({{< ref "docs/dev/configuration/overview" >}}) for 
information
 about how to package the program with the libraries for cluster execution.
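
For PyFlink users, the point of the new download link is that this fat jar has to be shipped with the Python job. A rough sketch of end-to-end usage (the PyFlink Elasticsearch 7 builder API and all names and paths here are assumptions for illustration, not taken from this commit):

```python
# Sketch: a PyFlink job writing to Elasticsearch 7 using the connector fat jar.
# Assumptions: ES reachable at localhost:9200; the jar path is a placeholder.
from pyflink.common import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.elasticsearch import (
    Elasticsearch7SinkBuilder, ElasticsearchEmitter)

env = StreamExecutionEnvironment.get_execution_environment()
# Ship the downloaded fat jar with the job.
env.add_jars('file:///path/to/flink-sql-connector-elasticsearch7.jar')

ds = env.from_collection(
    [{'name': 'ada', 'id': '1'}, {'name': 'bob', 'id': '2'}],
    type_info=Types.MAP(Types.STRING(), Types.STRING()))

es_sink = Elasticsearch7SinkBuilder() \
    .set_hosts(['localhost:9200']) \
    .set_emitter(ElasticsearchEmitter.static_index('my-index', 'id')) \
    .set_bulk_flush_max_actions(1) \
    .build()

ds.sink_to(es_sink)
env.execute('es_sink_example')
```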



[flink-kubernetes-operator] branch main updated: [FLINK-28694] Set pipeline.name to resource name by default for application deployments

2022-08-10 Thread gyfora
This is an automated email from the ASF dual-hosted git repository.

gyfora pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 0a1a68c9 [FLINK-28694] Set pipeline.name to resource name by default 
for application deployments
0a1a68c9 is described below

commit 0a1a68c9a88ad3055dcf6630e23ce89df7d23a97
Author: Nicholas Jiang 
AuthorDate: Wed Aug 10 08:27:27 2022 -0700

[FLINK-28694] Set pipeline.name to resource name by default for application 
deployments
---
 .../apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java| 3 +++
 .../flink/kubernetes/operator/config/FlinkConfigBuilderTest.java   | 2 ++
 2 files changed, 5 insertions(+)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java
index 706d1108..fff4b71d 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java
@@ -136,6 +136,9 @@ public class FlinkConfigBuilder {
 setDefaultConf(CANCEL_ENABLE, false);
 
 if (spec.getJob() != null) {
+// Set 'pipeline.name' to resource name by default for application 
deployments.
+setDefaultConf(PipelineOptions.NAME, clusterId);
+
 // With last-state upgrade mode, set the default value of
 // 'execution.checkpointing.interval'
 // to 5 minutes when HA is enabled.
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilderTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilderTest.java
index 81423097..a29018e3 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilderTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilderTest.java
@@ -121,6 +121,8 @@ public class FlinkConfigBuilderTest {
 KubernetesConfigOptions.ServiceExposedType.ClusterIP,
 
configuration.get(KubernetesConfigOptions.REST_SERVICE_EXPOSED_TYPE));
 Assertions.assertEquals(false, 
configuration.get(WebOptions.CANCEL_ENABLE));
+Assertions.assertEquals(
+flinkDeployment.getMetadata().getName(), 
configuration.get(PipelineOptions.NAME));
 
 FlinkDeployment deployment = 
ReconciliationUtils.clone(flinkDeployment);
 deployment



[flink] branch master updated (465db25502e -> 3268ec6a7ce)

2022-08-10 Thread roman
This is an automated email from the ASF dual-hosted git repository.

roman pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 465db25502e [FLINK-28904][python][docs] Add missing connector/format 
documentation
 add 3268ec6a7ce [FLINK-28898][state/changelog] Fix unstable 
ChangelogRecoverySwitchStateBackendITCase#testSwitchFromEnablingToDisablingWithRescalingOut

No new revisions were added by this update.

Summary of changes:
 .../test/checkpointing/ChangelogRecoveryITCaseBase.java  | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)



[flink] branch master updated (3268ec6a7ce -> dea5f09017a)

2022-08-10 Thread lindong
This is an automated email from the ASF dual-hosted git repository.

lindong pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 3268ec6a7ce [FLINK-28898][state/changelog] Fix unstable 
ChangelogRecoverySwitchStateBackendITCase#testSwitchFromEnablingToDisablingWithRescalingOut
 add dea5f09017a [FLINK-28900] Fix RecreateOnResetOperatorCoordinatorTest 
compilation failure

No new revisions were added by this update.

Summary of changes:
 .../coordination/RecreateOnResetOperatorCoordinatorTest.java| 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)



[flink] branch release-1.14 updated (c5108f6ab07 -> 0e19d8eded5)

2022-08-10 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a change to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git


from c5108f6ab07 [FLINK-28880][docs][cep] Fix wrong result of strict 
contiguity of looping patterns
 new 1c653846088 [FLINK-28094][kinesis] Removing references to Regions enum 
and instead using RegionUtils so that we include future AWS Regions as well
 new 0e19d8eded5 [FLINK-28094][kinesis][glue] Updating AWS SDK versions for 
Kinesis connectors and Glue Schema Registry formats

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 flink-connectors/flink-connector-kinesis/pom.xml   |  44 +++
 .../kinesis/proxy/DynamoDBStreamsProxy.java|   6 +-
 .../streaming/connectors/kinesis/util/AWSUtil.java |   3 +-
 .../src/main/resources/META-INF/NOTICE |  76 ++--
 .../flink-glue-schema-registry-test/pom.xml| 133 +
 .../flink-avro-glue-schema-registry/pom.xml| 110 +
 6 files changed, 149 insertions(+), 223 deletions(-)



[flink] 01/02: [FLINK-28094][kinesis] Removing references to Regions enum and instead using RegionUtils so that we include future AWS Regions as well

2022-08-10 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 1c653846088ff1ffa29d20a128a4b5d8cd0c5127
Author: Hong Teoh 
AuthorDate: Mon Aug 8 10:52:32 2022 +0100

[FLINK-28094][kinesis] Removing references to Regions enum and instead 
using RegionUtils so that we include future AWS Regions as well
---
 .../streaming/connectors/kinesis/proxy/DynamoDBStreamsProxy.java| 6 ++
 .../org/apache/flink/streaming/connectors/kinesis/util/AWSUtil.java | 3 +--
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git 
a/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/DynamoDBStreamsProxy.java
 
b/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/DynamoDBStreamsProxy.java
index 4e58d6cb7f3..65c4035fd33 100644
--- 
a/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/DynamoDBStreamsProxy.java
+++ 
b/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/proxy/DynamoDBStreamsProxy.java
@@ -23,8 +23,7 @@ import 
org.apache.flink.streaming.connectors.kinesis.model.StreamShardHandle;
 import com.amazonaws.ClientConfiguration;
 import com.amazonaws.ClientConfigurationFactory;
 import com.amazonaws.auth.AWSCredentialsProvider;
-import com.amazonaws.regions.Region;
-import com.amazonaws.regions.Regions;
+import com.amazonaws.regions.RegionUtils;
 import 
com.amazonaws.services.dynamodbv2.streamsadapter.AmazonDynamoDBStreamsAdapterClient;
 import com.amazonaws.services.kinesis.AmazonKinesis;
 import com.amazonaws.services.kinesis.model.DescribeStreamResult;
@@ -91,8 +90,7 @@ public class DynamoDBStreamsProxy extends KinesisProxy {
 if (configProps.containsKey(AWS_ENDPOINT)) {
 adapterClient.setEndpoint(configProps.getProperty(AWS_ENDPOINT));
 } else {
-adapterClient.setRegion(
-
Region.getRegion(Regions.fromName(configProps.getProperty(AWS_REGION;
+
adapterClient.setRegion(RegionUtils.getRegion(configProps.getProperty(AWS_REGION)));
 }
 
 return adapterClient;
diff --git 
a/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/AWSUtil.java
 
b/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/AWSUtil.java
index 87d8f152041..f6f2fda06ce 100644
--- 
a/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/AWSUtil.java
+++ 
b/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/AWSUtil.java
@@ -102,8 +102,7 @@ public class AWSUtil {
 
configProps.getProperty(AWSConfigConstants.AWS_ENDPOINT),
 
configProps.getProperty(AWSConfigConstants.AWS_REGION)));
 } else {
-builder.withRegion(
-
Regions.fromName(configProps.getProperty(AWSConfigConstants.AWS_REGION)));
+
builder.withRegion(configProps.getProperty(AWSConfigConstants.AWS_REGION));
 }
 return builder.build();
 }



[flink] 02/02: [FLINK-28094][kinesis][glue] Updating AWS SDK versions for Kinesis connectors and Glue Schema Registry formats

2022-08-10 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 0e19d8eded511558d7f7653dc371cbea6e9472b0
Author: Danny Cranmer 
AuthorDate: Tue Aug 9 17:20:06 2022 +0100

[FLINK-28094][kinesis][glue] Updating AWS SDK versions for Kinesis 
connectors and Glue Schema Registry formats
---
 flink-connectors/flink-connector-kinesis/pom.xml   |  44 +++
 .../src/main/resources/META-INF/NOTICE |  76 ++--
 .../flink-glue-schema-registry-test/pom.xml| 133 +
 .../flink-avro-glue-schema-registry/pom.xml| 110 +
 4 files changed, 146 insertions(+), 217 deletions(-)

diff --git a/flink-connectors/flink-connector-kinesis/pom.xml 
b/flink-connectors/flink-connector-kinesis/pom.xml
index c88f9a4aca1..6bde75dbf14 100644
--- a/flink-connectors/flink-connector-kinesis/pom.xml
+++ b/flink-connectors/flink-connector-kinesis/pom.xml
@@ -33,9 +33,9 @@ under the License.
 	<artifactId>flink-connector-kinesis_${scala.binary.version}</artifactId>
 	<name>Flink : Connectors : Kinesis</name>
 
-		<aws.sdk.version>1.12.7</aws.sdk.version>
-		<aws.sdkv2.version>2.16.86</aws.sdkv2.version>
-		<aws.kinesis-kcl.version>1.14.1</aws.kinesis-kcl.version>
+		<aws.sdk.version>1.12.276</aws.sdk.version>
+		<aws.sdkv2.version>2.17.247</aws.sdkv2.version>
+		<aws.kinesis-kcl.version>1.14.8</aws.kinesis-kcl.version>
 		<aws.kinesis-kpl.version>0.14.1</aws.kinesis-kpl.version>
 
 		<aws.dynamodbstreams-kinesis-adapter.version>1.5.3</aws.dynamodbstreams-kinesis-adapter.version>
 		<httpclient.version>4.5.13</httpclient.version>
@@ -45,6 +45,25 @@ under the License.
 
 	<packaging>jar</packaging>
 
+	<dependencyManagement>
+		<dependencies>
+			<dependency>
+				<groupId>com.amazonaws</groupId>
+				<artifactId>aws-java-sdk-bom</artifactId>
+				<version>${aws.sdk.version}</version>
+				<type>pom</type>
+				<scope>import</scope>
+			</dependency>
+			<dependency>
+				<groupId>software.amazon.awssdk</groupId>
+				<artifactId>bom</artifactId>
+				<version>${aws.sdkv2.version}</version>
+				<type>pom</type>
+				<scope>import</scope>
+			</dependency>
+		</dependencies>
+	</dependencyManagement>
+



@@ -58,32 +77,26 @@ under the License.
 		<dependency>
 			<groupId>com.amazonaws</groupId>
 			<artifactId>aws-java-sdk-kinesis</artifactId>
-			<version>${aws.sdk.version}</version>
 		</dependency>
 		<dependency>
 			<groupId>com.amazonaws</groupId>
 			<artifactId>aws-java-sdk-sts</artifactId>
-			<version>${aws.sdk.version}</version>
 		</dependency>
 		<dependency>
 			<groupId>com.amazonaws</groupId>
 			<artifactId>aws-java-sdk-kms</artifactId>
-			<version>${aws.sdk.version}</version>
 		</dependency>
 		<dependency>
 			<groupId>com.amazonaws</groupId>
 			<artifactId>aws-java-sdk-s3</artifactId>
-			<version>${aws.sdk.version}</version>
 		</dependency>
 		<dependency>
 			<groupId>com.amazonaws</groupId>
 			<artifactId>aws-java-sdk-dynamodb</artifactId>
-			<version>${aws.sdk.version}</version>
 		</dependency>
 		<dependency>
 			<groupId>com.amazonaws</groupId>
 			<artifactId>aws-java-sdk-cloudwatch</artifactId>
-			<version>${aws.sdk.version}</version>
 		</dependency>
 		<dependency>
 			<groupId>com.amazonaws</groupId>
@@ -94,16 +107,6 @@ under the License.
 			<groupId>com.amazonaws</groupId>
 			<artifactId>amazon-kinesis-client</artifactId>
 			<version>${aws.kinesis-kcl.version}</version>
-			<exclusions>
-				<exclusion>
-					<groupId>com.amazonaws</groupId>
-					<artifactId>aws-java-sdk-cloudwatch</artifactId>
-				</exclusion>
-			</exclusions>
 		</dependency>
 		<dependency>
 			<groupId>com.amazonaws</groupId>
@@ -212,19 +215,16 @@ under the License.
 		<dependency>
 			<groupId>software.amazon.awssdk</groupId>
 			<artifactId>kinesis</artifactId>
-			<version>${aws.sdkv2.version}</version>
 		</dependency>
 
 		<dependency>
 			<groupId>software.amazon.awssdk</groupId>
 			<artifactId>netty-nio-client</artifactId>
-			<version>${aws.sdkv2.version}</version>
 		</dependency>
 
 		<dependency>
 			<groupId>software.amazon.awssdk</groupId>
 			<artifactId>sts</artifactId>
-			<version>${aws.sdkv2.version}</version>
 		</dependency>
 

diff --git 
a/flink-connectors/flink-connector-kinesis/src/main/resources/META-INF/NOTICE 
b/flink-connectors/flink-connector-kinesis/src/main/resources/META-INF/NOTICE
index 9439639c177..b79ca202a7b 100644
--- 
a/flink-connectors/flink-connector-kinesis/src/main/resources/META-INF/NOTICE
+++ 
b/flink-connectors/flink-connector-kinesis/src/main/resources/META-INF/NOTICE
@@ -6,48 +6,52 @@ The Apache Software Foundation (http://www.apache.org/).
 
 This project bundles the following dependencies under the Apache Software 
License 2.0. (http://www.apache.org/licenses/LICENSE-2.0.txt) 
 
-- com.amazonaws:amazon-kinesis-client:1.14.1
+- com.amazonaws:amazon-kinesis-client:1.14.8
 - com.amazonaws:amazon-kinesis-producer:0.14.1
-- com.amazonaws:aws-java-sdk-core:1.12.7
-- com.amazonaws:aws-java-sdk-dynamodb:1.12.7
-- com.amazonaws:a

[flink-table-store] branch master updated: [FLINK-28754] Document that Java 8 is required to build table store

2022-08-10 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new d29f41a2 [FLINK-28754] Document that Java 8 is required to build table 
store
d29f41a2 is described below

commit d29f41a2e4cabe37083ee0faf0bbd776db7fd9d8
Author: Nicholas Jiang 
AuthorDate: Thu Aug 11 12:38:31 2022 +0800

[FLINK-28754] Document that Java 8 is required to build table store

This closes #264
---
 docs/content/docs/engines/build.md | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/docs/content/docs/engines/build.md 
b/docs/content/docs/engines/build.md
index 3625a637..8c6c1499 100644
--- a/docs/content/docs/engines/build.md
+++ b/docs/content/docs/engines/build.md
@@ -24,9 +24,13 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Build From Source
+# Build from Source
 
-Clone from git, enter:
+In order to build the Flink Table Store you need the source code. Either 
[download the source of a release]({{< downloads >}}) or [clone the git 
repository]({{< github_repo >}}).
+
+In addition, you need **Maven 3** and a **JDK** (Java Development Kit). Flink 
Table Store requires **Java 8** to build.
+
+To clone from git, enter:
 
 ```bash
 git clone {{< github_repo >}}



[flink-table-store] branch release-0.2 updated: [FLINK-28754] Document that Java 8 is required to build table store

2022-08-10 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.2
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.2 by this push:
 new 3f3f7ece [FLINK-28754] Document that Java 8 is required to build table 
store
3f3f7ece is described below

commit 3f3f7ece3563a19654882db066a4c79e44c67dfe
Author: Nicholas Jiang 
AuthorDate: Thu Aug 11 12:38:31 2022 +0800

[FLINK-28754] Document that Java 8 is required to build table store

This closes #264
---
 docs/content/docs/engines/build.md | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/docs/content/docs/engines/build.md 
b/docs/content/docs/engines/build.md
index 3625a637..8c6c1499 100644
--- a/docs/content/docs/engines/build.md
+++ b/docs/content/docs/engines/build.md
@@ -24,9 +24,13 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Build From Source
+# Build from Source
 
-Clone from git, enter:
+In order to build the Flink Table Store you need the source code. Either 
[download the source of a release]({{< downloads >}}) or [clone the git 
repository]({{< github_repo >}}).
+
+In addition, you need **Maven 3** and a **JDK** (Java Development Kit). Flink 
Table Store requires **Java 8** to build.
+
+To clone from git, enter:
 
 ```bash
 git clone {{< github_repo >}}



[flink-table-store] branch release-0.2 updated: [hotfix] Document only Hive 2.3 is supported

2022-08-10 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.2
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.2 by this push:
 new c606a8ca [hotfix] Document only Hive 2.3 is supported
c606a8ca is described below

commit c606a8ca66bde9d2b9dc7043f865de61b646a6e0
Author: JingsongLi 
AuthorDate: Thu Aug 11 12:42:07 2022 +0800

[hotfix] Document only Hive 2.3 is supported
---
 docs/content/docs/engines/hive.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/docs/engines/hive.md 
b/docs/content/docs/engines/hive.md
index 3a8927ec..e2e28309 100644
--- a/docs/content/docs/engines/hive.md
+++ b/docs/content/docs/engines/hive.md
@@ -32,7 +32,7 @@ Table Store currently supports the following features related 
with Hive:
 
 ## Version
 
-Table Store currently supports Hive 2.x.
+Table Store currently supports Hive 2.3.
 
 ## Execution Engine
 



[flink-table-store] branch master updated: [hotfix] Document only Hive 2.3 is supported

2022-08-10 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 94f79eba [hotfix] Document only Hive 2.3 is supported
94f79eba is described below

commit 94f79ebab2666f6e79c7653e652f2e243834cc99
Author: JingsongLi 
AuthorDate: Thu Aug 11 12:42:07 2022 +0800

[hotfix] Document only Hive 2.3 is supported
---
 docs/content/docs/engines/hive.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/docs/engines/hive.md 
b/docs/content/docs/engines/hive.md
index 3a8927ec..e2e28309 100644
--- a/docs/content/docs/engines/hive.md
+++ b/docs/content/docs/engines/hive.md
@@ -32,7 +32,7 @@ Table Store currently supports the following features related 
with Hive:
 
 ## Version
 
-Table Store currently supports Hive 2.x.
+Table Store currently supports Hive 2.3.
 
 ## Execution Engine
 



[flink-table-store] branch master updated: [FLINK-28794] Publish flink-table-store snapshot artifacts

2022-08-10 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 49491377 [FLINK-28794] Publish flink-table-store snapshot artifacts
49491377 is described below

commit 49491377b93223e914f5656d8d2c2f7ade7999bb
Author: Jingsong Lee 
AuthorDate: Thu Aug 11 12:43:43 2022 +0800

[FLINK-28794] Publish flink-table-store snapshot artifacts

This closes #261
---
 .github/workflows/publish_snapshot.yml | 57 ++
 1 file changed, 57 insertions(+)

diff --git a/.github/workflows/publish_snapshot.yml 
b/.github/workflows/publish_snapshot.yml
new file mode 100644
index ..8eb94663
--- /dev/null
+++ b/.github/workflows/publish_snapshot.yml
@@ -0,0 +1,57 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+name: Publish Snapshot
+
+on:
+  schedule:
+# At the end of every day
+- cron: '0 0 * * *'
+  workflow_dispatch:
+jobs:
+  publish-snapshot:
+if: github.repository == 'apache/flink-table-store'
+runs-on: ubuntu-latest
+steps:
+  - name: Checkout code
+uses: actions/checkout@v2
+  - name: Set up JDK 1.8
+uses: actions/setup-java@v1
+with:
+  java-version: 1.8
+  - name: Cache local Maven repository
+uses: actions/cache@v3
+with:
+  path: ~/.m2/repository
+  key: snapshot-maven-${{ hashFiles('**/pom.xml') }}
+  restore-keys: |
+snapshot-maven-
+  - name: Publish snapshot
+env:
+  ASF_USERNAME: ${{ secrets.NEXUS_USER }}
+  ASF_PASSWORD: ${{ secrets.NEXUS_PW }}
+run: |
+  tmp_settings="tmp-settings.xml"
+  echo "<settings><servers><server>" > $tmp_settings
+  echo "<id>apache.snapshots.https</id><username>$ASF_USERNAME</username>" >> $tmp_settings
+  echo "<password>$ASF_PASSWORD</password>" >> $tmp_settings
+  echo "</server></servers></settings>" >> $tmp_settings
+  
+  mvn --settings $tmp_settings clean deploy -Dgpg.skip -Drat.skip -DskipTests -Papache-release
+  
+  rm $tmp_settings



[flink-table-store] branch master updated: [FLINK-28840] Introduce roadmap document of Flink Table Store

2022-08-10 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new 26786fad [FLINK-28840] Introduce roadmap document of Flink Table Store
26786fad is described below

commit 26786fad4df2fb889bc339cf93e5ce08e3ee8652
Author: Nicholas Jiang 
AuthorDate: Thu Aug 11 12:50:28 2022 +0800

[FLINK-28840] Introduce roadmap document of Flink Table Store

This closes #263
---
 docs/content/docs/development/roadmap.md | 40 
 1 file changed, 40 insertions(+)

diff --git a/docs/content/docs/development/roadmap.md 
b/docs/content/docs/development/roadmap.md
new file mode 100644
index ..f88ce6bf
--- /dev/null
+++ b/docs/content/docs/development/roadmap.md
@@ -0,0 +1,40 @@
+---
+title: "Roadmap"
+weight: 8
+type: docs
+aliases:
+- /development/roadmap.html
+---
+
+
+# Flink Table Store Roadmap
+
+This page is intended to present an overview of the general direction of the 
Flink Table Store project, and the larger new features on our radar.
+It's not a comprehensive list and might be slightly outdated at any given 
time. Please check JIRA for the actual work items and the Flink mailing lists 
for feature planning and discussion.
+
+## What’s Next?
+
+- Concurrent write support for table store
+- Changelog producer supports full-compaction and lookup
+- Pre-aggregated merge support 
[FLIP-255](https://cwiki.apache.org/confluence/display/FLINK/FLIP-255+Introduce+pre-aggregated+merge+to+Table+Store)
+- Lookup dim join support
+- Completed schema evolution support
+- Batch writing improvement
+- Time traveling support



[flink-table-store] branch master updated: [hotfix] Fix download url in build page

2022-08-10 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/master by this push:
 new be561130 [hotfix] Fix download url in build page
be561130 is described below

commit be561130e06ec4daa6b58b48294cce4cbc87dd42
Author: JingsongLi 
AuthorDate: Thu Aug 11 13:34:56 2022 +0800

[hotfix] Fix download url in build page
---
 docs/content/docs/engines/build.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/docs/engines/build.md 
b/docs/content/docs/engines/build.md
index 8c6c1499..c5b8d68b 100644
--- a/docs/content/docs/engines/build.md
+++ b/docs/content/docs/engines/build.md
@@ -26,7 +26,7 @@ under the License.
 
 # Build from Source
 
-In order to build the Flink Table Store you need the source code. Either 
[download the source of a release]({{< downloads >}}) or [clone the git 
repository]({{< github_repo >}}).
+In order to build the Flink Table Store you need the source code. Either 
[download the source of a release](https://flink.apache.org/downloads.html) or 
[clone the git repository]({{< github_repo >}}).
 
 In addition, you need **Maven 3** and a **JDK** (Java Development Kit). Flink 
Table Store requires **Java 8** to build.
 



[flink-table-store] branch release-0.2 updated: [hotfix] Fix download url in build page

2022-08-10 Thread lzljs3620320
This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch release-0.2
in repository https://gitbox.apache.org/repos/asf/flink-table-store.git


The following commit(s) were added to refs/heads/release-0.2 by this push:
 new 585a4927 [hotfix] Fix download url in build page
585a4927 is described below

commit 585a49278d4b473b7d0064ea4e1ac4825ebd0fb1
Author: JingsongLi 
AuthorDate: Thu Aug 11 13:34:56 2022 +0800

[hotfix] Fix download url in build page
---
 docs/content/docs/engines/build.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/docs/engines/build.md 
b/docs/content/docs/engines/build.md
index 8c6c1499..c5b8d68b 100644
--- a/docs/content/docs/engines/build.md
+++ b/docs/content/docs/engines/build.md
@@ -26,7 +26,7 @@ under the License.
 
 # Build from Source
 
-In order to build the Flink Table Store you need the source code. Either 
[download the source of a release]({{< downloads >}}) or [clone the git 
repository]({{< github_repo >}}).
+In order to build the Flink Table Store you need the source code. Either 
[download the source of a release](https://flink.apache.org/downloads.html) or 
[clone the git repository]({{< github_repo >}}).
 
 In addition, you need **Maven 3** and a **JDK** (Java Development Kit). Flink 
Table Store requires **Java 8** to build.