[flink] branch release-1.17 updated: [FLINK-31137][hive] Fix wrong ResultKind in DescribeTable and ShowCreateTable results (#21976)

2023-02-20 Thread shengkai
This is an automated email from the ASF dual-hosted git repository.

shengkai pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new 3ae7b1f3d85 [FLINK-31137][hive] Fix wrong ResultKind in DescribeTable 
and ShowCreateTable results (#21976)
3ae7b1f3d85 is described below

commit 3ae7b1f3d85db71f0950f1166d60be76720b49f5
Author: Shengkai <33114724+fsk...@users.noreply.github.com>
AuthorDate: Tue Feb 21 14:25:35 2023 +0800

[FLINK-31137][hive] Fix wrong ResultKind in DescribeTable and 
ShowCreateTable results (#21976)
---
 .../planner/delegation/hive/HiveOperationExecutor.java|  4 ++--
 .../apache/flink/connectors/hive/HiveDialectITCase.java   | 15 +++
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git 
a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveOperationExecutor.java
 
b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveOperationExecutor.java
index 221621888f5..ee23c36fa90 100644
--- 
a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveOperationExecutor.java
+++ 
b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveOperationExecutor.java
@@ -267,7 +267,7 @@ public class HiveOperationExecutor implements 
ExtendedOperationExecutor {
 String showCreateTableString = 
HiveShowTableUtils.showCreateTable(tablePath, tbl);
 TableResultInternal resultInternal =
 TableResultImpl.builder()
-.resultKind(ResultKind.SUCCESS)
+.resultKind(ResultKind.SUCCESS_WITH_CONTENT)
 .schema(ResolvedSchema.of(Column.physical("result", DataTypes.STRING())))
 
.data(Collections.singletonList(Row.of(showCreateTableString)))
 .build();
@@ -325,7 +325,7 @@ public class HiveOperationExecutor implements 
ExtendedOperationExecutor {
 }
 TableResultInternal tableResultInternal =
 TableResultImpl.builder()
-.resultKind(ResultKind.SUCCESS)
+.resultKind(ResultKind.SUCCESS_WITH_CONTENT)
 .schema(
 ResolvedSchema.physical(
 new String[] {"col_name", 
"data_type", "comment"},
diff --git 
a/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/HiveDialectITCase.java
 
b/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/HiveDialectITCase.java
index 1b10e6c93a7..d93c10ea39a 100644
--- 
a/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/HiveDialectITCase.java
+++ 
b/flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/HiveDialectITCase.java
@@ -20,9 +20,11 @@ package org.apache.flink.connectors.hive;
 
 import org.apache.flink.sql.parser.hive.ddl.SqlCreateHiveTable;
 import org.apache.flink.table.HiveVersionTestUtil;
+import org.apache.flink.table.api.ResultKind;
 import org.apache.flink.table.api.SqlDialect;
 import org.apache.flink.table.api.TableEnvironment;
 import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.TableResult;
 import org.apache.flink.table.api.TableSchema;
 import org.apache.flink.table.api.ValidationException;
 import org.apache.flink.table.api.internal.TableEnvironmentImpl;
@@ -1211,10 +1213,11 @@ public class HiveDialectITCase {
 "default.t1"));
 
 // show hive table
+TableResult showCreateTableT2 = tableEnv.executeSql("show create table 
t2");
+
assertThat(showCreateTableT2.getResultKind()).isEqualTo(ResultKind.SUCCESS_WITH_CONTENT);
 String actualResult =
 (String)
-CollectionUtil.iteratorToList(
-tableEnv.executeSql("show create table 
t2").collect())
+
CollectionUtil.iteratorToList(showCreateTableT2.collect())
 .get(0)
 .getField(0);
 Table table = hiveCatalog.getHiveTable(new ObjectPath("default", 
"t2"));
@@ -1264,12 +1267,16 @@ public class HiveDialectITCase {
 "create table t3(a decimal(10, 2), b double, c float) 
partitioned by (d date)");
 
 // desc non-hive table
-List<Row> result = CollectionUtil.iteratorToList(tableEnv.executeSql("desc t1").collect());
+TableResult descT1 = tableEnv.executeSql("desc t1");
+
assertThat(descT1.getResultKind()).isEqualTo(ResultKind.SUCCESS_WITH_CONTENT);
+List<Row> result = CollectionUtil.iteratorToList(descT1.collect());
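
A minimal sketch (not part of the commit; the environment setup and table name are illustrative) of what the fix means for callers: row-producing statements such as SHOW CREATE TABLE and DESC in the Hive dialect now report ResultKind.SUCCESS_WITH_CONTENT, so client code can branch on the result kind before collecting rows.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.ResultKind;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class ResultKindSketch {
    public static void main(String[] args) {
        // Illustrative setup; a real run needs the Hive catalog/dialect configured
        // and a table named "t2" to exist.
        TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());

        TableResult result = tableEnv.executeSql("show create table t2");
        // With the fix, row-producing statements report SUCCESS_WITH_CONTENT
        // instead of plain SUCCESS, so callers can decide whether to collect rows.
        if (result.getResultKind() == ResultKind.SUCCESS_WITH_CONTENT) {
            result.print();
        }
    }
}
```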

[flink] 02/02: [FLINK-21000][runtime] Add tests for the InternalSplitEnumeratorMetricGroup This closes #21952.

2023-02-20 Thread leonard
This is an automated email from the ASF dual-hosted git repository.

leonard pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit bacdc326b58749924acbd8921d63eda06663a225
Author: Hang Ruan 
AuthorDate: Fri Feb 17 10:17:31 2023 +0800

[FLINK-21000][runtime] Add tests for the InternalSplitEnumeratorMetricGroup
This closes #21952.
---
 .../groups/InternalSplitEnumeratorGroupTest.java   | 61 ++
 1 file changed, 61 insertions(+)

diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/InternalSplitEnumeratorGroupTest.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/InternalSplitEnumeratorGroupTest.java
new file mode 100644
index 000..76bab219571
--- /dev/null
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/InternalSplitEnumeratorGroupTest.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.metrics.groups;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.jobgraph.OperatorID;
+import org.apache.flink.runtime.metrics.MetricRegistry;
+import org.apache.flink.runtime.metrics.util.TestingMetricRegistry;
+
+import org.junit.jupiter.api.Test;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Tests for the {@link InternalSplitEnumeratorMetricGroup}. */
+class InternalSplitEnumeratorGroupTest {
+
+private static final MetricRegistry registry = 
TestingMetricRegistry.builder().build();
+
+@Test
+void testGenerateScopeDefault() {
+final JobID jobId = new JobID();
+final JobVertexID jobVertexId = new JobVertexID();
+final OperatorID operatorId = new OperatorID();
+JobManagerOperatorMetricGroup jmJobGroup =
+JobManagerMetricGroup.createJobManagerMetricGroup(registry, 
"localhost")
+.addJob(jobId, "myJobName")
+.getOrAddOperator(jobVertexId, "taskName", operatorId, 
"opName");
+InternalOperatorCoordinatorMetricGroup operatorCoordinatorMetricGroup =
+new InternalOperatorCoordinatorMetricGroup(jmJobGroup);
+InternalSplitEnumeratorMetricGroup splitEnumeratorMetricGroup =
+new 
InternalSplitEnumeratorMetricGroup(operatorCoordinatorMetricGroup);
+
+assertThat(splitEnumeratorMetricGroup.getScopeComponents())
+.containsExactly(
+"localhost",
+"jobmanager",
+"myJobName",
+"opName",
+"coordinator",
+"enumerator");
+assertThat(splitEnumeratorMetricGroup.getMetricIdentifier("name"))
+
.isEqualTo("localhost.jobmanager.myJobName.opName.coordinator.enumerator.name");
+}
+}



[flink] 01/02: [FLINK-21000][runtime] Provide internal split enumerator metric group

2023-02-20 Thread leonard
This is an automated email from the ASF dual-hosted git repository.

leonard pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit f04e6de005748370ee3f409b1ddf01dfd927d14c
Author: Hang Ruan 
AuthorDate: Fri Feb 17 10:17:14 2023 +0800

[FLINK-21000][runtime] Provide internal split enumerator metric group
---
 .../apache/flink/runtime/metrics/MetricNames.java  |  3 ++
 .../groups/InternalSplitEnumeratorMetricGroup.java | 41 ++
 .../coordinator/SourceCoordinatorContext.java  |  3 +-
 3 files changed, 46 insertions(+), 1 deletion(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/MetricNames.java 
b/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/MetricNames.java
index 08a9388143c..dc165e0f6f4 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/MetricNames.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/MetricNames.java
@@ -122,4 +122,7 @@ public class MetricNames {
 public static final String LATEST_LOAD_TIME = "latestLoadTime";
 public static final String NUM_CACHED_RECORDS = "numCachedRecords";
 public static final String NUM_CACHED_BYTES = "numCachedBytes";
+
+// FLIP-27 for split enumerator
+public static final String UNASSIGNED_SPLITS = "unassignedSplits";
 }
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/InternalSplitEnumeratorMetricGroup.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/InternalSplitEnumeratorMetricGroup.java
new file mode 100644
index 000..22370a50ab1
--- /dev/null
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/InternalSplitEnumeratorMetricGroup.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.metrics.groups;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.connector.source.SplitEnumerator;
+import org.apache.flink.metrics.Gauge;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.metrics.groups.SplitEnumeratorMetricGroup;
+import org.apache.flink.runtime.metrics.MetricNames;
+
+/** Special {@link MetricGroup} representing an {@link SplitEnumerator}. */
+@Internal
+public class InternalSplitEnumeratorMetricGroup extends ProxyMetricGroup<MetricGroup>
+implements SplitEnumeratorMetricGroup {
+
+public InternalSplitEnumeratorMetricGroup(MetricGroup parent) {
+super(parent.addGroup("enumerator"));
+}
+
+@Override
+public <G extends Gauge<Long>> G setUnassignedSplitsGauge(G unassignedSplitsGauge) {
+return parentMetricGroup.gauge(MetricNames.UNASSIGNED_SPLITS, 
unassignedSplitsGauge);
+}
+}
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java
index 59ab6d8c2d7..f99b178df4a 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java
@@ -29,6 +29,7 @@ import org.apache.flink.api.connector.source.SplitsAssignment;
 import org.apache.flink.api.connector.source.SupportsIntermediateNoMoreSplits;
 import org.apache.flink.core.io.SimpleVersionedSerializer;
 import org.apache.flink.metrics.groups.SplitEnumeratorMetricGroup;
+import 
org.apache.flink.runtime.metrics.groups.InternalSplitEnumeratorMetricGroup;
 import org.apache.flink.runtime.operators.coordination.OperatorCoordinator;
 import org.apache.flink.runtime.operators.coordination.OperatorEvent;
 import org.apache.flink.runtime.source.event.AddSplitEvent;
@@ -166,7 +167,7 @@ public class SourceCoordinatorContext
 
 @Override
 public SplitEnumeratorMetricGroup metricGroup() {
-return null;
+return new 
InternalSplitEnumeratorMetricGroup(operatorCoordinatorContext.metricGroup());
 }
 
 @Override

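A minimal sketch (the helper name and the pending-splits queue are illustrative, not part of the commit) of how a FLIP-27 source's enumerator can now use the context's metric group, which SourceCoordinatorContext#metricGroup() previously returned as null:

```java
import java.util.Queue;

import org.apache.flink.api.connector.source.SourceSplit;
import org.apache.flink.api.connector.source.SplitEnumeratorContext;

public final class UnassignedSplitsMetricSketch {

    private UnassignedSplitsMetricSketch() {}

    /**
     * Registers the "unassignedSplits" gauge on the enumerator metric group, e.g. from a
     * SplitEnumerator#start() implementation.
     */
    public static <SplitT extends SourceSplit> void registerUnassignedSplitsGauge(
            SplitEnumeratorContext<SplitT> context, Queue<SplitT> pendingSplits) {
        // Reported as <host>.jobmanager.<job>.<operator>.coordinator.enumerator.unassignedSplits
        context.metricGroup().setUnassignedSplitsGauge(() -> (long) pendingSplits.size());
    }
}
```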


[flink] branch master updated (6fae8b58a49 -> bacdc326b58)

2023-02-20 Thread leonard
This is an automated email from the ASF dual-hosted git repository.

leonard pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 6fae8b58a49 [FLINK-31036][test][checkpoint] 
FileSystemCheckpointStorage as the default for test
 new f04e6de0057 [FLINK-21000][runtime] Provide internal split enumerator 
metric group
 new bacdc326b58 [FLINK-21000][runtime] Add tests for the 
InternalSplitEnumeratorMetricGroup This closes #21952.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/flink/runtime/metrics/MetricNames.java  |  3 ++
 ...ava => InternalSplitEnumeratorMetricGroup.java} | 20 +---
 .../coordinator/SourceCoordinatorContext.java  |  3 +-
 java => InternalSplitEnumeratorGroupTest.java} | 55 ++
 4 files changed, 34 insertions(+), 47 deletions(-)
 copy 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/{InternalOperatorCoordinatorMetricGroup.java
 => InternalSplitEnumeratorMetricGroup.java} (56%)
 copy 
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/{InternalOperatorCoordinatorGroupTest.java
 => InternalSplitEnumeratorGroupTest.java} (50%)



[flink-web] branch asf-site updated: Rebuild website

2023-02-20 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ef752458d Rebuild website
ef752458d is described below

commit ef752458db81609a837fd97b604ecafa8271332a
Author: 1996fanrui <1996fan...@gmail.com>
AuthorDate: Tue Feb 21 10:30:55 2023 +0800

Rebuild website
---
 content/community.html| 6 ++
 content/zh/community.html | 6 ++
 2 files changed, 12 insertions(+)

diff --git a/content/community.html b/content/community.html
index 776692a98..5490c9346 100644
--- a/content/community.html
+++ b/content/community.html
@@ -683,6 +683,12 @@ Thanks,
 PMC, Committer
 fhueske
   
+  
+https://avatars.githubusercontent.com/u/3996532?s=50"; 
class="committer-avatar" />
+Anton Kalashnikov
+Committer
+akalash
+  
   
 https://avatars3.githubusercontent.com/u/498957?v=3&s=50"; 
class="committer-avatar" />
 Vasia Kalavri
diff --git a/content/zh/community.html b/content/zh/community.html
index a029dd664..630bedf20 100644
--- a/content/zh/community.html
+++ b/content/zh/community.html
@@ -674,6 +674,12 @@ Thanks,
 PMC, Committer
 fhueske
   
+  
+https://avatars.githubusercontent.com/u/3996532?s=50"; 
class="committer-avatar" />
+Anton Kalashnikov
+Committer
+akalash
+  
   
 https://avatars3.githubusercontent.com/u/498957?v=3&s=50"; 
class="committer-avatar" />
 Vasia Kalavri



[flink] branch master updated: [FLINK-31036][test][checkpoint] FileSystemCheckpointStorage as the default for test

2023-02-20 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 6fae8b58a49 [FLINK-31036][test][checkpoint] 
FileSystemCheckpointStorage as the default for test
6fae8b58a49 is described below

commit 6fae8b58a49216d50e422d5e4e7be3d7ea3b4462
Author: 1996fanrui <1996fan...@gmail.com>
AuthorDate: Fri Feb 17 21:10:52 2023 +0800

[FLINK-31036][test][checkpoint] FileSystemCheckpointStorage as the default 
for test
---
 .../org/apache/flink/runtime/testutils/MiniClusterResource.java  | 9 +
 1 file changed, 9 insertions(+)

diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterResource.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterResource.java
index ba609183f1c..5282f8acb39 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterResource.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterResource.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.runtime.testutils;
 
+import org.apache.flink.configuration.CheckpointingOptions;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.CoreOptions;
 import org.apache.flink.configuration.HeartbeatManagerOptions;
@@ -199,6 +200,14 @@ public class MiniClusterResource extends ExternalResource {
 new 
Configuration(miniClusterResourceConfiguration.getConfiguration());
 configuration.setString(
 CoreOptions.TMP_DIRS, 
temporaryFolder.newFolder().getAbsolutePath());
+if 
(!configuration.contains(CheckpointingOptions.CHECKPOINTS_DIRECTORY)) {
+// The channel state or checkpoint file may exceed the upper limit 
of
+// JobManagerCheckpointStorage, so use FileSystemCheckpointStorage 
as
+// the default checkpoint storage for all tests.
+configuration.set(
+CheckpointingOptions.CHECKPOINTS_DIRECTORY,
+temporaryFolder.newFolder().toURI().toString());
+}
 
 // we need to set this since a lot of test expect this because 
TestBaseUtils.startCluster()
 // enabled this by default

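A minimal sketch (the directory path and cluster sizes are illustrative) of how a test can still pin its own checkpoint directory; the temp-folder FileSystemCheckpointStorage introduced above is only applied when CheckpointingOptions.CHECKPOINTS_DIRECTORY is absent from the configuration:

```java
import org.apache.flink.configuration.CheckpointingOptions;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.test.util.MiniClusterWithClientResource;

public class ExplicitCheckpointDirSketch {

    public static MiniClusterWithClientResource createMiniCluster() {
        Configuration config = new Configuration();
        // Setting the option explicitly bypasses the new temp-folder default.
        config.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, "file:///tmp/test-checkpoints");
        return new MiniClusterWithClientResource(
                new MiniClusterResourceConfiguration.Builder()
                        .setConfiguration(config)
                        .setNumberTaskManagers(1)
                        .setNumberSlotsPerTaskManager(2)
                        .build());
    }
}
```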


[flink] branch release-1.17 updated: [FLINK-31036][test][checkpoint] FileSystemCheckpointStorage as the default for test

2023-02-20 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new ae53a8de47b [FLINK-31036][test][checkpoint] 
FileSystemCheckpointStorage as the default for test
ae53a8de47b is described below

commit ae53a8de47b31e248fef216428a83c8bf3f07b13
Author: 1996fanrui <1996fan...@gmail.com>
AuthorDate: Fri Feb 17 21:10:52 2023 +0800

[FLINK-31036][test][checkpoint] FileSystemCheckpointStorage as the default 
for test
---
 .../org/apache/flink/runtime/testutils/MiniClusterResource.java  | 9 +
 1 file changed, 9 insertions(+)

diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterResource.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterResource.java
index ba609183f1c..5282f8acb39 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterResource.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterResource.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.runtime.testutils;
 
+import org.apache.flink.configuration.CheckpointingOptions;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.CoreOptions;
 import org.apache.flink.configuration.HeartbeatManagerOptions;
@@ -199,6 +200,14 @@ public class MiniClusterResource extends ExternalResource {
 new 
Configuration(miniClusterResourceConfiguration.getConfiguration());
 configuration.setString(
 CoreOptions.TMP_DIRS, 
temporaryFolder.newFolder().getAbsolutePath());
+if 
(!configuration.contains(CheckpointingOptions.CHECKPOINTS_DIRECTORY)) {
+// The channel state or checkpoint file may exceed the upper limit 
of
+// JobManagerCheckpointStorage, so use FileSystemCheckpointStorage 
as
+// the default checkpoint storage for all tests.
+configuration.set(
+CheckpointingOptions.CHECKPOINTS_DIRECTORY,
+temporaryFolder.newFolder().toURI().toString());
+}
 
 // we need to set this since a lot of test expect this because 
TestBaseUtils.startCluster()
 // enabled this by default



[flink] branch master updated: [hotfix][doc] Fix documentation of table's configuration option 'table.exec.sink.keyed-shuffle' in ExecutionConfigOptions

2023-02-20 Thread lincoln
This is an automated email from the ASF dual-hosted git repository.

lincoln pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 43f419d0ecc [hotfix][doc] Fix documentation of table's configuration 
option 'table.exec.sink.keyed-shuffle' in ExecutionConfigOptions
43f419d0ecc is described below

commit 43f419d0eccba86ecc8040fa6f521148f1e358ff
Author: lincoln lee 
AuthorDate: Mon Feb 20 22:31:48 2023 +0800

[hotfix][doc] Fix documentation of table's configuration option 
'table.exec.sink.keyed-shuffle' in ExecutionConfigOptions

This closes #21959
---
 docs/layouts/shortcodes/generated/execution_config_configuration.html | 4 ++--
 .../org/apache/flink/table/api/config/ExecutionConfigOptions.java | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/docs/layouts/shortcodes/generated/execution_config_configuration.html 
b/docs/layouts/shortcodes/generated/execution_config_configuration.html
index d837fd4ab66..94dd987303f 100644
--- a/docs/layouts/shortcodes/generated/execution_config_configuration.html
+++ b/docs/layouts/shortcodes/generated/execution_config_configuration.html
@@ -15,7 +15,7 @@
 The max number of async i/o operation that the async lookup 
join can trigger.
 
 
-table.exec.async-lookup.output-mode Batch Streaming
+table.exec.async-lookup.output-mode Batch Streaming
 ORDERED
 Enum
 Output mode for asynchronous operations which will convert to 
{@see AsyncDataStream.OutputMode}, ORDERED by default. If set to 
ALLOW_UNORDERED, will attempt to use {@see 
AsyncDataStream.OutputMode.UNORDERED} when it does not affect the correctness 
of the result, otherwise ORDERED will be still used.Possible 
values:"ORDERED""ALLOW_UNORDERED"
@@ -92,7 +92,7 @@ By default no operator is disabled.
 table.exec.sink.keyed-shuffle Streaming
 AUTO
 Enum
-In order to minimize the distributed disorder problem when 
writing data into table with primary keys that many users suffers. FLINK will 
auto add a keyed shuffle by default when the sink's parallelism differs from 
upstream operator and upstream is append only. This works only when the 
upstream ensures the multi-records' order on the primary key, if not, the added 
shuffle can not solve the problem (In this situation, a more proper way is to 
consider the deduplicate operati [...]
+In order to minimize the distributed disorder problem when 
writing data into table with primary keys that many users suffers. FLINK will 
auto add a keyed shuffle by default when the sink parallelism differs from 
upstream operator and sink parallelism is not 1. This works only when the 
upstream ensures the multi-records' order on the primary key, if not, the added 
shuffle can not solve the problem (In this situation, a more proper way is to 
consider the deduplicate operati [...]
 
 
 table.exec.sink.not-null-enforcer Batch Streaming
diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
index c14546c192f..db457dee0ff 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
@@ -160,7 +160,7 @@ public class ExecutionConfigOptions {
 Description.builder()
 .text(
 "In order to minimize the 
distributed disorder problem when writing data into table with primary keys 
that many users suffers. "
-+ "FLINK will auto add a 
keyed shuffle by default when the sink's parallelism differs from upstream 
operator and upstream is append only. "
++ "FLINK will auto add a 
keyed shuffle by default when the sink parallelism differs from upstream 
operator and sink parallelism is not 1. "
 + "This works only when 
the upstream ensures the multi-records' order on the primary key, if not, the 
added shuffle can not solve "
 + "the problem (In this 
situation, a more proper way is to consider the deduplicate operation for the 
source firstly or use an "
 + "upsert source with 
primary key definition which truly reflect the records evolution).")

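A minimal sketch (not part of the commit; table definitions are omitted) of setting the 'table.exec.sink.keyed-shuffle' option described above; NONE and FORCE are, to the best of our understanding of this option, the alternatives to the default AUTO behaviour:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KeyedShuffleOptionSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Default AUTO: a keyed shuffle is added automatically when the sink parallelism
        // differs from the upstream operator and the sink parallelism is not 1.
        // NONE disables the automatic shuffle; FORCE always adds it.
        tableEnv.getConfig().getConfiguration()
                .setString("table.exec.sink.keyed-shuffle", "NONE");
        // ... define source/sink tables and submit an INSERT INTO statement as usual.
    }
}
```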


[flink-web] branch asf-site updated: Add Anton Kalashnikov to the community list

2023-02-20 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ba371fb57 Add Anton Kalashnikov to the community list
ba371fb57 is described below

commit ba371fb5711973a0057aad01adc68290db8989d5
Author: Piotr Nowojski 
AuthorDate: Mon Feb 20 15:01:29 2023 +0100

Add Anton Kalashnikov to the community list
---
 community.md| 6 ++
 community.zh.md | 6 ++
 2 files changed, 12 insertions(+)

diff --git a/community.md b/community.md
index 8bb281dec..c0f638823 100644
--- a/community.md
+++ b/community.md
@@ -399,6 +399,12 @@ The list below could be outdated. Please find the most 
up-to-date list PMC, Committer
 fhueske
   
+  
+https://avatars.githubusercontent.com/u/3996532?s=50"; 
class="committer-avatar" />
+Anton Kalashnikov
+Committer
+akalash
+  
   
 https://avatars3.githubusercontent.com/u/498957?v=3&s=50"; 
class="committer-avatar">
 Vasia Kalavri
diff --git a/community.zh.md b/community.zh.md
index 2f105f001..7a7e4f8e5 100644
--- a/community.zh.md
+++ b/community.zh.md
@@ -392,6 +392,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最
 PMC, Committer
 fhueske
   
+  
+https://avatars.githubusercontent.com/u/3996532?s=50"; 
class="committer-avatar" />
+Anton Kalashnikov
+Committer
+akalash
+  
   
 https://avatars3.githubusercontent.com/u/498957?v=3&s=50"; 
class="committer-avatar">
 Vasia Kalavri



[flink-web] branch asf-site updated: Rebuild website

2023-02-20 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 6658e8daa Rebuild website
6658e8daa is described below

commit 6658e8daa31b979626d18f93d8bf65cd6c925a4c
Author: fanrui <1996fan...@gmail.com>
AuthorDate: Mon Feb 20 22:18:12 2023 +0800

Rebuild website
---
 content/community.html| 12 
 content/zh/community.html | 12 
 2 files changed, 24 insertions(+)

diff --git a/content/community.html b/content/community.html
index 9b169c921..776692a98 100644
--- a/content/community.html
+++ b/content/community.html
@@ -629,6 +629,12 @@ Thanks,
 PMC, Committer, VP
 sewen
   
+  
+https://avatars2.githubusercontent.com/u/38427477?s=50"; 
class="committer-avatar" />
+Rui Fan
+Committer
+fanrui
+  
   
 https://avatars1.githubusercontent.com/u/5880972?s=50"; 
class="committer-avatar" />
 Gyula Fóra
@@ -749,6 +755,12 @@ Thanks,
 PMC, Committer
 mxm
   
+  
+https://avatars2.githubusercontent.com/u/8957547?s=50"; 
class="committer-avatar" />
+Piotr Nowojski
+PMC, Committer
+pnowojski
+  
   
 https://avatars2.githubusercontent.com/u/1941681?s=50"; 
class="committer-avatar" />
 Chiwan Park
diff --git a/content/zh/community.html b/content/zh/community.html
index 1bad5f526..a029dd664 100644
--- a/content/zh/community.html
+++ b/content/zh/community.html
@@ -620,6 +620,12 @@ Thanks,
 PMC, Committer, VP
 sewen
   
+  
+https://avatars2.githubusercontent.com/u/38427477?s=50"; 
class="committer-avatar" />
+Rui Fan
+Committer
+fanrui
+  
   
 https://avatars1.githubusercontent.com/u/5880972?s=50"; 
class="committer-avatar" />
 Gyula Fóra
@@ -740,6 +746,12 @@ Thanks,
 PMC, Committer
 mxm
   
+  
+https://avatars2.githubusercontent.com/u/8957547?s=50"; 
class="committer-avatar" />
+Piotr Nowojski
+PMC, Committer
+pnowojski
+  
   
 https://avatars2.githubusercontent.com/u/1941681?s=50"; 
class="committer-avatar" />
 Chiwan Park



[flink] branch release-1.17 updated (0b90d3c3843 -> 2046d431ec0)

2023-02-20 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


from 0b90d3c3843 [FLINK-31130][doc] Improve version info shown in the doc 
of SQL Gateway
 add 2046d431ec0 [FLINK-31119][tests] Use even # of slots

No new revisions were added by this update.

Summary of changes:
 .../test/java/org/apache/flink/runtime/jobmaster/JobRecoveryITCase.java | 2 +-
 .../org/apache/flink/runtime/jobmaster/TestingAbstractInvokables.java   | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)



[flink] branch master updated (afdc079465c -> d3902e70dfd)

2023-02-20 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from afdc079465c [FLINK-30824][hive] Add document for option 
'table.exec.hive.native-agg-function.enabled'
 add aeac88b1abe [hotfix][tests] Remove unused class
 add d3902e70dfd [FLINK-31119][tests] Use even # of slots

No new revisions were added by this update.

Summary of changes:
 .../flink/runtime/jobmaster/JobRecoveryITCase.java |  2 +-
 .../jobmaster/TestingAbstractInvokables.java   | 38 ++
 2 files changed, 3 insertions(+), 37 deletions(-)



[flink-web] 01/01: Add Anton Kalashnikov to the community list

2023-02-20 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch update-committer
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 69f0fa58601a80a8d719be8b540a24035ff26f4e
Author: Piotr Nowojski 
AuthorDate: Mon Feb 20 15:01:29 2023 +0100

Add Anton Kalashnikov to the community list
---
 community.md| 6 ++
 community.zh.md | 6 ++
 2 files changed, 12 insertions(+)

diff --git a/community.md b/community.md
index 8bb281dec..c0f638823 100644
--- a/community.md
+++ b/community.md
@@ -399,6 +399,12 @@ The list below could be outdated. Please find the most 
up-to-date list PMC, Committer
 fhueske
   
+  
+https://avatars.githubusercontent.com/u/3996532?s=50"; 
class="committer-avatar" />
+Anton Kalashnikov
+Committer
+akalash
+  
   
 https://avatars3.githubusercontent.com/u/498957?v=3&s=50"; 
class="committer-avatar">
 Vasia Kalavri
diff --git a/community.zh.md b/community.zh.md
index 2f105f001..7a7e4f8e5 100644
--- a/community.zh.md
+++ b/community.zh.md
@@ -392,6 +392,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最
 PMC, Committer
 fhueske
   
+  
+https://avatars.githubusercontent.com/u/3996532?s=50"; 
class="committer-avatar" />
+Anton Kalashnikov
+Committer
+akalash
+  
   
 https://avatars3.githubusercontent.com/u/498957?v=3&s=50"; 
class="committer-avatar">
 Vasia Kalavri



[flink-web] branch update-committer updated (47bde578c -> 69f0fa586)

2023-02-20 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a change to branch update-committer
in repository https://gitbox.apache.org/repos/asf/flink-web.git


 discard 47bde578c Add Anton Kalashnikov to the community list
 new 69f0fa586 Add Anton Kalashnikov to the community list

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (47bde578c)
\
 N -- N -- N   refs/heads/update-committer (69f0fa586)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 community.md| 12 ++--
 community.zh.md | 12 ++--
 2 files changed, 12 insertions(+), 12 deletions(-)



[flink-web] 01/01: Add Anton Kalashnikov to the community list

2023-02-20 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch update-committer
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 47bde578cca80226790e78a6f1d90cc1b831db40
Author: Piotr Nowojski 
AuthorDate: Mon Feb 20 15:01:29 2023 +0100

Add Anton Kalashnikov to the community list
---
 community.md| 6 ++
 community.zh.md | 6 ++
 2 files changed, 12 insertions(+)

diff --git a/community.md b/community.md
index 8bb281dec..8d2fe0e6f 100644
--- a/community.md
+++ b/community.md
@@ -723,6 +723,12 @@ The list below could be outdated. Please find the most 
up-to-date list Committer
 lincoln
   
+  
+https://avatars.githubusercontent.com/u/3996532?s=50"; 
class="committer-avatar" />
+Anton Kalashnikov
+Committer
+akalash
+  
 
 
 You can reach committers directly at `@apache.org`. A list of all 
contributors can be found [here]({{ site.FLINK_CONTRIBUTORS_URL }}).
diff --git a/community.zh.md b/community.zh.md
index 2f105f001..4e6408d92 100644
--- a/community.zh.md
+++ b/community.zh.md
@@ -704,6 +704,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最
 Committer
 lincoln
   
+  
+https://avatars.githubusercontent.com/u/3996532?s=50"; 
class="committer-avatar" />
+Anton Kalashnikov
+Committer
+akalash
+  
 
 
 可以通过 `@apache.org` 直接联系 committer。可以在 [这里]({{ 
site.FLINK_CONTRIBUTORS_URL }}) 找到所有的贡献者。



[flink-web] branch update-committer created (now 47bde578c)

2023-02-20 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a change to branch update-committer
in repository https://gitbox.apache.org/repos/asf/flink-web.git


  at 47bde578c Add Anton Kalashnikov to the community list

This branch includes the following new commits:

 new 47bde578c Add Anton Kalashnikov to the community list

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[flink-web] branch asf-site updated: Add Piotr Nowojski and Rui Fan to the community list (#609)

2023-02-20 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new efadb81e9 Add Piotr Nowojski and Rui Fan to the community list (#609)
efadb81e9 is described below

commit efadb81e93fbba0ccbb089011f72fbe814ebfdf4
Author: Rui Fan 
AuthorDate: Mon Feb 20 21:54:45 2023 +0800

Add Piotr Nowojski and Rui Fan to the community list (#609)
---
 community.md| 12 
 community.zh.md | 12 
 2 files changed, 24 insertions(+)

diff --git a/community.md b/community.md
index 72d55f7eb..8bb281dec 100644
--- a/community.md
+++ b/community.md
@@ -345,6 +345,12 @@ The list below could be outdated. Please find the most 
up-to-date list PMC, Committer, VP
 sewen
   
+  
+https://avatars2.githubusercontent.com/u/38427477?s=50"; 
class="committer-avatar">
+Rui Fan
+Committer
+fanrui
+  
   
 https://avatars1.githubusercontent.com/u/5880972?s=50"; 
class="committer-avatar">
 Gyula Fóra
@@ -465,6 +471,12 @@ The list below could be outdated. Please find the most 
up-to-date list PMC, Committer
 mxm
   
+  
+https://avatars2.githubusercontent.com/u/8957547?s=50"; 
class="committer-avatar">
+Piotr Nowojski
+PMC, Committer
+pnowojski
+  
   
 https://avatars2.githubusercontent.com/u/1941681?s=50"; 
class="committer-avatar">
 Chiwan Park
diff --git a/community.zh.md b/community.zh.md
index 3505521f0..2f105f001 100644
--- a/community.zh.md
+++ b/community.zh.md
@@ -338,6 +338,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最
 PMC, Committer, VP
 sewen
   
+  
+https://avatars2.githubusercontent.com/u/38427477?s=50"; 
class="committer-avatar">
+Rui Fan
+Committer
+fanrui
+  
   
 https://avatars1.githubusercontent.com/u/5880972?s=50"; 
class="committer-avatar">
 Gyula Fóra
@@ -458,6 +464,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最
 PMC, Committer
 mxm
   
+  
+https://avatars2.githubusercontent.com/u/8957547?s=50"; 
class="committer-avatar">
+Piotr Nowojski
+PMC, Committer
+pnowojski
+  
   
 https://avatars2.githubusercontent.com/u/1941681?s=50"; 
class="committer-avatar">
 Chiwan Park



[flink-web] branch update-committer created (now 020ae90cd)

2023-02-20 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a change to branch update-committer
in repository https://gitbox.apache.org/repos/asf/flink-web.git


  at 020ae90cd Add Piotr Nowojski and Rui Fan to the community list

This branch includes the following new commits:

 new 020ae90cd Add Piotr Nowojski and Rui Fan to the community list

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[flink-web] 01/01: Add Piotr Nowojski and Rui Fan to the community list

2023-02-20 Thread fanrui
This is an automated email from the ASF dual-hosted git repository.

fanrui pushed a commit to branch update-committer
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 020ae90cd51d7002c6105bc402aca22f8bc36dda
Author: fanrui <1996fan...@gmail.com>
AuthorDate: Mon Feb 20 21:34:33 2023 +0800

Add Piotr Nowojski and Rui Fan to the community list
---
 community.md| 12 
 community.zh.md | 12 
 2 files changed, 24 insertions(+)

diff --git a/community.md b/community.md
index 72d55f7eb..8bb281dec 100644
--- a/community.md
+++ b/community.md
@@ -345,6 +345,12 @@ The list below could be outdated. Please find the most 
up-to-date list PMC, Committer, VP
 sewen
   
+  
+https://avatars2.githubusercontent.com/u/38427477?s=50"; 
class="committer-avatar">
+Rui Fan
+Committer
+fanrui
+  
   
 https://avatars1.githubusercontent.com/u/5880972?s=50"; 
class="committer-avatar">
 Gyula Fóra
@@ -465,6 +471,12 @@ The list below could be outdated. Please find the most 
up-to-date list PMC, Committer
 mxm
   
+  
+https://avatars2.githubusercontent.com/u/8957547?s=50"; 
class="committer-avatar">
+Piotr Nowojski
+PMC, Committer
+pnowojski
+  
   
 https://avatars2.githubusercontent.com/u/1941681?s=50"; 
class="committer-avatar">
 Chiwan Park
diff --git a/community.zh.md b/community.zh.md
index 3505521f0..2f105f001 100644
--- a/community.zh.md
+++ b/community.zh.md
@@ -338,6 +338,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最
 PMC, Committer, VP
 sewen
   
+  
+https://avatars2.githubusercontent.com/u/38427477?s=50"; 
class="committer-avatar">
+Rui Fan
+Committer
+fanrui
+  
   
 https://avatars1.githubusercontent.com/u/5880972?s=50"; 
class="committer-avatar">
 Gyula Fóra
@@ -458,6 +464,12 @@ Flink Forward 大会每年都会在世界的不同地方举办。关于大会最
 PMC, Committer
 mxm
   
+  
+https://avatars2.githubusercontent.com/u/8957547?s=50"; 
class="committer-avatar">
+Piotr Nowojski
+PMC, Committer
+pnowojski
+  
   
 https://avatars2.githubusercontent.com/u/1941681?s=50"; 
class="committer-avatar">
 Chiwan Park



[flink] branch master updated: [FLINK-30824][hive] Add document for option 'table.exec.hive.native-agg-function.enabled'

2023-02-20 Thread godfrey
This is an automated email from the ASF dual-hosted git repository.

godfrey pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new afdc079465c [FLINK-30824][hive] Add document for option 
'table.exec.hive.native-agg-function.enabled'
afdc079465c is described below

commit afdc079465c393d98bf2b3607a75b1fc9d58d281
Author: Ron 
AuthorDate: Mon Feb 20 20:52:47 2023 +0800

[FLINK-30824][hive] Add document for option 
'table.exec.hive.native-agg-function.enabled'

This closes #21789
---
 .../docs/connectors/table/hive/hive_functions.md   | 28 ++
 .../docs/connectors/table/hive/hive_functions.md   | 28 ++
 2 files changed, 56 insertions(+)

diff --git a/docs/content.zh/docs/connectors/table/hive/hive_functions.md 
b/docs/content.zh/docs/connectors/table/hive/hive_functions.md
index da540f9399a..b76331944c1 100644
--- a/docs/content.zh/docs/connectors/table/hive/hive_functions.md
+++ b/docs/content.zh/docs/connectors/table/hive/hive_functions.md
@@ -73,6 +73,34 @@ Some Hive built-in functions in older versions have [thread 
safety issues](https
 We recommend users patch their own Hive to fix them.
 {{< /hint >}}
 
+## Use Native Hive Aggregate Functions
+
+If [HiveModule]({{< ref "docs/dev/table/modules" >}}#hivemodule) is loaded 
with a higher priority than CoreModule, Flink will try to use the Hive built-in 
function first. And then for Hive built-in aggregation functions,
+Flink can only use the sort-based aggregation operator now. From Flink 1.17, 
we have introduced some native hive aggregation functions, which can be 
executed using the hash-based aggregation operator.
+Currently, only five functions are supported, namely sum/count/avg/min/max, 
and more aggregation functions will be supported in the future. Users can use 
the native aggregation function by turning on
+the option `table.exec.hive.native-agg-function.enabled`, which brings 
significant performance improvement to the job.
+
+
+  
+
+Key
+Default
+Type
+Description
+
+  
+  
+
+table.exec.hive.native-agg-function.enabled
+false
+Boolean
+Enabling to use native aggregation functions, hash-based 
aggregation strategy could be used that can improve the aggregation 
performance. This is a job-level option.
+
+  
+
+
+Attention The ability of the native 
aggregation functions doesn't fully align with Hive built-in aggregation 
functions now, for example, some data types are not supported. If performance 
is not a bottleneck, you don't need to turn on this option.
+
 ## Hive User Defined Functions
 
 Users can use their existing Hive User Defined Functions in Flink.
diff --git a/docs/content/docs/connectors/table/hive/hive_functions.md 
b/docs/content/docs/connectors/table/hive/hive_functions.md
index e57d27f1804..5cd7950a334 100644
--- a/docs/content/docs/connectors/table/hive/hive_functions.md
+++ b/docs/content/docs/connectors/table/hive/hive_functions.md
@@ -73,6 +73,34 @@ Some Hive built-in functions in older versions have [thread 
safety issues](https
 We recommend users patch their own Hive to fix them.
 {{< /hint >}}
 
+## Use Native Hive Aggregate Functions
+
+If [HiveModule]({{< ref "docs/dev/table/modules" >}}#hivemodule) is loaded 
with a higher priority than CoreModule, Flink will try to use the Hive built-in 
function first. And then for Hive built-in aggregation functions,
+Flink can only use the sort-based aggregation operator now. From Flink 1.17, 
we have introduced some native hive aggregation functions, which can be 
executed using the hash-based aggregation operator.
+Currently, only five functions are supported, namely sum/count/avg/min/max, 
and more aggregation functions will be supported in the future. Users can use 
the native aggregation function by turning on
+the option `table.exec.hive.native-agg-function.enabled`, which brings 
significant performance improvement to the job.
+
+
+  
+
+Key
+Default
+Type
+Description
+
+  
+  
+
+table.exec.hive.native-agg-function.enabled
+false
+Boolean
+Enabling to use native aggregation functions, hash-based 
aggregation strategy could be used that can improve the aggregation 
performance. This is a job-level option.
+
+  
+
+
+Attention The ability of the native 
aggregation functions doesn't fully align with Hive built-in aggregation 
functions now, for example, some data types are not supported. If performance 
is not a bottleneck, you don't need to turn on this option.
+
 ## Hive User Defined Functions
 
 Users can use their existing Hive User Defined Functions in Flink.

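A minimal sketch (the catalog/module setup and table name are illustrative) of turning on the option documented above from the Table API; per the documentation, only sum/count/avg/min/max currently run on the hash-based aggregation operator:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class NativeHiveAggSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
        // Assumes a Hive catalog and the HiveModule are already registered, with the
        // HiveModule at a higher priority than the core module (e.g. USE MODULES hive, core).
        tableEnv.getConfig().getConfiguration()
                .setString("table.exec.hive.native-agg-function.enabled", "true");
        // sum/count/avg/min/max can now be executed by the hash-based aggregation operator.
        tableEnv.executeSql(
                "SELECT d, sum(a), count(*), avg(b), min(c), max(c) FROM my_hive_table GROUP BY d");
    }
}
```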


[flink-connector-jdbc] branch main updated: [hotfix][docs] Correct the description of lookup.max-retries in the Chinese document

2023-02-20 Thread leonard
This is an automated email from the ASF dual-hosted git repository.

leonard pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-connector-jdbc.git


The following commit(s) were added to refs/heads/main by this push:
 new 58b0eea  [hotfix][docs] Correct the description of lookup.max-retries 
in the Chinese document
58b0eea is described below

commit 58b0eea7286e01c505f2b64f744df7784a9a6273
Author: Jiabao Sun <328226...@qq.com>
AuthorDate: Mon Feb 20 18:37:06 2023 +0800

[hotfix][docs] Correct the description of lookup.max-retries in the Chinese 
document

This closes #23.
---
 docs/content.zh/docs/connectors/table/jdbc.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content.zh/docs/connectors/table/jdbc.md 
b/docs/content.zh/docs/connectors/table/jdbc.md
index 83c8632..326d43c 100644
--- a/docs/content.zh/docs/connectors/table/jdbc.md
+++ b/docs/content.zh/docs/connectors/table/jdbc.md
@@ -246,7 +246,7 @@ ON myTopic.key = MyUserTable.id;
   可选
   3
   Integer
-  查询数据库失败的最大重试时间。
+  查询数据库失败的最大重试次数。
 
 
   sink.buffer-flush.max-rows

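For non-Chinese readers: the fix changes the description of 'lookup.max-retries' from "maximum retry time" to "maximum number of retries" for failed database lookups. A minimal sketch (connection URL, table, and schema are illustrative) of where this option appears in a JDBC lookup table definition:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JdbcLookupRetriesSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        tableEnv.executeSql(
                "CREATE TABLE MyUserTable (\n"
                        + "  id BIGINT,\n"
                        + "  name STRING\n"
                        + ") WITH (\n"
                        + "  'connector' = 'jdbc',\n"
                        + "  'url' = 'jdbc:mysql://localhost:3306/mydatabase',\n"
                        + "  'table-name' = 'users',\n"
                        // Maximum number of retries when a lookup query against the database fails.
                        + "  'lookup.max-retries' = '3'\n"
                        + ")");
    }
}
```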


[flink-connector-mongodb] branch main updated: [FLINK-31063] Prevent duplicate reading when restoring from a checkpoint

2023-02-20 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-connector-mongodb.git


The following commit(s) were added to refs/heads/main by this push:
 new f67176d  [FLINK-31063] Prevent duplicate reading when restoring from a 
checkpoint
f67176d is described below

commit f67176de5c46ae11e8c791cbd986dab5826646b9
Author: Jiabao Sun 
AuthorDate: Mon Feb 20 18:22:38 2023 +0800

[FLINK-31063] Prevent duplicate reading when restoring from a checkpoint
---
 .../mongodb/source/reader/MongoSourceReader.java   |   8 +-
 .../source/reader/emitter/MongoRecordEmitter.java  |   2 +
 .../reader/split/MongoScanSourceSplitReader.java   |   5 +
 .../mongodb/source/split/MongoScanSourceSplit.java |  23 +++-
 .../source/split/MongoScanSourceSplitState.java|  53 ++
 .../source/split/MongoSourceSplitSerializer.java   |   5 +-
 .../source/split/MongoSourceSplitState.java|  35 +++
 .../mongodb/source/MongoSourceITCase.java  | 116 +++--
 8 files changed, 211 insertions(+), 36 deletions(-)

diff --git 
a/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java
 
b/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java
index 32512cf..5eb6669 100644
--- 
a/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java
+++ 
b/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/MongoSourceReader.java
@@ -24,6 +24,8 @@ import 
org.apache.flink.connector.base.source.reader.SingleThreadMultiplexSource
 import 
org.apache.flink.connector.base.source.reader.fetcher.SingleThreadFetcherManager;
 import org.apache.flink.connector.base.source.reader.splitreader.SplitReader;
 import 
org.apache.flink.connector.base.source.reader.synchronization.FutureCompletingBlockingQueue;
+import org.apache.flink.connector.mongodb.source.split.MongoScanSourceSplit;
+import 
org.apache.flink.connector.mongodb.source.split.MongoScanSourceSplitState;
 import org.apache.flink.connector.mongodb.source.split.MongoSourceSplit;
 import org.apache.flink.connector.mongodb.source.split.MongoSourceSplitState;
 
@@ -77,7 +79,11 @@ public class MongoSourceReader
 
 @Override
 protected MongoSourceSplitState initializedState(MongoSourceSplit split) {
-return new MongoSourceSplitState(split);
+if (split instanceof MongoScanSourceSplit) {
+return new MongoScanSourceSplitState((MongoScanSourceSplit) split);
+} else {
+throw new IllegalArgumentException("Unknown split type.");
+}
 }
 
 @Override
diff --git 
a/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/emitter/MongoRecordEmitter.java
 
b/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/emitter/MongoRecordEmitter.java
index a8b7369..ab0d6aa 100644
--- 
a/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/emitter/MongoRecordEmitter.java
+++ 
b/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/emitter/MongoRecordEmitter.java
@@ -47,6 +47,8 @@ public class MongoRecordEmitter
 public void emitRecord(
 BsonDocument document, SourceOutput output, 
MongoSourceSplitState splitState)
 throws Exception {
+// Update current offset.
+splitState.updateOffset(document);
 // Sink the record to source output.
 sourceOutputWrapper.setSourceOutput(output);
 deserializationSchema.deserialize(document, sourceOutputWrapper);
diff --git 
a/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/split/MongoScanSourceSplitReader.java
 
b/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/split/MongoScanSourceSplitReader.java
index ec6c621..4702f94 100644
--- 
a/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/split/MongoScanSourceSplitReader.java
+++ 
b/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/reader/split/MongoScanSourceSplitReader.java
@@ -181,6 +181,11 @@ public class MongoScanSourceSplitReader implements MongoSourceSplitReader
+if (currentSplit.getOffset() > 0) {
+findIterable.skip(currentSplit.getOffset());
+}
+
 // Push limit down
 if (readerContext.isLimitPushedDown()) {
 findIterable.limit(readerContext.getLimit());
diff --git 
a/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/split/MongoScanSourceSplit.java
 
b/flink-connector-mongodb/src/main/java/org/apache/flink/connector/mongodb/source/split/MongoScanSourceSplit.java
index 5c23b94..c756074 100644
--- 
a/flink-connector-mongodb/src/main/java/org/a

[flink] branch release-1.17 updated: [FLINK-31130][doc] Improve version info shown in the doc of SQL Gateway

2023-02-20 Thread guoweijie
This is an automated email from the ASF dual-hosted git repository.

guoweijie pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new 0b90d3c3843 [FLINK-31130][doc] Improve version info shown in the doc 
of SQL Gateway
0b90d3c3843 is described below

commit 0b90d3c38430a8a53e11cc69f679df3584db2fe4
Author: luoyuxia 
AuthorDate: Mon Feb 20 15:15:59 2023 +0800

[FLINK-31130][doc] Improve version info shown in the doc of SQL Gateway

(cherry picked from commit 804072aef4f512f61f26d1c0c9950250da333f79)
---
 docs/content.zh/docs/dev/table/sql-gateway/overview.md | 2 +-
 docs/content/docs/dev/table/sql-gateway/overview.md| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/content.zh/docs/dev/table/sql-gateway/overview.md 
b/docs/content.zh/docs/dev/table/sql-gateway/overview.md
index b33af45b632..2f7f2dd68f2 100644
--- a/docs/content.zh/docs/dev/table/sql-gateway/overview.md
+++ b/docs/content.zh/docs/dev/table/sql-gateway/overview.md
@@ -59,7 +59,7 @@ whether the REST Endpoint is available.
 
 ```bash
 $ curl http://localhost:8083/v1/info
-{"productName":"Apache Flink","version":"1.16-SNAPSHOT"}
+{"productName":"Apache Flink","version":"{{< version >}}"}
 ```
 
 ### Running SQL Queries
diff --git a/docs/content/docs/dev/table/sql-gateway/overview.md 
b/docs/content/docs/dev/table/sql-gateway/overview.md
index b4c75ae088c..30f9a39de54 100644
--- a/docs/content/docs/dev/table/sql-gateway/overview.md
+++ b/docs/content/docs/dev/table/sql-gateway/overview.md
@@ -59,7 +59,7 @@ whether the REST Endpoint is available.
 
 ```bash
 $ curl http://localhost:8083/v1/info
-{"productName":"Apache Flink","version":"1.16-SNAPSHOT"}
+{"productName":"Apache Flink","version":"{{< version >}}"}
 ```
 
 ### Running SQL Queries



[flink] branch master updated (9b92a89c185 -> 804072aef4f)

2023-02-20 Thread guoweijie
This is an automated email from the ASF dual-hosted git repository.

guoweijie pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 9b92a89c185 [FLINK-30908][Yarn] Fix YarnResourceManagerDrivers 
handling error callbacks after being terminated.
 add 804072aef4f [FLINK-31130][doc] Improve version info shown in the doc 
of SQL Gateway

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/dev/table/sql-gateway/overview.md | 2 +-
 docs/content/docs/dev/table/sql-gateway/overview.md| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)