[jira] [Created] (FLINK-31932) Allow to configure HA on k8s without using service account

2023-04-24 Thread Arkadiusz Gasinski (Jira)
Arkadiusz Gasinski created FLINK-31932:
--

 Summary: Allow to configure HA on k8s without using service account
 Key: FLINK-31932
 URL: https://issues.apache.org/jira/browse/FLINK-31932
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes
Reporter: Arkadiusz Gasinski


I have a rather uncommon use case: I'd like to configure the job manager's high 
availability on Kubernetes without using a service account, instead interacting 
with the Kubernetes API via a username/password combination. The company's 
policy only allows read-only service accounts, so to be able to manipulate 
Kubernetes objects (e.g., ConfigMap creation/modification) I need a dedicated 
account with username/password authentication. I have such an account, but I 
haven't yet been able to configure Flink's HA with it. Any advice is greatly 
appreciated. Our Kubernetes provider is OpenShift 4.x.
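
A rough, untested sketch of one direction to try: point Flink at a kubeconfig that carries the basic-auth user. The keys `high-availability`, `high-availability.storageDir` and `kubernetes.config.file` are standard Flink options; whether the fabric8 client underneath Flink's Kubernetes HA services honors username/password credentials from a kubeconfig on OpenShift is an assumption that needs verification.

{code:yaml}
# flink-conf.yaml -- untested sketch, values are examples
high-availability: kubernetes
high-availability.storageDir: s3://my-bucket/flink-ha   # example storage path
kubernetes.config.file: /opt/flink/conf/kubeconfig      # kubeconfig carrying the basic-auth user

# /opt/flink/conf/kubeconfig (excerpt) -- assumes the cluster accepts basic auth
users:
- name: flink-ha-user
  user:
    username: <username>
    password: <password>
{code}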



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] libenchao commented on a diff in pull request #22362: [FLINK-31544][jdbc-driver] Introduce database metadata for jdbc driver

2023-04-24 Thread via GitHub


libenchao commented on code in PR #22362:
URL: https://github.com/apache/flink/pull/22362#discussion_r1176084042


##
flink-table/flink-sql-jdbc-driver/src/main/java/org/apache/flink/table/jdbc/FlinkDatabaseMetaData.java:
##
@@ -0,0 +1,402 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.jdbc;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.StatementResult;
+
+import javax.annotation.Nullable;
+
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.table.jdbc.utils.DatabaseMetaDataUtils.createCatalogsResultSet;
+import static org.apache.flink.table.jdbc.utils.DatabaseMetaDataUtils.createSchemasResultSet;
+
+/** Implementation of {@link java.sql.DatabaseMetaData} for flink jdbc driver. */
+public class FlinkDatabaseMetaData extends BaseDatabaseMetaData {
+    private final String url;
+    private final FlinkConnection connection;
+    private final Statement statement;
+    private final Executor executor;
+
+    @VisibleForTesting
+    public FlinkDatabaseMetaData(String url, FlinkConnection connection, Statement statement) {
+        this.url = url;
+        this.connection = connection;
+        this.statement = statement;
+        this.executor = connection.getExecutor();
+    }
+
+    @Override
+    public ResultSet getCatalogs() throws SQLException {
+        try (StatementResult result = catalogs()) {
+            return createCatalogsResultSet(statement, result);
+        } catch (Exception e) {
+            throw new SQLException("Get catalogs fail", e);
+        }
+    }
+
+    private StatementResult catalogs() {
+        return executor.executeStatement("SHOW CATALOGS");
+    }
+
+    @Override
+    public ResultSet getSchemas() throws SQLException {
+        try {
+            String currentCatalog = connection.getCatalog();
+            String currentDatabase = connection.getSchema();
+            List<String> catalogList = new ArrayList<>();
+            Map<String, List<String>> catalogSchemaList = new HashMap<>();
+            try (StatementResult result = catalogs()) {
+                while (result.hasNext()) {
+                    String catalog = result.next().getString(0).toString();
+                    getSchemasForCatalog(catalogList, catalogSchemaList, catalog, null);
+                }
+            }
+            connection.setCatalog(currentCatalog);
+            connection.setSchema(currentDatabase);
+
+            return createSchemasResultSet(statement, catalogList, catalogSchemaList);
+        } catch (Exception e) {
+            throw new SQLException("Get schemas fail", e);
+        }
+    }
+
+    private void getSchemasForCatalog(
+            List<String> catalogList,
+            Map<String, List<String>> catalogSchemaList,
+            String catalog,
+            @Nullable String schemaPattern)
+            throws SQLException {
+        catalogList.add(catalog);
+        connection.setCatalog(catalog);
+
+        List<String> schemas = new ArrayList<>();
+        try (StatementResult schemaResult = schemas()) {
+            while (schemaResult.hasNext()) {
+                String schema = schemaResult.next().getString(0).toString();
+                if (schemaPattern == null || schema.contains(schemaPattern)) {

Review Comment:
   
[FLIP-297](https://cwiki.apache.org/confluence/display/FLINK/FLIP-297%3A+Improve+Auxiliary+Sql+Statements) is going to support `show catalogs like` and `show databases like`. If this is not urgent, we may defer to FLIP-297 and just leave the pattern filtering unimplemented for now.
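   
   A hypothetical sketch of that eventual pushdown, reusing the `executor` already in this class (not part of this PR; assumes FLIP-297's `SHOW DATABASES LIKE` syntax):
   
   ```java
   // Hypothetical: only valid once the FLIP-297 LIKE clause is supported.
   private StatementResult schemas(@Nullable String schemaPattern) {
       return schemaPattern == null
               ? executor.executeStatement("SHOW DATABASES")
               : executor.executeStatement(
                       String.format("SHOW DATABASES LIKE '%s'", schemaPattern));
   }
   ```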



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] gaoyunhaii commented on pull request #21736: [FLINK-27518][tests] Refactor migration tests to support version update automatically

2023-04-24 Thread via GitHub


gaoyunhaii commented on PR #21736:
URL: https://github.com/apache/flink/pull/21736#issuecomment-1521247112

   Hi @XComp, sorry for the long delay; I was heavily occupied in the previous 
weeks. I'll update the PR today and try to make it mergeable this week.





[jira] [Closed] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread datariver (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

datariver closed FLINK-31914.
-
Fix Version/s: 1.15.2
   Resolution: Fixed

Related record: https://issues.apache.org/jira/browse/FLINK-28250

> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: datariver
>Priority: Major
> Fix For: 1.15.2
>
> Attachments: image-2023-04-25-13-47-25-703.png
>
>
> Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional 
> writing will be used. KafkaWriter will create FlinkKafkaInternalProducer in 
> the initialization and snapshotState methods, but there is no place to close 
> it. As Checkpoints increase, Producers will continue to accumulate. Each 
> Producer maintains a Buffer, which will cause memory leaks and Job OOM.
> By dumping an in-memory instance of Task Manager, you can see that there are 
> a lot of Producers:
> !image-2023-04-25-13-47-25-703.png!
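
A minimal, self-contained sketch of the leak pattern described above (hypothetical class and method names, not Flink's actual KafkaWriter code; the fix direction follows FLINK-28250):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical sketch: one transactional producer is opened per checkpoint. */
class ExactlyOnceWriterSketch {
    // Each producer keeps its own send buffer, so unclosed instances pile up.
    private final Deque<AutoCloseable> openProducers = new ArrayDeque<>();

    void snapshotState(long checkpointId) {
        // A fresh producer is created for the next transaction on every
        // checkpoint; without a matching close()/recycle, memory keeps growing.
        openProducers.add(newTransactionalProducer(checkpointId));
    }

    void notifyCheckpointComplete() throws Exception {
        // Fix direction: close (or recycle) the producer once its transaction
        // has been committed.
        AutoCloseable committed = openProducers.poll();
        if (committed != null) {
            committed.close();
        }
    }

    private AutoCloseable newTransactionalProducer(long checkpointId) {
        return () -> {}; // stand-in for FlinkKafkaInternalProducer
    }
}
{code}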





[jira] [Commented] (FLINK-31887) Upgrade Flink version of Flink ML to 1.16.1

2023-04-24 Thread Dong Lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716099#comment-17716099
 ] 

Dong Lin commented on FLINK-31887:
--

Merged to apache/flink-ml master branch 
5b193deedff2aa2be96fbfb5304f812caace9e12.

> Upgrade Flink version of Flink ML to 1.16.1
> ---
>
> Key: FLINK-31887
> URL: https://issues.apache.org/jira/browse/FLINK-31887
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / Machine Learning
>Reporter: Jiang Xin
>Assignee: Jiang Xin
>Priority: Major
>  Labels: pull-request-available
> Fix For: ml-2.3.0
>
>
> Upgrade Flink version of Flink ML to 1.16.1.





[jira] [Closed] (FLINK-31887) Upgrade Flink version of Flink ML to 1.16.1

2023-04-24 Thread Dong Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin closed FLINK-31887.

Resolution: Fixed

> Upgrade Flink version of Flink ML to 1.16.1
> ---
>
> Key: FLINK-31887
> URL: https://issues.apache.org/jira/browse/FLINK-31887
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / Machine Learning
>Reporter: Jiang Xin
>Assignee: Jiang Xin
>Priority: Major
>  Labels: pull-request-available
> Fix For: ml-2.3.0
>
>
> Upgrade Flink version of Flink ML to 1.16.1.





[jira] [Comment Edited] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread datariver (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716095#comment-17716095
 ] 

datariver edited comment on FLINK-31914 at 4/25/23 6:18 AM:


[~chouc] Thanks for the reply. I found that this has been fixed in 1.15.2.


was (Author: JIRAUSER292566):
Thanks for the reply. I found that this has been fixed in 1.15.2.

> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: datariver
>Priority: Major
> Attachments: image-2023-04-25-13-47-25-703.png
>
>
> Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional 
> writing will be used. KafkaWriter will create FlinkKafkaInternalProducer in 
> the initialization and snapshotState methods, but there is no place to close 
> it. As Checkpoints increase, Producers will continue to accumulate. Each 
> Producer maintains a Buffer, which will cause memory leaks and Job OOM.
> By dumping an in-memory instance of Task Manager, you can see that there are 
> a lot of Producers:
> !image-2023-04-25-13-47-25-703.png!





[GitHub] [flink-ml] lindong28 merged pull request #234: [FLINK-31887] Upgrade Flink version of Flink ML to 1.16.1

2023-04-24 Thread via GitHub


lindong28 merged PR #234:
URL: https://github.com/apache/flink-ml/pull/234





[jira] [Commented] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread datariver (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716095#comment-17716095
 ] 

datariver commented on FLINK-31914:
---

Thanks for the reply. I found that this has been fixed in 1.15.2.

> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: datariver
>Priority: Major
> Attachments: image-2023-04-25-13-47-25-703.png
>
>
> Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional 
> writing will be used. KafkaWriter will create FlinkKafkaInternalProducer in 
> the initialization and snapshotState methods, but there is no place to close 
> it. As Checkpoints increase, Producers will continue to accumulate. Each 
> Producer maintains a Buffer, which will cause memory leaks and Job OOM.
> By dumping an in-memory instance of Task Manager, you can see that there are 
> a lot of Producers:
> !image-2023-04-25-13-47-25-703.png!





[jira] [Commented] (FLINK-31898) Flink k8s autoscaler does not work as expected

2023-04-24 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716096#comment-17716096
 ] 

Gyula Fora commented on FLINK-31898:


I am a bit confused here by all the graphs :) Based on some of your graphs the 
data rate fluctuates quite a lot.

But let's focus on the Flink operator side metrics and forget numRecordsIn for 
a second.
It would be great to look at TRUE_PROCESSING_RATE, TARGET_DATA_RATE and the 
scale-up/down threshold metrics.
Also, for this experiment please remove the 
`kubernetes.operator.job.autoscaler.scale-down.max-factor` config.

Not sure how many Kafka partitions you have, but pipeline.max-parallelism: "8" 
seems a bit limiting for the possible parallelism settings.
You could try a max parallelism of 120 and instead set 
kubernetes.operator.job.autoscaler.vertex.max-parallelism: "8"; a sketch 
follows below.
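
A minimal sketch of the suggested setup in the FlinkDeployment spec (illustrative values, not recommendations):

{code:yaml}
flinkConfiguration:
  pipeline.max-parallelism: "120"
  kubernetes.operator.job.autoscaler.vertex.max-parallelism: "8"
  # kubernetes.operator.job.autoscaler.scale-down.max-factor left unset for this experiment
{code}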

> Flink k8s autoscaler does not work as expected
> --
>
> Key: FLINK-31898
> URL: https://issues.apache.org/jira/browse/FLINK-31898
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler, Kubernetes Operator
>Affects Versions: kubernetes-operator-1.4.0
>Reporter: Kyungmin Kim
>Priority: Major
> Attachments: image-2023-04-24-10-54-58-083.png, 
> image-2023-04-24-13-27-17-478.png, image-2023-04-24-13-28-15-462.png, 
> image-2023-04-24-13-31-06-420.png, image-2023-04-24-13-41-43-040.png, 
> image-2023-04-24-13-42-40-124.png, image-2023-04-24-13-43-49-431.png, 
> image-2023-04-24-13-44-17-479.png, image-2023-04-24-14-18-12-450.png, 
> image-2023-04-24-16-47-35-697.png
>
>
> Hi I'm using Flink k8s autoscaler to automatically deploy jobs in proper 
> parallelism.
> I was using 1.4 version but I found that it does not scale down properly 
> because TRUE_PROCESSING_RATE becoming NaN when the tasks are idled.
> In the main branch, I checked the code was fixed to set TRUE_PROCESSING_RATE 
> to positive infinity and make scaleFactor to very low value so I'm now 
> experimentally using docker image built with main branch of 
> Flink-k8s-operator repository in my job.
> It now scales down properly but the problem is, it does not converge to the 
> optimal parallelism. It scales down well but it jumps up again to high 
> parallelism. 
>  
> Below is the experimental setup and my figure of parallelism changes result.
>  * about 40 RPS
>  * each task can process 10 TPS (intended throttling)
> !image-2023-04-24-10-54-58-083.png|width=999,height=266!
> Even using default configuration leads to the same result. What can I do 
> more? Thank you.
>  





[jira] [Updated] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread datariver (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

datariver updated FLINK-31914:
--
Affects Version/s: (was: 1.16.0)
   (was: 1.17.0)

> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: datariver
>Priority: Major
> Attachments: image-2023-04-25-13-47-25-703.png
>
>
> Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional 
> writing will be used. KafkaWriter will create FlinkKafkaInternalProducer in 
> the initialization and snapshotState methods, but there is no place to close 
> it. As Checkpoints increase, Producers will continue to accumulate. Each 
> Producer maintains a Buffer, which will cause memory leaks and Job OOM.
> By dumping an in-memory instance of Task Manager, you can see that there are 
> a lot of Producers:
> !image-2023-04-25-13-47-25-703.png!





[jira] [Updated] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread datariver (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

datariver updated FLINK-31914:
--
Attachment: (was: image-2023-04-24-16-11-22-251.png)

> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: datariver
>Priority: Major
> Attachments: image-2023-04-25-13-47-25-703.png
>
>
> Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional 
> writing will be used. KafkaWriter will create FlinkKafkaInternalProducer in 
> the initialization and snapshotState methods, but there is no place to close 
> it. As Checkpoints increase, Producers will continue to accumulate. Each 
> Producer maintains a Buffer, which will cause memory leaks and Job OOM.
> By dumping an in-memory instance of Task Manager, you can see that there are 
> a lot of Producers:
> !image-2023-04-25-13-47-25-703.png!





[jira] [Updated] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread datariver (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

datariver updated FLINK-31914:
--
Description: 
Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional writing 
will be used. KafkaWriter will create FlinkKafkaInternalProducer in the 
initialization and snapshotState methods, but there is no place to close it. As 
Checkpoints increase, Producers will continue to accumulate. Each Producer 
maintains a Buffer, which will cause memory leaks and Job OOM.
By dumping an in-memory instance of Task Manager, you can see that there are a 
lot of Producers:

!image-2023-04-25-13-47-25-703.png!

  was:
Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional writing 
will be used. KafkaWriter will create FlinkKafkaInternalProducer in the 
initialization and snapshotState methods, but there is no place to close it. As 
Checkpoints increase, Producers will continue to accumulate. Each Producer 
maintains a Buffer, which will cause memory leaks and Job OOM.
By dumping an in-memory instance of Task Manager, you can see that there are a 
lot of Producers:

!image-2023-04-24-16-11-22-251.png!


> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: datariver
>Priority: Major
> Attachments: image-2023-04-25-13-47-25-703.png
>
>
> Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional 
> writing will be used. KafkaWriter will create FlinkKafkaInternalProducer in 
> the initialization and snapshotState methods, but there is no place to close 
> it. As Checkpoints increase, Producers will continue to accumulate. Each 
> Producer maintains a Buffer, which will cause memory leaks and Job OOM.
> By dumping an in-memory instance of Task Manager, you can see that there are 
> a lot of Producers:
> !image-2023-04-25-13-47-25-703.png!





[jira] [Updated] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread datariver (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

datariver updated FLINK-31914:
--
Attachment: image-2023-04-25-13-47-25-703.png

> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: datariver
>Priority: Major
> Attachments: image-2023-04-25-13-47-25-703.png
>
>
> Hi [~arvid] , If Exactly-Once writing is enabled, Kafka's transactional 
> writing will be used. KafkaWriter will create FlinkKafkaInternalProducer in 
> the initialization and snapshotState methods, but there is no place to close 
> it. As Checkpoints increase, Producers will continue to accumulate. Each 
> Producer maintains a Buffer, which will cause memory leaks and Job OOM.
> By dumping an in-memory instance of Task Manager, you can see that there are 
> a lot of Producers:
> !image-2023-04-24-16-11-22-251.png!





[GitHub] [flink] reswqa commented on pull request #22447: [FLINK-31764][runtime] Get rid of numberOfRequestedOverdraftMemorySegments in LocalBufferPool

2023-04-24 Thread via GitHub


reswqa commented on PR #22447:
URL: https://github.com/apache/flink/pull/22447#issuecomment-1521183716

   Thanks @1996fanrui for the quick review. I have updated this in a fixup 
commit; please take a look~





[GitHub] [flink] reswqa commented on a diff in pull request #22447: [FLINK-31764][runtime] Get rid of numberOfRequestedOverdraftMemorySegments in LocalBufferPool

2023-04-24 Thread via GitHub


reswqa commented on code in PR #22447:
URL: https://github.com/apache/flink/pull/22447#discussion_r1176029546


##
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPoolTest.java:
##
@@ -878,6 +874,18 @@ private void assertRequestedBufferAndIsAvailable(
 // Helpers
 // 
 
+    private static int getNumberRequestedOverdraftBuffers(LocalBufferPool bufferPool) {
+        return bufferPool.getNumberOfRequestedMemorySegments() > bufferPool.getNumBuffers()
+                ? bufferPool.getNumberOfRequestedMemorySegments() - bufferPool.getNumBuffers()
+                : 0;

Review Comment:
   Good suggestion! 👍 



##
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPoolTest.java:
##
@@ -878,6 +874,18 @@ private void assertRequestedBufferAndIsAvailable(
 // Helpers
 // 
 
+    private static int getNumberRequestedOverdraftBuffers(LocalBufferPool bufferPool) {
+        return bufferPool.getNumberOfRequestedMemorySegments() > bufferPool.getNumBuffers()
+                ? bufferPool.getNumberOfRequestedMemorySegments() - bufferPool.getNumBuffers()
+                : 0;
+    }
+
+    private static int getNumberRequestedOrdinaryBuffers(LocalBufferPool bufferPool) {
+        return bufferPool.getNumberOfRequestedMemorySegments() > bufferPool.getNumBuffers()
+                ? bufferPool.getNumBuffers()
+                : bufferPool.getNumberOfRequestedMemorySegments();

Review Comment:
   Good suggestion! 👍
   
   






[GitHub] [flink] reswqa commented on a diff in pull request #22445: [FLINK-31876][QS] Migrate flink-queryable-state-client tests to JUnit5

2023-04-24 Thread via GitHub


reswqa commented on code in PR #22445:
URL: https://github.com/apache/flink/pull/22445#discussion_r1176019508


##
flink-queryable-state/flink-queryable-state-client-java/src/test/java/org/apache/flink/queryablestate/client/state/ImmutableAggregatingStateTest.java:
##
@@ -53,20 +54,28 @@ public void setUp() throws Exception {
         aggrState = ImmutableAggregatingState.createState(aggrStateDesc, out.toByteArray());
     }
 
-    @Test(expected = UnsupportedOperationException.class)
-    public void testUpdate() throws Exception {
-        String value = aggrState.get();
-        assertEquals("42", value);
+    @Test
+    void testUpdate() {
+        assertThatThrownBy(
+                () -> {
+                    String value = aggrState.get();
+                    assertEquals("42", value);

Review Comment:
   All assertions should be migrated to AssertJ.



##
flink-queryable-state/flink-queryable-state-client-java/src/test/java/org/apache/flink/queryablestate/client/state/ImmutableAggregatingStateTest.java:
##
@@ -53,20 +54,28 @@ public void setUp() throws Exception {
         aggrState = ImmutableAggregatingState.createState(aggrStateDesc, out.toByteArray());
     }
 
-    @Test(expected = UnsupportedOperationException.class)
-    public void testUpdate() throws Exception {
-        String value = aggrState.get();
-        assertEquals("42", value);
+    @Test
+    void testUpdate() {
+        assertThatThrownBy(
+                () -> {
+                    String value = aggrState.get();
+                    assertEquals("42", value);
 
-        aggrState.add(54L);
+                    aggrState.add(54L);

Review Comment:
   We should make the code block passed to `assertThatThrownBy` as small as 
possible; ideally it contains only the code that is expected to throw.
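   
   A sketch of the suggested shape (assuming `get()` itself does not throw and only `add()` raises `UnsupportedOperationException`; both assertions are static imports from `org.assertj.core.api.Assertions`):
   
   ```java
   // The plain assertion stays outside; only the throwing call goes inside.
   assertThat(aggrState.get()).isEqualTo("42");
   assertThatThrownBy(() -> aggrState.add(54L))
           .isInstanceOf(UnsupportedOperationException.class);
   ```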






[jira] [Updated] (FLINK-31748) Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar

2023-04-24 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-31748:
---
Affects Version/s: pulsar-4.0.0

> Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar
> 
>
> Key: FLINK-31748
> URL: https://issues.apache.org/jira/browse/FLINK-31748
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: pulsar-4.0.0
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: pulsar-4.0.1
>
>






[jira] [Updated] (FLINK-31748) Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar

2023-04-24 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-31748:
---
Component/s: Connectors / Pulsar

> Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar
> 
>
> Key: FLINK-31748
> URL: https://issues.apache.org/jira/browse/FLINK-31748
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: pulsar-4.0.1
>
>






[jira] [Closed] (FLINK-31748) Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar

2023-04-24 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-31748.
--
Fix Version/s: pulsar-4.0.1
   Resolution: Fixed

> Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar
> 
>
> Key: FLINK-31748
> URL: https://issues.apache.org/jira/browse/FLINK-31748
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: pulsar-4.0.1
>
>






[jira] [Commented] (FLINK-31748) Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar

2023-04-24 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716082#comment-17716082
 ] 

Weijie Guo commented on FLINK-31748:


Temporarily fixed in main via bbb636a433ded42f61b1e54811d046b590c4d514.

Feel free to reopen this if further fixes are needed.

> Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar
> 
>
> Key: FLINK-31748
> URL: https://issues.apache.org/jira/browse/FLINK-31748
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Comment Edited] (FLINK-24785) Relocate RocksDB's log under flink log directory by default

2023-04-24 Thread jinghaihang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707083#comment-17707083
 ] 

jinghaihang edited comment on FLINK-24785 at 4/25/23 4:44 AM:
--

!image-2023-04-25-12-43-57-488.png|width=820,height=296!


was (Author: assassinj):
!https://km.sankuai.com/api/file/cdn/1613529928/31761470454?contentType=1&isNewContent=false|width=812,height=350!

> Relocate RocksDB's log under flink log directory by default
> ---
>
> Key: FLINK-24785
> URL: https://issues.apache.org/jira/browse/FLINK-24785
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
> Attachments: image-2023-04-25-12-43-57-488.png
>
>
> Previously, RocksDB's log locates at its own DB folder, which makes the 
> debuging RocksDB not so easy. We could let RocksDB's log stay in Flink's log 
> directory by default.





[jira] [Assigned] (FLINK-31748) Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar

2023-04-24 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-31748:
--

Assignee: Zili Chen  (was: Yufan Sheng)

> Adapt SplitFetcherManager#removeSplit for flink-connector-pulsar
> 
>
> Key: FLINK-31748
> URL: https://issues.apache.org/jira/browse/FLINK-31748
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-31918) Pulsar Source does not failing build against Flink 1.18 on nightly CI

2023-04-24 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716077#comment-17716077
 ] 

Weijie Guo commented on FLINK-31918:


This compile problem should already be fixed by 
https://github.com/apache/flink-connector-pulsar/pull/43.

However, the CI status not turning to failed still needs to be resolved.
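
For context, the quoted compile error below is about the new abstract method `SplitFetcherManager#removeSplits`; a hypothetical sketch of the shape of the missing override (simplified, see the PR above for the real implementation):

{code:java}
// Hypothetical shape only -- the actual change is in flink-connector-pulsar#43.
@Override
public void removeSplits(List<PulsarPartitionSplit> splitsToRemove) {
    // Hand the removed splits to the split fetcher so that the underlying
    // Pulsar consumers can be shut down.
}
{code}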

> Pulsar Source does not failing build against Flink 1.18 on nightly CI
> -
>
> Key: FLINK-31918
> URL: https://issues.apache.org/jira/browse/FLINK-31918
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Reporter: Danny Cranmer
>Priority: Major
>
> [https://github.com/apache/flink-connector-pulsar/actions/runs/4783897408/jobs/8504710249]
>  
> {{Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile 
> (default-compile) on project flink-connector-pulsar: Compilation failure }}
> {{[150|https://github.com/apache/flink-connector-pulsar/actions/runs/4783897408/jobs/8504710249#step:13:151]Error:
>   
> /home/runner/work/flink-connector-pulsar/flink-connector-pulsar/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/reader/PulsarSourceFetcherManager.java:[52,8]
>  org.apache.flink.connector.pulsar.source.reader.PulsarSourceFetcherManager 
> is not abstract and does not override abstract method 
> removeSplits(java.util.List)
>  in 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager}}





[jira] [Assigned] (FLINK-31560) Savepoint failing to complete with ExternallyInducedSources

2023-04-24 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei reassigned FLINK-31560:
--

Assignee: Yanfei Lei

> Savepoint failing to complete with ExternallyInducedSources
> ---
>
> Key: FLINK-31560
> URL: https://issues.apache.org/jira/browse/FLINK-31560
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Fan Yang
>Assignee: Yanfei Lei
>Priority: Major
> Attachments: image-2023-03-23-18-03-05-943.png, 
> image-2023-03-23-18-19-24-482.png, jobmanager_log.txt, 
> taskmanager_172.28.17.19_6123-f2dbff_log, 
> tmp_tm_172.28.17.19_6123-f2dbff_tmp_job_83ad4f408d0e7bf30f940ddfa5fe00e3_op_WindowOperator_137df028a798f504a6900a4081c9990c__1_1__uuid_edc681f0-3825-45ce-a123-9ff69ce6d8f1_db_LOG
>
>
> Flink version: 1.16.0
>  
> We are using Flink to run some streaming applications with Pravega as source 
> and use window and reduce transformations. We use RocksDB state backend with 
> incremental checkpointing enabled. We don't enable the latest changelog state 
> backend.
> When we try to stop the job, we encounter issues with the savepoint failing 
> to complete for the job. This happens most of the time. On rare occasions, 
> the job gets canceled suddenly with its savepoint get completed successfully.
> Savepointing shows below error:
>  
> {code:java}
> 2023-03-22 08:55:57,521 [jobmanager-io-thread-1] WARN  
> org.apache.flink.runtime.checkpoint.CheckpointFailureManager [] - Failed to 
> trigger or complete checkpoint 189 for job 7354442cd6f7c121249360680c04284d. 
> (0 consecutive failed attempts so 
> far)org.apache.flink.runtime.checkpoint.CheckpointException: Failure to 
> finalize checkpoint.    at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.finalizeCheckpoint(CheckpointCoordinator.java:1375)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1265)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:1157)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$acknowledgeCheckpoint$1(ExecutionGraphHandler.java:89)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$processCheckpointCoordinatorMessage$3(ExecutionGraphHandler.java:119)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]    at java.lang.Thread.run(Thread.java:829) [?:?]
> Caused by: java.io.IOException: Unknown implementation of StreamStateHandle: 
> class org.apache.flink.runtime.state.PlaceholderStreamStateHandle    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeStreamStateHandle(MetadataV2V3SerializerBase.java:699)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeStreamStateHandleMap(MetadataV2V3SerializerBase.java:813)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeKeyedStateHandle(MetadataV2V3SerializerBase.java:344)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeKeyedStateCol(MetadataV2V3SerializerBase.java:269)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeSubtaskState(MetadataV2V3SerializerBase.java:262)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serializeSubtaskState(MetadataV3Serializer.java:142)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serializeOperatorState(MetadataV3Serializer.java:122)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeMetadata(MetadataV2V3SerializerBase.java:146)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serialize(MetadataV3Serializer.java:83)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV4Serializer.serialize(MetadataV4Serializer.java:56)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.Checkpoints.storeCheckpointMetadata(Checkpoints.java:100)
>  ~[flink-dist-1.16.0.jar:1.16.0]   

[jira] [Commented] (FLINK-31560) Savepoint failing to complete with ExternallyInducedSources

2023-04-24 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716075#comment-17716075
 ] 

Yanfei Lei commented on FLINK-31560:


[~fyang86] After reading the 
[code|https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTask.java#L136]
 of the legacy source, I found that FLINK-25256 only covers the FLIP-27 
interfaces. I will open a PR to fix this.

> Savepoint failing to complete with ExternallyInducedSources
> ---
>
> Key: FLINK-31560
> URL: https://issues.apache.org/jira/browse/FLINK-31560
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Fan Yang
>Priority: Major
> Attachments: image-2023-03-23-18-03-05-943.png, 
> image-2023-03-23-18-19-24-482.png, jobmanager_log.txt, 
> taskmanager_172.28.17.19_6123-f2dbff_log, 
> tmp_tm_172.28.17.19_6123-f2dbff_tmp_job_83ad4f408d0e7bf30f940ddfa5fe00e3_op_WindowOperator_137df028a798f504a6900a4081c9990c__1_1__uuid_edc681f0-3825-45ce-a123-9ff69ce6d8f1_db_LOG
>
>
> Flink version: 1.16.0
>  
> We are using Flink to run some streaming applications with Pravega as source 
> and use window and reduce transformations. We use RocksDB state backend with 
> incremental checkpointing enabled. We don't enable the latest changelog state 
> backend.
> When we try to stop the job, we encounter issues with the savepoint failing 
> to complete for the job. This happens most of the time. On rare occasions, 
> the job gets canceled suddenly with its savepoint get completed successfully.
> Savepointing shows below error:
>  
> {code:java}
> 2023-03-22 08:55:57,521 [jobmanager-io-thread-1] WARN  
> org.apache.flink.runtime.checkpoint.CheckpointFailureManager [] - Failed to 
> trigger or complete checkpoint 189 for job 7354442cd6f7c121249360680c04284d. 
> (0 consecutive failed attempts so 
> far)org.apache.flink.runtime.checkpoint.CheckpointException: Failure to 
> finalize checkpoint.    at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.finalizeCheckpoint(CheckpointCoordinator.java:1375)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1265)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:1157)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$acknowledgeCheckpoint$1(ExecutionGraphHandler.java:89)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$processCheckpointCoordinatorMessage$3(ExecutionGraphHandler.java:119)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]    at java.lang.Thread.run(Thread.java:829) [?:?]
> Caused by: java.io.IOException: Unknown implementation of StreamStateHandle: 
> class org.apache.flink.runtime.state.PlaceholderStreamStateHandle    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeStreamStateHandle(MetadataV2V3SerializerBase.java:699)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeStreamStateHandleMap(MetadataV2V3SerializerBase.java:813)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeKeyedStateHandle(MetadataV2V3SerializerBase.java:344)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeKeyedStateCol(MetadataV2V3SerializerBase.java:269)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeSubtaskState(MetadataV2V3SerializerBase.java:262)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serializeSubtaskState(MetadataV3Serializer.java:142)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serializeOperatorState(MetadataV3Serializer.java:122)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeMetadata(MetadataV2V3SerializerBase.java:146)
>  ~[flink-dist-1.16.0.jar:1.16.0]    at 
> org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serialize(MetadataV3Serializer.java:83)
>  ~[flink-dist-1.16.0.jar

[jira] [Commented] (FLINK-31929) HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s with IPv6 stack

2023-04-24 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716074#comment-17716074
 ] 

Weijie Guo commented on FLINK-31929:


Thanks [~caiyi], would you mind submitting a pull request and adding some tests 
for this?

> HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s 
> with IPv6 stack
> 
>
> Key: FLINK-31929
> URL: https://issues.apache.org/jira/browse/FLINK-31929
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
> Environment: K8s with IPv6 stack
>Reporter: caiyi
>Priority: Blocker
> Attachments: 1.jpg
>
>
> As attachment below, String.format works not properly if address is IPv6, 
> new URL(protocol, address, port, "").toString() is correct.
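
A minimal, self-contained illustration of the reported behavior (address and port are made-up examples):

{code:java}
import java.net.MalformedURLException;
import java.net.URL;

public class WebMonitorAddressDemo {
    public static void main(String[] args) throws MalformedURLException {
        String address = "2001:db8::1"; // example IPv6 literal

        // Naive formatting produces an ambiguous, invalid URL for IPv6 hosts:
        System.out.println(String.format("http://%s:%d", address, 8081));
        // -> http://2001:db8::1:8081

        // java.net.URL brackets the IPv6 host automatically:
        System.out.println(new URL("http", address, 8081, "").toString());
        // -> http://[2001:db8::1]:8081
    }
}
{code}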





[jira] [Assigned] (FLINK-31931) Exception history page should not link to a non-existent TM log page.

2023-04-24 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-31931:
--

Assignee: Junrui Li

> Exception history page should not link to a non-existent TM log page.
> -
>
> Key: FLINK-31931
> URL: https://issues.apache.org/jira/browse/FLINK-31931
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
> Fix For: 1.18.0
>
>
> In FLINK-30358, we supported to show the task manager ID on the exception 
> history page and added a link to the task manager ID to jump to the task 
> manager page. However, if the task manager no longer exists when clicking the 
> link to jump, the page will continue to load and the following error log will 
> be continuously printed in the JM log. This will trouble users, and should be 
> optimized.
> {code:java}
> 2023-04-25 11:40:50,109 [flink-akka.actor.default-dispatcher-35] ERROR 
> org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerDetailsHandler 
> [] - Unhandled exception.
> org.apache.flink.runtime.resourcemanager.exceptions.UnknownTaskExecutorException:
>  No TaskExecutor registered under container_01.
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManager.requestTaskManagerDetailsInfo(ResourceManager.java:697)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source) ~[?:?]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_362]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:309)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:307)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:222)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168)
>  ~[?:?]
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
>   at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
> ~[?:?]
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at akka.actor.Actor.aroundReceive(Actor.scala:537) ~[?:?]
>   at akka.actor.Actor.aroundReceive$(Actor.scala:535) ~[?:?]
>   at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) ~[?:?]
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579) ~[?:?]
>   at akka.actor.ActorCell.invoke(ActorCell.scala:547) ~[?:?]
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) ~[?:?]
>   at akka.dispatch.Mailbox.run(Mailbox.scala:231) ~[?:?]
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:243) ~[?:?]
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
> [?:1.8.0_362]
>   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
> [?:1.8.0_362]
>   at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
> [?:1.8.0_362]
>   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) 
> [?:1.8.0_362]
> {code}





[jira] [Created] (FLINK-31931) Exception history page should not link to a non-existent TM log page.

2023-04-24 Thread Junrui Li (Jira)
Junrui Li created FLINK-31931:
-

 Summary: Exception history page should not link to a non-existent 
TM log page.
 Key: FLINK-31931
 URL: https://issues.apache.org/jira/browse/FLINK-31931
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Reporter: Junrui Li
 Fix For: 1.18.0


In FLINK-30358, we added support for showing the task manager ID on the 
exception history page, with a link on the ID that jumps to the task manager 
page. However, if the task manager no longer exists when the link is clicked, 
the page keeps loading and the following error is continuously printed in the 
JM log. This confuses users and should be improved.
{code:java}
2023-04-25 11:40:50,109 [flink-akka.actor.default-dispatcher-35] ERROR 
org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerDetailsHandler [] 
- Unhandled exception.
org.apache.flink.runtime.resourcemanager.exceptions.UnknownTaskExecutorException:
 No TaskExecutor registered under container_01.
  at 
org.apache.flink.runtime.resourcemanager.ResourceManager.requestTaskManagerDetailsInfo(ResourceManager.java:697)
 ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
  at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source) ~[?:?]
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_362]
  at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
  at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:309)
 ~[?:?]
  at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
 ~[?:?]
  at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:307)
 ~[?:?]
  at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:222)
 ~[?:?]
  at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84)
 ~[?:?]
  at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168)
 ~[?:?]
  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
  at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
  at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) 
~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
  at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]
  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) 
~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
  at akka.actor.Actor.aroundReceive(Actor.scala:537) ~[?:?]
  at akka.actor.Actor.aroundReceive$(Actor.scala:535) ~[?:?]
  at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) ~[?:?]
  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579) ~[?:?]
  at akka.actor.ActorCell.invoke(ActorCell.scala:547) ~[?:?]
  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) ~[?:?]
  at akka.dispatch.Mailbox.run(Mailbox.scala:231) ~[?:?]
  at akka.dispatch.Mailbox.exec(Mailbox.scala:243) ~[?:?]
  at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
[?:1.8.0_362]
  at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
[?:1.8.0_362]
  at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
[?:1.8.0_362]
  at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) 
[?:1.8.0_362]
{code}





[GitHub] [flink] flinkbot commented on pull request #22481: [FLINK-27805] bump orc version to 1.7.8

2023-04-24 Thread via GitHub


flinkbot commented on PR #22481:
URL: https://github.com/apache/flink/pull/22481#issuecomment-1521126055

   
   ## CI report:
   
   * 8a31508bb40da5f27e122d679175d8d545e6b38e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Assigned] (FLINK-31887) Upgrade Flink version of Flink ML to 1.16.1

2023-04-24 Thread Dong Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin reassigned FLINK-31887:


Assignee: Jiang Xin

> Upgrade Flink version of Flink ML to 1.16.1
> ---
>
> Key: FLINK-31887
> URL: https://issues.apache.org/jira/browse/FLINK-31887
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / Machine Learning
>Reporter: Jiang Xin
>Assignee: Jiang Xin
>Priority: Major
>  Labels: pull-request-available
> Fix For: ml-2.3.0
>
>
> Upgrade Flink version of Flink ML to 1.16.1.





[GitHub] [flink-ml] lindong28 commented on pull request #234: [FLINK-31887] Upgrade Flink version of Flink ML to 1.16.1

2023-04-24 Thread via GitHub


lindong28 commented on PR #234:
URL: https://github.com/apache/flink-ml/pull/234#issuecomment-1521123703

   Thanks for the update. LGTM.





[jira] [Updated] (FLINK-27805) Bump ORC version to 1.7.8

2023-04-24 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-27805:
---
Summary: Bump ORC version to 1.7.8  (was: Bump ORC version to 1.7.5)

> Bump ORC version to 1.7.8
> -
>
> Key: FLINK-27805
> URL: https://issues.apache.org/jira/browse/FLINK-27805
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: jia liu
>Assignee: jia liu
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> The current ORC dependency version of flink is 1.5.6, but the latest ORC 
> version 1.7.x has been released for a long time.
> In order to use these new features (zstd compression, column encryption 
> etc.), we should upgrade the orc version.





[GitHub] [flink] pgaref opened a new pull request, #22481: [FLINK-27805] bump orc version to 1.7.8

2023-04-24 Thread via GitHub


pgaref opened a new pull request, #22481:
URL: https://github.com/apache/flink/pull/22481

   Bump to ORC 1.7.8 -- [Release Notes](https://orc.apache.org/news/releases/).
   ORC now supports writers backed by an FSDataOutputStream (instead of just 
paths, as before), so this also cleans up NoHivePhysicalWriterImpl and 
PhysicalWriterImpl.





[GitHub] [flink] 1996fanrui commented on a diff in pull request #22447: [FLINK-31764][runtime] Get rid of numberOfRequestedOverdraftMemorySegments in LocalBufferPool

2023-04-24 Thread via GitHub


1996fanrui commented on code in PR #22447:
URL: https://github.com/apache/flink/pull/22447#discussion_r1175969922


##
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPoolTest.java:
##
@@ -878,6 +874,18 @@ private void assertRequestedBufferAndIsAvailable(
 // Helpers
 // 
 
+    private static int getNumberRequestedOverdraftBuffers(LocalBufferPool bufferPool) {
+        return bufferPool.getNumberOfRequestedMemorySegments() > bufferPool.getNumBuffers()
+                ? bufferPool.getNumberOfRequestedMemorySegments() - bufferPool.getNumBuffers()
+                : 0;

Review Comment:
   How about this?
   
   ```suggestion
   return Math.max(bufferPool.getNumberOfRequestedMemorySegments() - bufferPool.getNumBuffers(), 0);
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPoolTest.java:
##
@@ -878,6 +874,18 @@ private void assertRequestedBufferAndIsAvailable(
 // Helpers
 // 
 
+    private static int getNumberRequestedOverdraftBuffers(LocalBufferPool bufferPool) {
+        return bufferPool.getNumberOfRequestedMemorySegments() > bufferPool.getNumBuffers()
+                ? bufferPool.getNumberOfRequestedMemorySegments() - bufferPool.getNumBuffers()
+                : 0;
+    }
+
+    private static int getNumberRequestedOrdinaryBuffers(LocalBufferPool bufferPool) {
+        return bufferPool.getNumberOfRequestedMemorySegments() > bufferPool.getNumBuffers()
+                ? bufferPool.getNumBuffers()
+                : bufferPool.getNumberOfRequestedMemorySegments();

Review Comment:
   ```suggestion
   return Math.min(bufferPool.getNumBuffers(), bufferPool.getNumberOfRequestedMemorySegments());
   ```
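
As a quick sanity check of the suggested rewrites, here is a standalone snippet (class and method names are illustrative, not Flink's) showing that the original ternary and the `Math.max` form agree; the `Math.min` suggestion for the ordinary-buffer count follows symmetrically:

```java
public class OverdraftMathCheck {
    // Original ternary form from the test helper.
    static int overdraftTernary(int requested, int poolSize) {
        return requested > poolSize ? requested - poolSize : 0;
    }

    // Suggested Math.max form.
    static int overdraftMax(int requested, int poolSize) {
        return Math.max(requested - poolSize, 0);
    }

    public static void main(String[] args) {
        int[][] cases = {{10, 8}, {8, 8}, {3, 8}, {0, 5}};
        for (int[] c : cases) {
            if (overdraftTernary(c[0], c[1]) != overdraftMax(c[0], c[1])) {
                throw new AssertionError("forms disagree for " + c[0] + ", " + c[1]);
            }
            System.out.printf("requested=%d, poolSize=%d -> overdraft=%d%n",
                    c[0], c[1], overdraftMax(c[0], c[1]));
        }
    }
}
```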






[GitHub] [flink-ml] jiangxin369 commented on pull request #234: [FLINK-31887] Upgrade Flink version of Flink ML to 1.16.1

2023-04-24 Thread via GitHub


jiangxin369 commented on PR #234:
URL: https://github.com/apache/flink-ml/pull/234#issuecomment-1521110530

   @lindong28 Could you have another look?





[GitHub] [flink-ml] jiangxin369 commented on a diff in pull request #234: [FLINK-31887] Upgrade Flink version of Flink ML to 1.16.1

2023-04-24 Thread via GitHub


jiangxin369 commented on code in PR #234:
URL: https://github.com/apache/flink-ml/pull/234#discussion_r1175973954


##
flink-ml-iteration/src/main/java/org/apache/flink/iteration/operator/OperatorUtils.java:
##
@@ -136,31 +135,20 @@ public static StreamConfig createWrappedOperatorConfig(StreamConfig config, Clas
         wrappedConfig.setTypeSerializerOut(
                 ((IterationRecordSerializer) typeSerializerOut).getInnerSerializer());
 
-        Stream.concat(
-                        config.getChainedOutputs(cl).stream(),
-                        config.getNonChainedOutputs(cl).stream())
+        config.getChainedOutputs(cl)
                 .forEach(
-                        edge -> {
-                            OutputTag outputTag = edge.getOutputTag();
-                            if (outputTag != null) {
-                                TypeSerializer typeSerializerSideOut =
-                                        config.getTypeSerializerSideOut(outputTag, cl);
-                                checkState(
-                                        typeSerializerSideOut instanceof IterationRecordSerializer,
-                                        "The serializer of side output with tag[%s] should be IterationRecordSerializer but it is %s.",
-                                        outputTag,
-                                        typeSerializerSideOut);
-                                wrappedConfig.setTypeSerializerSideOut(
-                                        new OutputTag<>(
-                                                outputTag.getId(),
-                                                ((IterationRecordTypeInfo) outputTag.getTypeInfo())
-                                                        .getInnerTypeInfo()),
-                                        ((IterationRecordSerializer) typeSerializerSideOut)
-                                                .getInnerSerializer());
-                            }
+                        chainedOutput -> {
+                            OutputTag outputTag = chainedOutput.getOutputTag();
+                            setTypeSerializerSideOut(outputTag, config, wrappedConfig, cl);
+                        });
+        config.getOperatorNonChainedOutputs(cl)

Review Comment:
   The reason why I separated these two parts is that `config.getChainedOutputs(cl)` returns `List<StreamEdge>` while `config.getOperatorNonChainedOutputs(cl)` returns `List<NonChainedOutput>`; the element types differ, so the two lists are not suitable for iteration in the same loop.
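
A toy illustration of the point, with made-up record types standing in for the two output classes: without a common supertype, the lists cannot share a single loop body, so each gets its own `forEach`.

```java
import java.util.List;

public class TwoListDemo {
    // Stand-ins for the chained/non-chained output classes (illustrative only).
    record ChainedOutput(String tag) {}
    record NonChainedOutput(String tag) {}

    public static void main(String[] args) {
        List<ChainedOutput> chained = List.of(new ChainedOutput("a"));
        List<NonChainedOutput> nonChained = List.of(new NonChainedOutput("b"));

        // Concatenating the two streams would only yield Stream<Object>,
        // losing the element types, so each list is iterated on its own:
        chained.forEach(c -> System.out.println("chained side output: " + c.tag()));
        nonChained.forEach(n -> System.out.println("non-chained side output: " + n.tag()));
    }
}
```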






[GitHub] [flink] flinkbot commented on pull request #22480: [FLINK-31897][Backport 1.16] Fix the unstable test ClientTest#testRequestUnavailableHost

2023-04-24 Thread via GitHub


flinkbot commented on PR #22480:
URL: https://github.com/apache/flink/pull/22480#issuecomment-1521102621

   
   ## CI report:
   
   * 64c595bd2cab5baaed949d7702500f800e2c9f1d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] flinkbot commented on pull request #22479: [FLINK-31897][Backport 1.17] Fix the unstable test ClientTest#testRequestUnavailableHost

2023-04-24 Thread via GitHub


flinkbot commented on PR #22479:
URL: https://github.com/apache/flink/pull/22479#issuecomment-1521100027

   
   ## CI report:
   
   * bb255f2fe9e9341ded3851bdc51816ea54ca8783 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] TanYuxin-tyx opened a new pull request, #22480: [FLINK-31897][Backport 1.16] Fix the unstable test ClientTest#testRequestUnavailableHost

2023-04-24 Thread via GitHub


TanYuxin-tyx opened a new pull request, #22480:
URL: https://github.com/apache/flink/pull/22480

   
   
   ## What is the purpose of the change
   
   Back port to 1.16; the PR has already been reviewed at https://github.com/apache/flink/pull/22471.
   
   Fix the unstable test ClientTest#testRequestUnavailableHost.
   
   
   ## Brief change log
   
 - *Fix the unstable test ClientTest#testRequestUnavailableHost.*
   
   
   ## Verifying this change
   
   This change is a test fix without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   





[GitHub] [flink] TanYuxin-tyx opened a new pull request, #22479: [FLINK-31897][Backport 1.17] Fix the unstable test ClientTest#testRequestUnavailableHost

2023-04-24 Thread via GitHub


TanYuxin-tyx opened a new pull request, #22479:
URL: https://github.com/apache/flink/pull/22479

   
   
   ## What is the purpose of the change
   
   Back port to 1.17; the PR has already been reviewed at https://github.com/apache/flink/pull/22471.
   
   Fix the unstable test ClientTest#testRequestUnavailableHost.
   
   
   ## Brief change log
   
 - *Fix the unstable test ClientTest#testRequestUnavailableHost.*
   
   
   ## Verifying this change
   
   This change is a test fix without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   





[jira] [Closed] (FLINK-31907) Remove unused fields inside of ExecutionSlotSharingGroupBuilder

2023-04-24 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan closed FLINK-31907.
---
Fix Version/s: 1.18.0
   Resolution: Fixed

> Remove unused fields inside of ExecutionSlotSharingGroupBuilder
> ---
>
> Key: FLINK-31907
> URL: https://issues.apache.org/jira/browse/FLINK-31907
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> FLINK-22767 introduced `availableGroupsForJobVertex` to improve the
> performance of task-to-slot scheduling.
> After FLINK-22767, `executionSlotSharingGroups`[2] is unused and can be
> removed.
>  
> [1] 
> https://github.com/apache/flink/blob/f9b3e0b7bc0432001b4a197539a0712b16e0b33b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/LocalInputPreferredSlotSharingStrategy.java#L153
> [2] 
> https://github.com/apache/flink/blob/f9b3e0b7bc0432001b4a197539a0712b16e0b33b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/LocalInputPreferredSlotSharingStrategy.java#L136





[jira] [Commented] (FLINK-31907) Remove unused fields inside of ExecutionSlotSharingGroupBuilder

2023-04-24 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716069#comment-17716069
 ] 

Rui Fan commented on FLINK-31907:
-

Thanks [~wanglijie] for the review.

master(1.18) via ed42b094496ffaa503d3add44a91fad5669c9dd7

> Remove unused fields inside of ExecutionSlotSharingGroupBuilder
> ---
>
> Key: FLINK-31907
> URL: https://issues.apache.org/jira/browse/FLINK-31907
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-22767 introduced `availableGroupsForJobVertex` to improve the
> performance of task-to-slot scheduling.
> After FLINK-22767, `executionSlotSharingGroups`[2] is unused and can be
> removed.
>  
> [1] 
> https://github.com/apache/flink/blob/f9b3e0b7bc0432001b4a197539a0712b16e0b33b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/LocalInputPreferredSlotSharingStrategy.java#L153
> [2] 
> https://github.com/apache/flink/blob/f9b3e0b7bc0432001b4a197539a0712b16e0b33b/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/LocalInputPreferredSlotSharingStrategy.java#L136





[GitHub] [flink] 1996fanrui merged pull request #22472: [FLINK-31907][coordination] Remove the unused field inside ExecutionSlotSharingGroupBuilder

2023-04-24 Thread via GitHub


1996fanrui merged PR #22472:
URL: https://github.com/apache/flink/pull/22472





[GitHub] [flink] 1996fanrui commented on pull request #22472: [FLINK-31907][coordination] Remove the unused field inside ExecutionSlotSharingGroupBuilder

2023-04-24 Thread via GitHub


1996fanrui commented on PR #22472:
URL: https://github.com/apache/flink/pull/22472#issuecomment-1521094800

   Hi @wanglijie95, thanks for your review! Merging.





[jira] [Commented] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-04-24 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716062#comment-17716062
 ] 

jackylau commented on FLINK-31908:
--

Hi [~jark], I will fix this in Calcite first: https://issues.apache.org/jira/browse/CALCITE-5674

> cast expr to type with not null  should not change nullable of expr
> ---
>
> Key: FLINK-31908
> URL: https://issues.apache.org/jira/browse/FLINK-31908
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: jackylau
>Priority: Major
>
> {code:java}
> Stream<TestSetSpec> getTestSetSpecs() {
>     return Stream.of(
>             TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
>                     .onFieldsWithData(new Integer[]{1, 2}, 3)
>                     .andDataTypes(DataTypes.ARRAY(INT()), INT())
>                     .testSqlResult(
>                             "CAST(f0 AS ARRAY<DOUBLE NOT NULL>)",
>                             new Double[]{1.0d, 2.0d},
>                             DataTypes.ARRAY(DOUBLE().notNull())));
> } {code}
> but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a
> Calcite bug.





[jira] [Commented] (FLINK-31900) Fix some typo in java doc, comments and assertion message

2023-04-24 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716063#comment-17716063
 ] 

Feifan Wang commented on FLINK-31900:
-

Thanks [~Weijie Guo] for reviewing and merging the PR!

> Fix some typo in java doc, comments and assertion message
> -
>
> Key: FLINK-31900
> URL: https://issues.apache.org/jira/browse/FLINK-31900
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Feifan Wang
>Assignee: Feifan Wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> As the title.





[jira] [Closed] (FLINK-31900) Fix some typo in java doc, comments and assertion message

2023-04-24 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-31900.
--
Fix Version/s: 1.18.0
   Resolution: Fixed

master(1.18) via e21de2dbaeb6b624b3c1f9e5c204743d81841a86.

> Fix some typo in java doc, comments and assertion message
> -
>
> Key: FLINK-31900
> URL: https://issues.apache.org/jira/browse/FLINK-31900
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Feifan Wang
>Assignee: Feifan Wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> As the title.





[jira] [Assigned] (FLINK-31900) Fix some typo in java doc, comments and assertion message

2023-04-24 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-31900:
--

Assignee: Feifan Wang

> Fix some typo in java doc, comments and assertion message
> -
>
> Key: FLINK-31900
> URL: https://issues.apache.org/jira/browse/FLINK-31900
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Feifan Wang
>Assignee: Feifan Wang
>Priority: Minor
>  Labels: pull-request-available
>
> As the title.





[GitHub] [flink] reswqa merged pull request #22470: [FLINK-31900] Fix some typo in java doc, comments and assertion message

2023-04-24 Thread via GitHub


reswqa merged PR #22470:
URL: https://github.com/apache/flink/pull/22470





[GitHub] [flink] FangYongs commented on a diff in pull request #22362: [FLINK-31544][jdbc-driver] Introduce database metadata for jdbc driver

2023-04-24 Thread via GitHub


FangYongs commented on code in PR #22362:
URL: https://github.com/apache/flink/pull/22362#discussion_r1175955153


##
flink-table/flink-sql-jdbc-driver/src/main/java/org/apache/flink/table/jdbc/FlinkDatabaseMetaData.java:
##
@@ -0,0 +1,402 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.jdbc;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.StatementResult;
+
+import javax.annotation.Nullable;
+
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.table.jdbc.utils.DatabaseMetaDataUtils.createCatalogsResultSet;
+import static org.apache.flink.table.jdbc.utils.DatabaseMetaDataUtils.createSchemasResultSet;
+
+/** Implementation of {@link java.sql.DatabaseMetaData} for flink jdbc driver. */
+public class FlinkDatabaseMetaData extends BaseDatabaseMetaData {
+    private final String url;
+    private final FlinkConnection connection;
+    private final Statement statement;
+    private final Executor executor;
+
+    @VisibleForTesting
+    public FlinkDatabaseMetaData(String url, FlinkConnection connection, Statement statement) {
+        this.url = url;
+        this.connection = connection;
+        this.statement = statement;
+        this.executor = connection.getExecutor();
+    }
+
+    @Override
+    public ResultSet getCatalogs() throws SQLException {
+        try (StatementResult result = catalogs()) {
+            return createCatalogsResultSet(statement, result);
+        } catch (Exception e) {
+            throw new SQLException("Get catalogs fail", e);
+        }
+    }
+
+    private StatementResult catalogs() {
+        return executor.executeStatement("SHOW CATALOGS");
+    }
+
+    @Override
+    public ResultSet getSchemas() throws SQLException {
+        try {
+            String currentCatalog = connection.getCatalog();
+            String currentDatabase = connection.getSchema();
+            List<String> catalogList = new ArrayList<>();
+            Map<String, List<String>> catalogSchemaList = new HashMap<>();
+            try (StatementResult result = catalogs()) {
+                while (result.hasNext()) {
+                    String catalog = result.next().getString(0).toString();
+                    getSchemasForCatalog(catalogList, catalogSchemaList, catalog, null);
+                }
+            }
+            connection.setCatalog(currentCatalog);
+            connection.setSchema(currentDatabase);
+
+            return createSchemasResultSet(statement, catalogList, catalogSchemaList);
+        } catch (Exception e) {
+            throw new SQLException("Get schemas fail", e);
+        }
+    }
+
+    private void getSchemasForCatalog(
+            List<String> catalogList,
+            Map<String, List<String>> catalogSchemaList,
+            String catalog,
+            @Nullable String schemaPattern)
+            throws SQLException {
+        catalogList.add(catalog);
+        connection.setCatalog(catalog);
+
+        List<String> schemas = new ArrayList<>();
+        try (StatementResult schemaResult = schemas()) {
+            while (schemaResult.hasNext()) {
+                String schema = schemaResult.next().getString(0).toString();
+                if (schemaPattern == null || schema.contains(schemaPattern)) {

Review Comment:
   Got it. I checked the implementation in the Trino JDBC driver; there the `schemaPattern` is applied with SQL LIKE semantics. I will update it.
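
For illustration only (this is not the driver's actual code): JDBC metadata patterns follow SQL LIKE semantics, where `%` matches any sequence of characters and `_` exactly one. A minimal sketch of evaluating such a pattern in Java:

```java
import java.util.regex.Pattern;

public class JdbcPatternDemo {
    // Convert a JDBC/SQL LIKE pattern ('%' = any sequence, '_' = any single
    // character) into an equivalent Java regex; everything else is literal.
    static Pattern likeToRegex(String likePattern) {
        StringBuilder regex = new StringBuilder();
        for (char c : likePattern.toCharArray()) {
            switch (c) {
                case '%':
                    regex.append(".*");
                    break;
                case '_':
                    regex.append('.');
                    break;
                default:
                    regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return Pattern.compile(regex.toString());
    }

    public static void main(String[] args) {
        Pattern p = likeToRegex("def%");
        System.out.println(p.matcher("default_database").matches()); // true
        System.out.println(likeToRegex("fault").matcher("default_database").matches()); // false
    }
}
```

With LIKE semantics, a `schemaPattern` of `def%` matches `default_database`, while a pattern of `fault` does not; a plain `contains` check would wrongly accept the latter.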






[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-04-24 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1175948882


##
flink-catalog-aws/flink-catalog-aws-glue/src/main/java/org/apache/flink/table/catalog/glue/GlueCatalog.java:
##
@@ -0,0 +1,1073 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.connector.aws.config.AWSConfig;
+import org.apache.flink.connector.aws.util.AwsClientFactories;
+import org.apache.flink.table.catalog.AbstractCatalog;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogDatabase;
+import org.apache.flink.table.catalog.CatalogFunction;
+import org.apache.flink.table.catalog.CatalogPartition;
+import org.apache.flink.table.catalog.CatalogPartitionImpl;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.CatalogView;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.ResolvedCatalogTable;
+import org.apache.flink.table.catalog.ResolvedCatalogView;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.FunctionNotExistException;
+import org.apache.flink.table.catalog.exceptions.PartitionAlreadyExistsException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException;
+import org.apache.flink.table.catalog.exceptions.TableAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.exceptions.TablePartitionedException;
+import org.apache.flink.table.catalog.glue.util.GlueOperator;
+import org.apache.flink.table.catalog.glue.util.GlueUtils;
+import org.apache.flink.table.catalog.stats.CatalogColumnStatistics;
+import org.apache.flink.table.catalog.stats.CatalogTableStatistics;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.functions.FunctionIdentifier;
+import org.apache.flink.util.StringUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import software.amazon.awssdk.services.glue.GlueClient;
+import software.amazon.awssdk.services.glue.model.Partition;
+import software.amazon.awssdk.services.glue.model.Table;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** A Glue catalog implementation backed by the AWS Glue Data Catalog. */
+@PublicEvolving
+public class GlueCatalog extends AbstractCatalog {
+
+    /** Instance of GlueOperator to facilitate Glue-related actions. */
+    public GlueOperator glueOperator;
+
+    /** Default database name if not passed as part of catalog. */
+    public static final String DEFAULT_DB = "default";
+
+    private static final Logger LOG = LoggerFactory.getLogger(GlueCatalog.class);
+
+    public GlueCatalog(String catalogName, String databaseName, ReadableConfig catalogConfig) {
+        super(catalogName, databaseName);
+        checkNotNull(catalogConfig, "Catalog config cannot be null.");
+        AWSConfig awsConfig = new AWSConfig(catalogConfig);
+        GlueClient glueClient = AwsClientFactories.factory(awsConfig).glue();
+        String catalogPath = catalogConfig.get(GlueCatalogOptions.PATH);
+        this.glueOperator = new GlueOperator(getName(), catalogPath, awsConfig, glueClient);
+    }
+
+@Visible

[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-04-24 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1175946496


##
flink-catalog-aws/flink-catalog-aws-glue/src/main/java/org/apache/flink/table/catalog/glue/util/GlueOperator.java:
##
@@ -0,0 +1,1359 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue.util;
+
+import org.apache.flink.connector.aws.config.AWSConfig;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogDatabase;
+import org.apache.flink.table.catalog.CatalogFunction;
+import org.apache.flink.table.catalog.CatalogFunctionImpl;
+import org.apache.flink.table.catalog.CatalogPartition;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.CatalogView;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.glue.GlueCatalogConstants;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.factories.ManagedTableFactory;
+import org.apache.flink.table.resource.ResourceType;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import software.amazon.awssdk.services.glue.GlueClient;
+import software.amazon.awssdk.services.glue.model.AlreadyExistsException;
+import software.amazon.awssdk.services.glue.model.BatchDeleteTableRequest;
+import software.amazon.awssdk.services.glue.model.BatchDeleteTableResponse;
+import software.amazon.awssdk.services.glue.model.Column;
+import software.amazon.awssdk.services.glue.model.CreateDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.CreateDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.CreatePartitionRequest;
+import software.amazon.awssdk.services.glue.model.CreatePartitionResponse;
+import software.amazon.awssdk.services.glue.model.CreateTableRequest;
+import software.amazon.awssdk.services.glue.model.CreateTableResponse;
+import software.amazon.awssdk.services.glue.model.CreateUserDefinedFunctionRequest;
+import software.amazon.awssdk.services.glue.model.CreateUserDefinedFunctionResponse;
+import software.amazon.awssdk.services.glue.model.Database;
+import software.amazon.awssdk.services.glue.model.DatabaseInput;
+import software.amazon.awssdk.services.glue.model.DeleteDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.DeleteDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.DeletePartitionRequest;
+import software.amazon.awssdk.services.glue.model.DeletePartitionResponse;
+import software.amazon.awssdk.services.glue.model.DeleteTableRequest;
+import software.amazon.awssdk.services.glue.model.DeleteTableResponse;
+import software.amazon.awssdk.services.glue.model.DeleteUserDefinedFunctionRequest;
+import software.amazon.awssdk.services.glue.model.DeleteUserDefinedFunctionResponse;
+import software.amazon.awssdk.services.glue.model.EntityNotFoundException;
+import software.amazon.awssdk.services.glue.model.GetDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.GetDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.GetDatabasesRequest;
+import software.amazon.awssdk.services.glue.model.GetDatabasesResponse;
+import software.amazon.awssdk.services.glue.model.GetPartitionRequest;
+import software.amazon.awssdk.services.glue.model.GetPartitionResponse;
+import software.amazon.awssdk.services.glue.model.GetPartitionsRequest;
+import software.amazon.awssdk.se

[jira] [Closed] (FLINK-31588) The unaligned checkpoint type is wrong at subtask level

2023-04-24 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan closed FLINK-31588.
---
Fix Version/s: 1.16.2
   1.17.1
   Resolution: Fixed

> The unaligned checkpoint type is wrong at subtask level
> ---
>
> Key: FLINK-31588
> URL: https://issues.apache.org/jira/browse/FLINK-31588
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
> Attachments: image-2023-03-23-18-45-01-535.png
>
>
> FLINK-20488 added showing the checkpoint type for each subtask, based on the
> received `CheckpointOptions`, which was correct at the time.
> However, FLINK-27251 added timing out from aligned to unaligned checkpoint
> barriers in the output buffers, which means the received `CheckpointOptions`
> can be converted from an aligned checkpoint into an unaligned one.
> So the unaligned checkpoint type may be wrong at the subtask level. For
> example, as shown in the figure below, the unaligned checkpoint flag is
> false, but the checkpoint is actually unaligned (persisted data > 0).
>  
> !image-2023-03-23-18-45-01-535.png|width=1879,height=797!
>  
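
A minimal sketch of the idea behind the fix (the names are assumptions for illustration, not Flink's exact API): a subtask's checkpoint is effectively unaligned if the barrier either arrived unaligned or data was persisted while the alignment timed out.

```java
public class UnalignedTypeSketch {
    static boolean effectivelyUnaligned(boolean receivedUnalignedBarrier,
                                        long bytesPersistedDuringAlignment) {
        return receivedUnalignedBarrier || bytesPersistedDuringAlignment > 0;
    }

    public static void main(String[] args) {
        // Aligned barrier, but data persisted while timing out -> unaligned.
        System.out.println(effectivelyUnaligned(false, 1024)); // true
        // Aligned barrier, nothing persisted -> aligned.
        System.out.println(effectivelyUnaligned(false, 0));    // false
    }
}
```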





[jira] [Comment Edited] (FLINK-31588) The unaligned checkpoint type is wrong at subtask level

2023-04-24 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17715340#comment-17715340
 ] 

Rui Fan edited comment on FLINK-31588 at 4/25/23 2:23 AM:
--

Thanks [~pnowojski] for the review and discussion.

Merged master commit: d46d8d0f6b590f185608b23fbe8b2fcbded111de

Merged 1.17-release commit : a3c00eed371f8a2c9bfe557fa07dc7bc8a04f14e

Merged 1.16-release commit : 60cadec7c1f3beb3c9eb7e45cbd7ab9d99062e48


was (Author: fanrui):
Thanks [~pnowojski] for the review and discussion.

Merged master commit: d46d8d0f6b590f185608b23fbe8b2fcbded111de

> The unaligned checkpoint type is wrong at subtask level
> ---
>
> Key: FLINK-31588
> URL: https://issues.apache.org/jira/browse/FLINK-31588
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
> Attachments: image-2023-03-23-18-45-01-535.png
>
>
> FLINK-20488 added showing the checkpoint type for each subtask, based on the
> received `CheckpointOptions`, which was correct at the time.
> However, FLINK-27251 added timing out from aligned to unaligned checkpoint
> barriers in the output buffers, which means the received `CheckpointOptions`
> can be converted from an aligned checkpoint into an unaligned one.
> So the unaligned checkpoint type may be wrong at the subtask level. For
> example, as shown in the figure below, the unaligned checkpoint flag is
> false, but the checkpoint is actually unaligned (persisted data > 0).
>  
> !image-2023-03-23-18-45-01-535.png|width=1879,height=797!
>  





[GitHub] [flink] 1996fanrui merged pull request #22453: [FLINK-31588][checkpoint] Correct the unaligned checkpoint type based on the bytesPersistedDuringAlignment

2023-04-24 Thread via GitHub


1996fanrui merged PR #22453:
URL: https://github.com/apache/flink/pull/22453





[GitHub] [flink] 1996fanrui merged pull request #22454: [FLINK-31588][checkpoint] Correct the unaligned checkpoint type based on the bytesPersistedDuringAlignment

2023-04-24 Thread via GitHub


1996fanrui merged PR #22454:
URL: https://github.com/apache/flink/pull/22454





[GitHub] [flink] 1996fanrui commented on pull request #22454: [FLINK-31588][checkpoint] Correct the unaligned checkpoint type based on the bytesPersistedDuringAlignment

2023-04-24 Thread via GitHub


1996fanrui commented on PR #22454:
URL: https://github.com/apache/flink/pull/22454#issuecomment-1521063518

   Thanks for the review, merging.





[GitHub] [flink] 1996fanrui commented on pull request #22453: [FLINK-31588][checkpoint] Correct the unaligned checkpoint type based on the bytesPersistedDuringAlignment

2023-04-24 Thread via GitHub


1996fanrui commented on PR #22453:
URL: https://github.com/apache/flink/pull/22453#issuecomment-1521062875

   Thanks for the review, merging.





[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-04-24 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1175947290


##
flink-connector-aws-base/src/main/java/org/apache/flink/connector/aws/config/AWSConfigConstants.java:
##
@@ -144,6 +145,194 @@ public enum CredentialProvider {
 /** Read Request timeout for {@link SdkAsyncHttpClient}. */
 public static final String HTTP_CLIENT_READ_TIMEOUT_MILLIS = 
"aws.http-client.read-timeout";
 
+/**
+ * The type of {@link software.amazon.awssdk.http.SdkHttpClient} 
implementation used by {@link
+ * AwsClientFactory} If set, all AWS clients will use this specified HTTP 
client. If not set,
+ * HTTP_CLIENT_TYPE_DEFAULT will be used. For specific types supported, 
see HTTP_CLIENT_TYPE_*
+ * defined below.
+ */
+public static final String HTTP_CLIENT_TYPE = "http-client.type";
+
+/**
+ * Used to configure the connection acquisition timeout in milliseconds 
for {@link
+ * software.amazon.awssdk.http.apache.ApacheHttpClient.Builder}. This flag 
only works when
+ * {@link #HTTP_CLIENT_TYPE} is set to HTTP_CLIENT_TYPE_APACHE
+ *
+ * For more details, see
+ * 
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.Builder.html
+ */
+public static final String 
HTTP_CLIENT_APACHE_CONNECTION_ACQUISITION_TIMEOUT_MS =
+"http-client.apache.connection-acquisition-timeout-ms";
+
+/**
+ * If Glue should skip name validations It is recommended to stick to Glue 
best practice in
+ * https://docs.aws.amazon.com/athena/latest/ug/glue-best-practices.html 
to make sure operations
+ * are Hive compatible. This is only added for users that have existing 
conventions using
+ * non-standard characters. When database name and table name validation 
are skipped, there is
+ * no guarantee that downstream systems would all support the names.
+ */
+public static final String GLUE_CATALOG_SKIP_NAME_VALIDATION = 
"glue.skip-name-validation";
+
+/**
+ * Used to configure the connection max idle time in milliseconds for 
{@link
+ * software.amazon.awssdk.http.apache.ApacheHttpClient.Builder}. This flag 
only works when
+ * {@link #HTTP_CLIENT_TYPE} is set to HTTP_CLIENT_TYPE_APACHE
+ *
+ * For more details, see
+ * 
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.Builder.html
+ */
+public static final String HTTP_CLIENT_APACHE_CONNECTION_MAX_IDLE_TIME_MS =
+"http-client.apache.connection-max-idle-time-ms";
+
+/**
+ * Used to configure the connection time to live in milliseconds for {@link
+ * software.amazon.awssdk.http.apache.ApacheHttpClient.Builder}. This flag 
only works when
+ * {@link #HTTP_CLIENT_TYPE} is set to HTTP_CLIENT_TYPE_APACHE
+ *
+ * For more details, see
+ * 
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.Builder.html
+ */
+public static final String HTTP_CLIENT_APACHE_CONNECTION_TIME_TO_LIVE_MS =
+"http-client.apache.connection-time-to-live-ms";
+
+//  glue configs
+
+/**
+ * Used to configure the connection timeout in milliseconds for {@link
+ * software.amazon.awssdk.http.apache.ApacheHttpClient.Builder}. This flag 
only works when
+ * {@link #HTTP_CLIENT_TYPE} is set to HTTP_CLIENT_TYPE_APACHE
+ *
+ * For more details, see
+ * 
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.Builder.html
+ */
+public static final String HTTP_CLIENT_APACHE_CONNECTION_TIMEOUT_MS =
+"http-client.apache.connection-timeout-ms";
+
+/**
+ * Used to configure whether to enable the expect continue setting for 
{@link
+ * software.amazon.awssdk.http.apache.ApacheHttpClient.Builder}. This flag 
only works when
+ * {@link #HTTP_CLIENT_TYPE} is set to HTTP_CLIENT_TYPE_APACHE
+ *
+ * In default, this is disabled.
+ *
+ * For more details, see
+ * 
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.Builder.html
+ */
+public static final String HTTP_CLIENT_APACHE_EXPECT_CONTINUE_ENABLED =
+"http-client.apache.expect-continue-enabled";
+
+/**
+ * Used to configure the max connections number for {@link
+ * software.amazon.awssdk.http.apache.ApacheHttpClient.Builder}. This flag 
only works when
+ * {@link #HTTP_CLIENT_TYPE} is set to HTTP_CLIENT_TYPE_APACHE
+ *
+ * For more details, see
+ * 
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.Builder.html
+ */
+public static final String HTTP_CLIENT_APACHE_MAX_CONNECTIONS =
+"http-client.apache.max-connections";
+
+/**
+ * Used to configure the socket timeout in milliseconds for {@link
+ * software.amazon.a

[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-04-24 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1175945787


##
flink-catalog-aws/flink-catalog-aws-glue/src/test/java/org/apache/flink/table/catalog/glue/GlueCatalogTest.java:
##
@@ -0,0 +1,351 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue;
+
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.connector.aws.config.AWSConfig;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogDatabase;
+import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotEmptyException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.catalog.glue.util.GlueOperator;
+
+import org.apache.flink.shaded.guava30.com.google.common.collect.ImmutableMap;
+import org.apache.flink.shaded.guava30.com.google.common.collect.Lists;
+
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Test;
+import org.mockito.Mockito;
+import software.amazon.awssdk.http.SdkHttpResponse;
+import software.amazon.awssdk.services.glue.GlueClient;
+import software.amazon.awssdk.services.glue.model.CreateDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.CreateDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.Database;
+import software.amazon.awssdk.services.glue.model.DeleteDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.DeleteDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.GetDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.GetDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.GetDatabasesRequest;
+import software.amazon.awssdk.services.glue.model.GetDatabasesResponse;
+import software.amazon.awssdk.services.glue.model.GetTablesRequest;
+import software.amazon.awssdk.services.glue.model.GetTablesResponse;
+import software.amazon.awssdk.services.glue.model.GetUserDefinedFunctionsRequest;
+import software.amazon.awssdk.services.glue.model.GetUserDefinedFunctionsResponse;
+import software.amazon.awssdk.services.glue.model.Table;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/** GlueCatalog Test. */
+public class GlueCatalogTest {

Review Comment:
   
   GlueCatalog and GlueOperator UTs would be duplicates.
   As is evident from the code, `GlueCatalog` delegates its tasks to 
`GlueOperator`, which takes care of them (Strategy pattern). In the future, if 
we have AWS SDK 3, we can write a new Glue operator and plug it in, so it will 
support whichever implementation the user wants; see the sketch below.
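   
   For illustration, a minimal sketch of that delegation (hypothetical, 
simplified names; the PR's real `GlueOperator` exposes many more operations):
   
   ```java
   // Illustrative only, not the PR's actual code: the catalog delegates
   // every Glue call to an operator interface, so a different SDK-backed
   // operator can be plugged in later.
   interface GlueOperatorSketch {
       void createDatabase(String databaseName);
   }
   
   class SdkV2Operator implements GlueOperatorSketch {
       @Override
       public void createDatabase(String databaseName) {
           // The real operator would issue a CreateDatabaseRequest via GlueClient.
           System.out.println("creating Glue database: " + databaseName);
       }
   }
   
   class CatalogSketch {
       private final GlueOperatorSketch operator;
   
       CatalogSketch(GlueOperatorSketch operator) {
           this.operator = operator;
       }
   
       void createDatabase(String databaseName) {
           operator.createDatabase(databaseName); // pure delegation
       }
   }
   ```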



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zoltar9264 commented on pull request #22470: [FLINK-31900] Fix some typo in java doc, comments and assertion message

2023-04-24 Thread via GitHub


zoltar9264 commented on PR #22470:
URL: https://github.com/apache/flink/pull/22470#issuecomment-1521057618

   Thanks @reswqa and @TanYuxin-tyx for helping review the PR! Since the CI has 
passed, could you help me merge it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] Aitozi commented on pull request #22455: [FLINK-31828][planner] Use the inner serializer for RawType when doin…

2023-04-24 Thread via GitHub


Aitozi commented on PR #22455:
URL: https://github.com/apache/flink/pull/22455#issuecomment-1521057087

   @wuchong, @twalthr could you help review this fix?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread chouc (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716059#comment-17716059
 ] 

chouc commented on FLINK-31914:
---

The FlinkKafkaInternalProducer is registered in a Closer, and the Closer is 
closed when the Flink writer invokes close.
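
For illustration, a minimal sketch (simplified, assumed names) of that 
registration pattern, using Guava's Closer:

{code:java}
import com.google.common.io.Closer;

import java.io.Closeable;
import java.io.IOException;

// Every producer created at initialization or during a snapshot is
// registered with the Closer, so closing the writer closes them all at once.
class WriterSketch implements Closeable {
    private final Closer closer = Closer.create();

    Closeable createProducer(String transactionalId) {
        Closeable producer = () -> System.out.println("closed " + transactionalId);
        return closer.register(producer);
    }

    @Override
    public void close() throws IOException {
        closer.close();
    }
}
{code}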

> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: datariver
>Priority: Major
> Attachments: image-2023-04-24-16-11-22-251.png
>
>
> Hi [~arvid], if exactly-once writing is enabled, Kafka's transactional 
> writing is used. KafkaWriter creates a FlinkKafkaInternalProducer in the 
> initialization and snapshotState methods, but it is never closed. As 
> checkpoints increase, producers continue to accumulate, and each producer 
> maintains a buffer, which causes memory leaks and job OOM.
> By dumping an in-memory instance of the TaskManager, you can see that there 
> are a lot of producers:
> !image-2023-04-24-16-11-22-251.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31928) flink-kubernetes works not properly in k8s with IPv6 stack

2023-04-24 Thread caiyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caiyi updated FLINK-31928:
--
Summary: flink-kubernetes works not properly in k8s with IPv6 stack  (was: 
flink-kubernetes works not properly in IPv6 environment in k8s)

> flink-kubernetes works not properly in k8s with IPv6 stack
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Kubernetes Operator
> Environment: Kubernetes of IPv6 stack.
>Reporter: caiyi
>Priority: Blocker
>
> As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
> shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
> need to upgrade okhttp3 to version 4.10.0 and shade okhttp3 4.10.0's 
> dependency org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, or just 
> upgrade kubernetes-client to the latest version, and release a new version of 
> flink-kubernetes-operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31930) MetricQueryService works not properly in k8s with IPv6 stack

2023-04-24 Thread caiyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caiyi updated FLINK-31930:
--
Component/s: Runtime / RPC

> MetricQueryService works not properly in k8s with IPv6 stack
> 
>
> Key: FLINK-31930
> URL: https://issues.apache.org/jira/browse/FLINK-31930
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
> Environment: 1. K8s with ipv6 stack
> 2. Deploy flink-kubernetes-operator
> 3. Deploy a standalone cluster with 3 taskmanager using kubernetes 
> high-availability.
>Reporter: caiyi
>Priority: Blocker
> Attachments: 1.jpg
>
>
> As shown in the attachment below, MetricQueryService does not work properly 
> in k8s with an IPv6 stack.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31929) HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s with IPv6 stack

2023-04-24 Thread caiyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caiyi updated FLINK-31929:
--
Summary: HighAvailabilityServicesUtils.getWebMonitorAddress works not 
properly in k8s with IPv6 stack  (was: 
HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in IPv6 
stack of k8s)

> HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s 
> with IPv6 stack
> 
>
> Key: FLINK-31929
> URL: https://issues.apache.org/jira/browse/FLINK-31929
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
> Environment: K8s with IPv6 stack
>Reporter: caiyi
>Priority: Blocker
> Attachments: 1.jpg
>
>
> As shown in the attachment below, String.format does not work properly when 
> the address is IPv6; new URL(protocol, address, port, "").toString() produces 
> the correct result.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31930) MetricQueryService works not properly in k8s with IPv6 stack

2023-04-24 Thread caiyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caiyi updated FLINK-31930:
--
Attachment: 1.jpg

> MetricQueryService works not properly in k8s with IPv6 stack
> 
>
> Key: FLINK-31930
> URL: https://issues.apache.org/jira/browse/FLINK-31930
> Project: Flink
>  Issue Type: Bug
> Environment: 1. K8s with ipv6 stack
> 2. Deploy flink-kubernetes-operator
> 3. Deploy a standalone cluster with 3 taskmanager using kubernetes 
> high-availability.
>Reporter: caiyi
>Priority: Blocker
> Attachments: 1.jpg
>
>
> As shown in the attachment below, MetricQueryService does not work properly 
> in k8s with an IPv6 stack.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31930) MetricQueryService works not properly in k8s with IPv6 stack

2023-04-24 Thread caiyi (Jira)
caiyi created FLINK-31930:
-

 Summary: MetricQueryService works not properly in k8s with IPv6 
stack
 Key: FLINK-31930
 URL: https://issues.apache.org/jira/browse/FLINK-31930
 Project: Flink
  Issue Type: Bug
 Environment: 1. K8s with ipv6 stack

2. Deploy flink-kubernetes-operator

3. Deploy a standalone cluster with 3 taskmanager using kubernetes 
high-availability.
Reporter: caiyi


As shown in the attachment below, MetricQueryService does not work properly in 
k8s with an IPv6 stack.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31928) flink-kubernetes works not properly in IPv6 environment in k8s

2023-04-24 Thread caiyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caiyi updated FLINK-31928:
--
Description: 
As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
need to upgrade okhttp3 to version 4.10.0 and shade okhttp3 4.10.0's dependency 
org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, or just upgrade 
kubernetes-client to the latest version, and release a new version of 
flink-kubernetes-operator.

  was:
As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
need to upgrade okhttp3 to version 4.10.0 and shade okhttp3 4.10.0's dependency 
org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, and release a new 
version of flink-kubernetes-operator.


> flink-kubernetes works not properly in IPv6 environment in k8s
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Kubernetes Operator
> Environment: Kubernetes of IPv6 stack.
>Reporter: caiyi
>Priority: Critical
>
> As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
> shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
> need to upgrade okhttp3 to version 4.10.0 and shade okhttp3 4.10.0's 
> dependency org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, or just 
> upgrade kubernetes-client to the latest version, and release a new version of 
> flink-kubernetes-operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31928) flink-kubernetes works not properly in IPv6 environment in k8s

2023-04-24 Thread caiyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caiyi updated FLINK-31928:
--
Priority: Blocker  (was: Critical)

> flink-kubernetes works not properly in IPv6 environment in k8s
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Kubernetes Operator
> Environment: Kubernetes of IPv6 stack.
>Reporter: caiyi
>Priority: Blocker
>
> As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
> shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
> need to upgrade okhttp3 to version 4.10.0 and shade okhttp3 4.10.0's 
> dependency org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, or just 
> upgrade kubernetes-client to the latest version, and release a new version of 
> flink-kubernetes-operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31929) HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in IPv6 stack of k8s

2023-04-24 Thread caiyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caiyi updated FLINK-31929:
--
Priority: Blocker  (was: Critical)

> HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in IPv6 
> stack of k8s
> --
>
> Key: FLINK-31929
> URL: https://issues.apache.org/jira/browse/FLINK-31929
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
> Environment: K8s with IPv6 stack
>Reporter: caiyi
>Priority: Blocker
> Attachments: 1.jpg
>
>
> As shown in the attachment below, String.format does not work properly when 
> the address is IPv6; new URL(protocol, address, port, "").toString() produces 
> the correct result.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31914) Failing to close FlinkKafkaInternalProducer created in KafkaWriter with exactly-once semantic results in memory leak

2023-04-24 Thread datariver (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

datariver updated FLINK-31914:
--
Affects Version/s: 1.16.0

> Failing to close FlinkKafkaInternalProducer created in KafkaWriter with 
> exactly-once semantic results in memory leak
> 
>
> Key: FLINK-31914
> URL: https://issues.apache.org/jira/browse/FLINK-31914
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0, 1.16.0, 1.17.0
>Reporter: datariver
>Priority: Major
> Attachments: image-2023-04-24-16-11-22-251.png
>
>
> Hi [~arvid], if exactly-once writing is enabled, Kafka's transactional 
> writing is used. KafkaWriter creates a FlinkKafkaInternalProducer in the 
> initialization and snapshotState methods, but it is never closed. As 
> checkpoints increase, producers continue to accumulate, and each producer 
> maintains a buffer, which causes memory leaks and job OOM.
> By dumping an in-memory instance of the TaskManager, you can see that there 
> are a lot of producers:
> !image-2023-04-24-16-11-22-251.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31929) HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in IPv6 stack of k8s

2023-04-24 Thread caiyi (Jira)
caiyi created FLINK-31929:
-

 Summary: HighAvailabilityServicesUtils.getWebMonitorAddress works 
not properly in IPv6 stack of k8s
 Key: FLINK-31929
 URL: https://issues.apache.org/jira/browse/FLINK-31929
 Project: Flink
  Issue Type: Bug
  Components: Runtime / REST
 Environment: K8s with IPv6 stack
Reporter: caiyi
 Attachments: 1.jpg

As shown in the attachment below, String.format does not work properly when the 
address is IPv6; new URL(protocol, address, port, "").toString() produces the 
correct result.
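
A minimal sketch of the difference (illustrative host and port only):

{code:java}
import java.net.MalformedURLException;
import java.net.URL;

public class WebMonitorAddressSketch {
    public static void main(String[] args) throws MalformedURLException {
        String address = "2001:db8::1"; // an IPv6 literal
        int port = 8081;

        // Broken for IPv6: prints "http://2001:db8::1:8081", an invalid URL.
        System.out.println(String.format("http://%s:%d", address, port));

        // Correct: java.net.URL brackets the IPv6 literal and prints
        // "http://[2001:db8::1]:8081".
        System.out.println(new URL("http", address, port, "").toString());
    }
}
{code}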



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31928) flink-kubernetes works not properly in IPv6 environment in k8s

2023-04-24 Thread caiyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caiyi updated FLINK-31928:
--
Description: 
As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
need to upgrade okhttp3 to version 4.10.0 and shade okhttp3 4.10.0's dependency 
org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, and release a new 
version of flink-kubernetes-operator.

  was:
As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
need to upgrade okhttp3 to version 4.10.0 and shade its dependency 
org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, and release a new 
version of flink-kubernetes-operator.


> flink-kubernetes works not properly in IPv6 environment in k8s
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Kubernetes Operator
> Environment: Kubernetes of IPv6 stack.
>Reporter: caiyi
>Priority: Critical
>
> As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
> shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
> need to upgrade okhttp3 to version 4.10.0 and shade okhttp3 4.10.0's 
> dependency org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, and 
> release a new version of flink-kubernetes-operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31928) flink-kubernetes works not properly in IPv6 environment in k8s

2023-04-24 Thread caiyi (Jira)
caiyi created FLINK-31928:
-

 Summary: flink-kubernetes works not properly in IPv6 environment 
in k8s
 Key: FLINK-31928
 URL: https://issues.apache.org/jira/browse/FLINK-31928
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes, Kubernetes Operator
 Environment: Kubernetes of IPv6 stack.
Reporter: caiyi


As described in https://github.com/square/okhttp/issues/7368, the okhttp3 
shaded in flink-kubernetes does not work properly on an IPv6 stack in k8s. We 
need to upgrade okhttp3 to version 4.10.0 and shade its dependency 
org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, and release a new 
version of flink-kubernetes-operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31884) Upgrade ExecNode to new version causes the old serialized plan failed to pass Json SerDe round trip

2023-04-24 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-31884:
---

Assignee: Jane Chan

> Upgrade ExecNode to new version causes the old serialized plan failed to pass 
> Json SerDe round trip
> ---
>
> Key: FLINK-31884
> URL: https://issues.apache.org/jira/browse/FLINK-31884
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> h4. How to Reproduce
> Firstly, add a test to dump the compiled plan JSON.
> {code:java}
> @Test
> public void debug() {
> tableEnv.executeSql("create table foo (f0 int, f1 string) with 
> ('connector' = 'datagen')");
> tableEnv.executeSql("create table bar (f0 int, f1 string) with 
> ('connector' = 'print')");
> tableEnv.compilePlanSql("insert into bar select * from foo")
> .writeToFile(new File("/path/to/debug.json"));
> }
> {code}
> The JSON content is as follows
> {code:json}
> {
>   "flinkVersion" : "1.18",
>   "nodes" : [ {
> "id" : 1,
> "type" : "stream-exec-table-source-scan_1",
> "scanTableSource" : {
>   "table" : {
> "identifier" : "`default_catalog`.`default_database`.`foo`",
> "resolvedTable" : {
>   "schema" : {
> "columns" : [ {
>   "name" : "f0",
>   "dataType" : "INT"
> }, {
>   "name" : "f1",
>   "dataType" : "VARCHAR(2147483647)"
> } ],
> "watermarkSpecs" : [ ]
>   },
>   "partitionKeys" : [ ],
>   "options" : {
> "connector" : "datagen"
>   }
> }
>   }
> },
> "outputType" : "ROW<`f0` INT, `f1` VARCHAR(2147483647)>",
> "description" : "TableSourceScan(table=[[default_catalog, 
> default_database, foo]], fields=[f0, f1])",
> "inputProperties" : [ ]
>   }, {
> "id" : 2,
> "type" : "stream-exec-sink_1",
> "configuration" : {
>   "table.exec.sink.keyed-shuffle" : "AUTO",
>   "table.exec.sink.not-null-enforcer" : "ERROR",
>   "table.exec.sink.type-length-enforcer" : "IGNORE",
>   "table.exec.sink.upsert-materialize" : "AUTO"
> },
> "dynamicTableSink" : {
>   "table" : {
> "identifier" : "`default_catalog`.`default_database`.`bar`",
> "resolvedTable" : {
>   "schema" : {
> "columns" : [ {
>   "name" : "f0",
>   "dataType" : "INT"
> }, {
>   "name" : "f1",
>   "dataType" : "VARCHAR(2147483647)"
> } ],
> "watermarkSpecs" : [ ]
>   },
>   "partitionKeys" : [ ],
>   "options" : {
> "connector" : "print"
>   }
> }
>   }
> },
> "inputChangelogMode" : [ "INSERT" ],
> "inputProperties" : [ {
>   "requiredDistribution" : {
> "type" : "UNKNOWN"
>   },
>   "damBehavior" : "PIPELINED",
>   "priority" : 0
> } ],
> "outputType" : "ROW<`f0` INT, `f1` VARCHAR(2147483647)>",
> "description" : "Sink(table=[default_catalog.default_database.bar], 
> fields=[f0, f1])"
>   } ],
>   "edges" : [ {
> "source" : 1,
> "target" : 2,
> "shuffle" : {
>   "type" : "FORWARD"
> },
> "shuffleMode" : "PIPELINED"
>   } ]
> }
> {code}
> Then upgrade the StreamExecSink to a new version
> {code:java}
> @ExecNodeMetadata(
> name = "stream-exec-sink",
> version = 1,
> consumedOptions = {
> "table.exec.sink.not-null-enforcer",
> "table.exec.sink.type-length-enforcer",
> "table.exec.sink.upsert-materialize",
> "table.exec.sink.keyed-shuffle"
> },
> producedTransformations = {
> CommonExecSink.CONSTRAINT_VALIDATOR_TRANSFORMATION,
> CommonExecSink.PARTITIONER_TRANSFORMATION,
> CommonExecSink.UPSERT_MATERIALIZE_TRANSFORMATION,
> CommonExecSink.TIMESTAMP_INSERTER_TRANSFORMATION,
> CommonExecSink.SINK_TRANSFORMATION
> },
> minPlanVersion = FlinkVersion.v1_15,
> minStateVersion = FlinkVersion.v1_15)
> @ExecNodeMetadata(
> name = "stream-exec-sink",
> version = 2,
> consumedOptions = {
> "table.exec.sink.not-null-enforcer",
> "table.exec.sink.type-length-enforcer",
> "table.exec.sink.upsert-materialize",
> "table.exec.sink.keyed-shuffle"
> },
> producedTransformations = {
> CommonExecSink.CONSTRAINT_VALIDATOR_TRANSFORMATION,
> 

[GitHub] [flink] zzzzzzzs commented on a diff in pull request #22452: [FLINK-31881][docs] Fix the confusion in the Chinese documentation page of Flink official CEP, and add the Scala code of timesOrMo

2023-04-24 Thread via GitHub


zzzzzzzs commented on code in PR #22452:
URL: https://github.com/apache/flink/pull/22452#discussion_r1175917651


##
docs/content.zh/docs/libs/cep.md:
##
@@ -477,6 +477,11 @@ pattern.oneOrMore()
 pattern.timesOrMore(2);
 ```
 {{< /tab >}}
+{{< tab "Scala" >}}
+```scala
+pattern.timesOrMore(2)
+```
+{{< /tab >}}

Review Comment:
   Sorry, as Flink has deprecated Scala, I have removed this section from the 
document.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-31898) Flink k8s autoscaler does not work as expected

2023-04-24 Thread Kyungmin Kim (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17715632#comment-17715632
 ] 

Kyungmin Kim edited comment on FLINK-31898 at 4/25/23 12:50 AM:


Hi, I found that after upgrading Flink from 1.16.1 to 1.17.0, the 
numRecordsIn/OutPerSecond metrics of the source operator doubled.

!image-2023-04-24-16-47-35-697.png|width=554,height=191!

I upgraded on 04/05.

The same issue is posted below.

https://issues.apache.org/jira/browse/FLINK-31752?jql=project%20%3D%20FLINK%20AND%20text%20~%20numrecordsout

I found that in the 'computeTargetDataRate' method on the main branch, the 
'outputRateMultiplier' value is multiplied into the 'inputTargetRate', and I 
think it might affect the 'TARGET_DATA_RATE' value used in calculating the 
scaleFactor (making it bigger than expected).

Again, I'm using the main branch (latest), not the 1.14.0 version.

I hope it helps. Thank you!

+ Even with the code that fixes the bug above, the scale factor is halved, but 
the autoscaler still increases the parallelism too high and then repeats the 
scale-down again.


was (Author: JIRAUSER299786):
Hi, I found that after upgrading Flink from 1.16.1 to 1.17.0, the 
numRecordsIn/OutPerSecond metrics of the source operator doubled.

!image-2023-04-24-16-47-35-697.png|width=554,height=191!

I upgraded on 04/05.

The same issue is posted below.

https://issues.apache.org/jira/browse/FLINK-31752?jql=project%20%3D%20FLINK%20AND%20text%20~%20numrecordsout

I found that in the 'computeTargetDataRate' method on the main branch, the 
'outputRateMultiplier' value is multiplied into the 'inputTargetRate', and I 
think it might affect the 'TARGET_DATA_RATE' value used in calculating the 
scaleFactor (making it bigger than expected).

Again, I'm using the main branch (latest), not the 1.14.0 version.

I hope it helps. Thank you!

> Flink k8s autoscaler does not work as expected
> --
>
> Key: FLINK-31898
> URL: https://issues.apache.org/jira/browse/FLINK-31898
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler, Kubernetes Operator
>Affects Versions: kubernetes-operator-1.4.0
>Reporter: Kyungmin Kim
>Priority: Major
> Attachments: image-2023-04-24-10-54-58-083.png, 
> image-2023-04-24-13-27-17-478.png, image-2023-04-24-13-28-15-462.png, 
> image-2023-04-24-13-31-06-420.png, image-2023-04-24-13-41-43-040.png, 
> image-2023-04-24-13-42-40-124.png, image-2023-04-24-13-43-49-431.png, 
> image-2023-04-24-13-44-17-479.png, image-2023-04-24-14-18-12-450.png, 
> image-2023-04-24-16-47-35-697.png
>
>
> Hi, I'm using the Flink k8s autoscaler to automatically deploy jobs at the 
> proper parallelism.
> I was using version 1.4, but I found that it does not scale down properly 
> because TRUE_PROCESSING_RATE becomes NaN when the tasks are idle.
> In the main branch, I saw the code was fixed to set TRUE_PROCESSING_RATE to 
> positive infinity and make the scaleFactor a very low value, so I'm now 
> experimentally using a Docker image built from the main branch of the 
> flink-kubernetes-operator repository for my job.
> It now scales down properly, but the problem is that it does not converge to 
> the optimal parallelism. It scales down well but then jumps up again to high 
> parallelism.
>  
> Below are the experimental setup and the resulting parallelism changes:
>  * about 40 RPS
>  * each task can process 10 TPS (intended throttling)
> !image-2023-04-24-10-54-58-083.png|width=999,height=266!
> Even using the default configuration leads to the same result. What more can 
> I do? Thank you.
>  
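
For illustration, a rough calculation (hypothetical numbers taken from the 
setup quoted above) of how a doubled source metric inflates the derived 
parallelism:

{code:java}
public class ScaleFactorSketch {
    public static void main(String[] args) {
        double trueRate = 40.0;    // actual records/sec entering the job
        double doubledRate = 80.0; // same traffic with a double-counted metric
        double perTaskRate = 10.0; // records/sec a single subtask can process

        // ceil(40 / 10) = 4 subtasks are actually needed...
        System.out.println("expected: " + (int) Math.ceil(trueRate / perTaskRate));
        // ...but ceil(80 / 10) = 8 are requested from the doubled metric.
        System.out.println("computed: " + (int) Math.ceil(doubledRate / perTaskRate));
    }
}
{code}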



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] noomanee commented on pull request #22400: [FLINK-31689][flink-connector-files] Check expired file state before …

2023-04-24 Thread via GitHub


noomanee commented on PR #22400:
URL: https://github.com/apache/flink/pull/22400#issuecomment-1521005073

   @MartijnVisser Gentle Reminder


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-pulsar] tisonkun merged pull request #43: [FLINK-31748] Dummy implementation to fix compilation failure

2023-04-24 Thread via GitHub


tisonkun merged PR #43:
URL: https://github.com/apache/flink-connector-pulsar/pull/43


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-pulsar] tisonkun commented on pull request #43: [FLINK-31748] Dummy implementation to fix compilation failure

2023-04-24 Thread via GitHub


tisonkun commented on PR #43:
URL: https://github.com/apache/flink-connector-pulsar/pull/43#issuecomment-1520993271

   Merging...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-aws] vahmed-hamdy commented on a diff in pull request #70: [FLINK-31772] Adjusting Kinesis Ratelimiting strategy to fix performance regression

2023-04-24 Thread via GitHub


vahmed-hamdy commented on code in PR #70:
URL: https://github.com/apache/flink-connector-aws/pull/70#discussion_r1175722682


##
flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriter.java:
##
@@ -144,6 +150,8 @@
 .setMaxBufferedRequests(maxBufferedRequests)
 .setMaxTimeInBufferMS(maxTimeInBufferMS)
 .setMaxRecordSizeInBytes(maxRecordSizeInBytes)
+.setRateLimitingStrategy(

Review Comment:
   @dannycranmer Do you think we should follow up with regression testing for 
KDF/DDB?
   
   I am skeptical about enforcing it against `AsyncSink`, as the previous value 
is standard for AIMD and could be adjusted by the sink implementer as needed.
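   
   For context, a toy sketch of the AIMD (additive-increase/multiplicative-decrease) 
idea referenced above; this is not the connector's actual `RateLimitingStrategy` API:
   
   ```java
   // Toy AIMD limiter: grow the allowed rate additively while requests
   // succeed, and cut it multiplicatively when the service throttles us.
   class AimdSketch {
       private double rate;                 // current allowed requests/sec
       private final double increase;       // additive step on success
       private final double decreaseFactor; // multiplicative cut on throttling
   
       AimdSketch(double initialRate, double increase, double decreaseFactor) {
           this.rate = initialRate;
           this.increase = increase;
           this.decreaseFactor = decreaseFactor;
       }
   
       void onSuccess()  { rate += increase; }
       void onThrottle() { rate = Math.max(1.0, rate * decreaseFactor); }
       double currentRate() { return rate; }
   }
   ```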



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] mxm merged pull request #574: [FLINK-31827] Discard old format scaling history

2023-04-24 Thread via GitHub


mxm merged PR #574:
URL: https://github.com/apache/flink-kubernetes-operator/pull/574


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] mxm commented on a diff in pull request #574: [FLINK-31827] Discard old format scaling history

2023-04-24 Thread via GitHub


mxm commented on code in PR #574:
URL: https://github.com/apache/flink-kubernetes-operator/pull/574#discussion_r1175710518


##
flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerInfo.java:
##
@@ -95,7 +95,7 @@ public SortedMap 
getMetricHistory() {
 
 try {
 return YAML_MAPPER.readValue(decompress(historyYaml), new 
TypeReference<>() {});
-} catch (JsonProcessingException e) {
+} catch (JacksonException e) {

Review Comment:
   The reason I changed the exception type is that this is the base Jackson 
exception, which catches all Jackson-related errors that we can't do anything 
about. Leaving this as-is for now.
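   
   For reference, a minimal sketch (illustrative input) of why the broader 
catch works: `JsonProcessingException` extends `JacksonException` (the base 
Jackson type since 2.12), so one catch block covers parsing as well as binding 
failures:
   
   ```java
   import com.fasterxml.jackson.core.JacksonException;
   import com.fasterxml.jackson.databind.ObjectMapper;
   
   public class CatchSketch {
       public static void main(String[] args) {
           ObjectMapper mapper = new ObjectMapper();
           try {
               mapper.readValue("{ not valid json", Object.class);
           } catch (JacksonException e) {
               // Any Jackson failure lands here; the unreadable history is discarded.
               System.out.println("discarding old-format history: " + e.getMessage());
           }
       }
   }
   ```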



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-31927) Cassandra source raises an exception on Flink 1.16.0 on a real Cassandra cluster

2023-04-24 Thread Etienne Chauchot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Etienne Chauchot reassigned FLINK-31927:


Assignee: Etienne Chauchot

> Cassandra source raises an exception on Flink 1.16.0 on a real Cassandra 
> cluster
> 
>
> Key: FLINK-31927
> URL: https://issues.apache.org/jira/browse/FLINK-31927
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.16.0, cassandra-3.1.0
>Reporter: Etienne Chauchot
>Assignee: Etienne Chauchot
>Priority: Blocker
>
> CassandraSplitEnumerator#prepareSplits() raises 
> java.lang.NoClassDefFoundError: com/codahale/metrics/Gauge when calling 
> cluster.getMetadata(), leading to an NPE in the 
> CassandraSplitEnumerator#start() async callback.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31927) Cassandra source raises an exception on Flink 1.16.0 on a real Cassandra cluster

2023-04-24 Thread Etienne Chauchot (Jira)
Etienne Chauchot created FLINK-31927:


 Summary: Cassandra source raises an exception on Flink 1.16.0 on a 
real Cassandra cluster
 Key: FLINK-31927
 URL: https://issues.apache.org/jira/browse/FLINK-31927
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Cassandra
Affects Versions: 1.16.0, cassandra-3.1.0
Reporter: Etienne Chauchot


CassandraSplitEnumerator#prepareSplits() raises 
java.lang.NoClassDefFoundError: com/codahale/metrics/Gauge when calling 
cluster.getMetadata(), leading to an NPE in the CassandraSplitEnumerator#start() 
async callback.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] chachae commented on pull request #22446: [FLINK-28060][BP 1.15][Connector/Kafka] Updated Kafka Clients to 3.2.1

2023-04-24 Thread via GitHub


chachae commented on PR #22446:
URL: https://github.com/apache/flink/pull/22446#issuecomment-1520556636

   > @chachae Flink 1.15 is no longer community supported; therefore closing 
the PR.
   
   Okay, but we use Flink 1.15 at a large scale, and it's hard to carry out a 
large-scale version upgrade for Flink jobs that depend heavily on state.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] chachae commented on pull request #22446: [FLINK-28060][BP 1.15][Connector/Kafka] Updated Kafka Clients to 3.2.1

2023-04-24 Thread via GitHub


chachae commented on PR #22446:
URL: https://github.com/apache/flink/pull/22446#issuecomment-1520556021

   > @chachae Flink 1.15 is no longer community supported; therefore closing 
the PR.
   
   Okay, but we use Flink 1.15 at a large scale, and it's hard to carry out a 
large-scale version upgrade for Flink jobs that depend heavily on state.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] chachae commented on pull request #22446: [FLINK-28060][BP 1.15][Connector/Kafka] Updated Kafka Clients to 3.2.1

2023-04-24 Thread via GitHub


chachae commented on PR #22446:
URL: https://github.com/apache/flink/pull/22446#issuecomment-1520554286

   Okay, but we use Flink 1.15 at a large scale, and it's hard to carry out a 
large-scale version upgrade for Flink jobs that depend heavily on state.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] pgaref commented on pull request #22467: FLINK-31888: Introduce interfaces and utils for loading and executing enrichers

2023-04-24 Thread via GitHub


pgaref commented on PR #22467:
URL: https://github.com/apache/flink/pull/22467#issuecomment-1520551610

   @dmvk / @akalash  can you please take a look?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31922) Port over Kinesis Client configurations for retry and backoff

2023-04-24 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-31922:
--
Component/s: Connectors / AWS

> Port over Kinesis Client configurations for retry and backoff
> -
>
> Key: FLINK-31922
> URL: https://issues.apache.org/jira/browse/FLINK-31922
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Hong Liang Teoh
>Assignee: Daren Wong
>Priority: Major
>
> Port over the Kinesis Client configurations for GetRecords, ListShards, 
> DescribeStream



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31922) Port over Kinesis Client configurations for retry and backoff

2023-04-24 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-31922:
--
Fix Version/s: aws-connector-4.2.0

> Port over Kinesis Client configurations for retry and backoff
> -
>
> Key: FLINK-31922
> URL: https://issues.apache.org/jira/browse/FLINK-31922
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Hong Liang Teoh
>Assignee: Daren Wong
>Priority: Major
> Fix For: aws-connector-4.2.0
>
>
> Port over the Kinesis Client configurations for GetRecords, ListShards, 
> DescribeStream



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31922) Port over Kinesis Client configurations for retry and backoff

2023-04-24 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-31922:
-

Assignee: Daren Wong

> Port over Kinesis Client configurations for retry and backoff
> -
>
> Key: FLINK-31922
> URL: https://issues.apache.org/jira/browse/FLINK-31922
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Hong Liang Teoh
>Assignee: Daren Wong
>Priority: Major
>
> Port over the Kinesis Client configurations for GetRecords, ListShards, 
> DescribeStream



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-04-24 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: https://github.com/apache/flink-connector-aws/pull/47#discussion_r1175579217


##
flink-catalog-aws/flink-catalog-aws-glue/src/main/java/org/apache/flink/table/catalog/glue/util/GlueOperator.java:
##
@@ -0,0 +1,1359 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue.util;
+
+import org.apache.flink.connector.aws.config.AWSConfig;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogDatabase;
+import org.apache.flink.table.catalog.CatalogFunction;
+import org.apache.flink.table.catalog.CatalogFunctionImpl;
+import org.apache.flink.table.catalog.CatalogPartition;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.CatalogView;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.glue.GlueCatalogConstants;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.factories.ManagedTableFactory;
+import org.apache.flink.table.resource.ResourceType;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import software.amazon.awssdk.services.glue.GlueClient;
+import software.amazon.awssdk.services.glue.model.AlreadyExistsException;
+import software.amazon.awssdk.services.glue.model.BatchDeleteTableRequest;
+import software.amazon.awssdk.services.glue.model.BatchDeleteTableResponse;
+import software.amazon.awssdk.services.glue.model.Column;
+import software.amazon.awssdk.services.glue.model.CreateDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.CreateDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.CreatePartitionRequest;
+import software.amazon.awssdk.services.glue.model.CreatePartitionResponse;
+import software.amazon.awssdk.services.glue.model.CreateTableRequest;
+import software.amazon.awssdk.services.glue.model.CreateTableResponse;
+import software.amazon.awssdk.services.glue.model.CreateUserDefinedFunctionRequest;
+import software.amazon.awssdk.services.glue.model.CreateUserDefinedFunctionResponse;
+import software.amazon.awssdk.services.glue.model.Database;
+import software.amazon.awssdk.services.glue.model.DatabaseInput;
+import software.amazon.awssdk.services.glue.model.DeleteDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.DeleteDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.DeletePartitionRequest;
+import software.amazon.awssdk.services.glue.model.DeletePartitionResponse;
+import software.amazon.awssdk.services.glue.model.DeleteTableRequest;
+import software.amazon.awssdk.services.glue.model.DeleteTableResponse;
+import software.amazon.awssdk.services.glue.model.DeleteUserDefinedFunctionRequest;
+import software.amazon.awssdk.services.glue.model.DeleteUserDefinedFunctionResponse;
+import software.amazon.awssdk.services.glue.model.EntityNotFoundException;
+import software.amazon.awssdk.services.glue.model.GetDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.GetDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.GetDatabasesRequest;
+import software.amazon.awssdk.services.glue.model.GetDatabasesResponse;
+import software.amazon.awssdk.services.glue.model.GetPartitionRequest;
+import software.amazon.awssdk.services.glue.model.GetPartitionResponse;
+import software.amazon.awssdk.services.glue.model.GetPartitionsRequest;
+import software.amazon.awssdk.se

[jira] [Created] (FLINK-31926) Implement rename Table in GlueCatalog

2023-04-24 Thread Samrat Deb (Jira)
Samrat Deb created FLINK-31926:
--

 Summary: Implement rename Table in GlueCatalog
 Key: FLINK-31926
 URL: https://issues.apache.org/jira/browse/FLINK-31926
 Project: Flink
  Issue Type: Sub-task
Reporter: Samrat Deb


The Glue catalog doesn't support renaming tables; this is currently marked as 
an unsupported operation.

This task intends to implement it later.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] mxm commented on a diff in pull request #574: [FLINK-31827] Discard old format scaling history

2023-04-24 Thread via GitHub


mxm commented on code in PR #574:
URL: https://github.com/apache/flink-kubernetes-operator/pull/574#discussion_r1175535065


##
flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerInfo.java:
##
@@ -95,7 +95,7 @@ public SortedMap 
getMetricHistory() {
 
 try {
 return YAML_MAPPER.readValue(decompress(historyYaml), new 
TypeReference<>() {});
-} catch (JsonProcessingException e) {
+} catch (JacksonException e) {

Review Comment:
   I deliberately did not catch Exception, just to avoid masking other errors, 
but we certainly can.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-25003) RestClientTest#testConnectionTimeout fails on Java 17

2023-04-24 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-25003.

Resolution: Cannot Reproduce

> RestClientTest#testConnectionTimeout fails on Java 17
> -
>
> Key: FLINK-25003
> URL: https://issues.apache.org/jira/browse/FLINK-25003
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>
> The test fails because the exception type has changed: the returned exception 
> is no longer a ConnectException but just a plain SocketException, presumably 
> because the JDK fails earlier since it can't find a route.
> We could change the assertion, or change the test somehow to actually work 
> against a server that doesn't allow the establishment of a connection.
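
An illustrative (hypothetical) sketch of the first option: since 
ConnectException extends SocketException, asserting on the supertype accepts 
both JDK behaviors:

{code:java}
import java.net.ConnectException;
import java.net.SocketException;

public class AssertionSketch {
    public static void main(String[] args) {
        Throwable pre17 = new ConnectException("Connection refused");
        Throwable onJava17 = new SocketException("No route to host");

        // Asserting on the supertype passes in both cases.
        System.out.println(pre17 instanceof SocketException);    // true
        System.out.println(onJava17 instanceof SocketException); // true
    }
}
{code}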



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

