Re: [PR] [FLINK-34956][doc] Fix the config type wrong of Duration [flink]

2024-03-28 Thread via GitHub


flinkbot commented on PR #24583:
URL: https://github.com/apache/flink/pull/24583#issuecomment-2024535345

   
   ## CI report:
   
   * 7c766a4396a249885229b4c5924d1b9e043a54bf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34548][API] Introduce DataStream, Partitioning and ProcessFunction [flink]

2024-03-28 Thread via GitHub


Sxnan commented on code in PR #24422:
URL: https://github.com/apache/flink/pull/24422#discussion_r1540748320


##
flink-datastream-api/src/main/java/org/apache/flink/datastream/api/ExecutionEnvironment.java:
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.datastream.api;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.RuntimeExecutionMode;
+
+/**
+ * This is the context in which a program is executed.
+ *
+ * The environment provides methods to create a DataStream and control the 
job execution.
+ */
+@Experimental
+public interface ExecutionEnvironment {
+/**
+ * Get the execution environment instance.
+ *
+ * @return A {@link ExecutionEnvironment} instance.
+ */
+static ExecutionEnvironment getInstance() throws 
ReflectiveOperationException {

Review Comment:
   This should be `getExecutionEnvironment` according to the FLIP.
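
   A minimal sketch of the rename being suggested (the reflective lookup and the
   implementation class name below are illustrative assumptions; only the method
   name is confirmed by the FLIP):

       @Experimental
       public interface ExecutionEnvironment {
           /**
            * Get the execution environment instance.
            *
            * @return A {@link ExecutionEnvironment} instance.
            */
           static ExecutionEnvironment getExecutionEnvironment()
                   throws ReflectiveOperationException {
               // Hypothetical: resolve the implementation class reflectively,
               // which is why the checked exception is declared.
               return (ExecutionEnvironment)
                       Class.forName("org.apache.flink.datastream.impl.ExecutionEnvironmentImpl")
                               .getDeclaredConstructor()
                               .newInstance();
           }
       }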



##
flink-datastream/src/main/java/org/apache/flink/datastream/impl/ExecutionEnvironmentFactory.java:
##
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.datastream.impl;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.datastream.api.ExecutionEnvironment;
+
+/** Factory class for execution environments. */
+@FunctionalInterface
+public interface ExecutionEnvironmentFactory {
+/**
+ * Creates a StreamExecutionEnvironment from this factory.
+ *
+ * @return A StreamExecutionEnvironment.

Review Comment:
   `StreamExecutionEnvironment` should be `ExecutionEnvironment`
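
   For reference, a corrected sketch of the Javadoc (the method signature is an
   assumption, since the hunk is truncated before it):

       /** Factory class for execution environments. */
       @FunctionalInterface
       public interface ExecutionEnvironmentFactory {
           /**
            * Creates an ExecutionEnvironment from this factory.
            *
            * @return An ExecutionEnvironment.
            */
           ExecutionEnvironment createExecutionEnvironment(Configuration configuration);
       }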



##
flink-datastream/src/main/java/org/apache/flink/datastream/impl/common/TimestampCollector.java:
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.datastream.impl.common;
+
+import org.apache.flink.datastream.api.common.Collector;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+
+/** The base {@link Collector} which takes care of the record's timestamp. */
+public abstract class TimestampCollector<OUT> implements Collector<OUT> {
+protected final StreamRecord<OUT> reuse = new StreamRecord<>(null);
+
+public void setTimestamp(StreamRecord<?> timestampBase) {
+if (timestampBase.hasTimestamp()) {
+setAbsoluteTimestamp(timestampBase.getTimestamp());
+} else {
+eraseTimestamp();
+}
+}
+
+public void setAbsoluteTimestamp(long timestamp) {

Review Comment:
   The name `absoluteTimestamp` is a little bit confusing to me. Maybe just 
name it `setTimestamp`. 
   
   And we may rename the `s
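
   A sketch of how the renamed collector could look (the reviewer's suggestion;
   `setTimestampFrom` is an assumed name for the current StreamRecord-based
   method, which would otherwise clash with the renamed long-based one):

       public abstract class TimestampCollector<OUT> implements Collector<OUT> {
           protected final StreamRecord<OUT> reuse = new StreamRecord<>(null);

           // Copy the timestamp (or its absence) from another record.
           public void setTimestampFrom(StreamRecord<?> timestampBase) {
               if (timestampBase.hasTimestamp()) {
                   setTimestamp(timestampBase.getTimestamp());
               } else {
                   eraseTimestamp();
               }
           }

           // Formerly setAbsoluteTimestamp(long).
           public void setTimestamp(long timestamp) {
               reuse.setTimestamp(timestamp);
           }

           public void eraseTimestamp() {
               reuse.eraseTimestamp();
           }
       }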

[jira] [Commented] (FLINK-34551) Align retry mechanisms of FutureUtils

2024-03-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831657#comment-17831657
 ] 

Matthias Pohl commented on FLINK-34551:
---

The intention of this ticket came from FLINK-34227, where I wanted to add logic 
for retrying forever. I managed to split {{retrySuccessfulOperationWithDelay}} 
in FLINK-34227 in a way that didn't generate too much additional redundant 
code. I created FLINK-34551 as a follow-up anyway because I noticed that 
{{retrySuccessfulOperationWithDelay}} and {{retryOperation}} share some common 
logic and that we could improve the way these methods decide on which executor 
to run the {{operation}} on (scheduledExecutor vs. calling thread).

Your current proposal still has redundant code. We would need to iterate on the 
change a bit more and discuss the contract of these methods in more detail. 
But unfortunately, I will be gone for quite a bit soon, so I would not be able 
to help you. Additionally, it's not a high-priority task right now. I'm 
wondering whether we should unassign the task again. I want to avoid you 
spending time on it and then getting stuck because of missing feedback from my 
side.

I should have considered this yesterday already. Sorry for that.

> Align retry mechanisms of FutureUtils
> -
>
> Key: FLINK-34551
> URL: https://issues.apache.org/jira/browse/FLINK-34551
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Kumar Mallikarjuna
>Priority: Major
>  Labels: pull-request-available
>
> The retry mechanisms of FutureUtils include quite a bit of redundant code 
> which makes it hard to understand and to extend. The logic should be aligned 
> properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34551) Align retry mechanisms of FutureUtils

2024-03-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-34551:
-

Assignee: Matthias Pohl  (was: Kumar Mallikarjuna)

> Align retry mechanisms of FutureUtils
> -
>
> Key: FLINK-34551
> URL: https://issues.apache.org/jira/browse/FLINK-34551
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> The retry mechanisms of FutureUtils include quite a bit of redundant code 
> which makes it hard to understand and to extend. The logic should be aligned 
> properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34933][test] Fixes JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored [flink]

2024-03-28 Thread via GitHub


zentol commented on code in PR #24562:
URL: https://github.com/apache/flink/pull/24562#discussion_r1542463838


##
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterServiceLeadershipRunnerTest.java:
##
@@ -499,11 +499,33 @@ void 
testResultFutureCompletionOfOutdatedLeaderIsIgnored() throws Exception {
 
 leaderElection.notLeader();
 
+assertThat(jobManagerRunner.getResultFuture())
+.as("The runner result should not be completed by the 
leadership revocation.")
+.isNotDone();
+
 resultFuture.complete(
 JobManagerRunnerResult.forSuccess(
 createFailedExecutionGraphInfo(new FlinkException("test exception"))));
 
-assertThatFuture(jobManagerRunner.getResultFuture()).eventuallyFails();
+assertThat(jobManagerRunner.getResultFuture())
+.as("The runner result should be completed if the leadership 
is lost.")
+.isNotDone();

Review Comment:
   the message doesn't match the assertion
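
   For instance, a sketch of one way to align the two:

       assertThat(jobManagerRunner.getResultFuture())
               .as("The runner result should not be completed if the leadership is lost.")
               .isNotDone();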



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34937) Apache Infra GHA policy update

2024-03-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831659#comment-17831659
 ] 

Matthias Pohl commented on FLINK-34937:
---

Let's check whether our CI can benefit from 
https://github.com/assignUser/stash (which is provided by [~assignuser] from 
the Apache Arrow project and was promoted in Apache Infra's roundtable group).

> Apache Infra GHA policy update
> --
>
> Key: FLINK-34937
> URL: https://issues.apache.org/jira/browse/FLINK-34937
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / CI
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Matthias Pohl
>Priority: Major
>
> There is a policy update [announced in the infra 
> ML|https://www.mail-archive.com/jdo-dev@db.apache.org/msg13638.html] which 
> asked Apache projects to limit the number of runners per job. Additionally, 
> the [GHA policy|https://infra.apache.org/github-actions-policy.html] is 
> referenced which I wasn't aware of when working on the action workflow.
> This issue is about applying the policy to the Flink GHA workflows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34548][API] Introduce DataStream, Partitioning and ProcessFunction [flink]

2024-03-28 Thread via GitHub


reswqa commented on code in PR #24422:
URL: https://github.com/apache/flink/pull/24422#discussion_r1542471491


##
flink-datastream-api/src/main/java/org/apache/flink/datastream/api/ExecutionEnvironment.java:
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.datastream.api;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.RuntimeExecutionMode;
+
+/**
+ * This is the context in which a program is executed.
+ *
+ * The environment provides methods to create a DataStream and control the 
job execution.
+ */
+@Experimental
+public interface ExecutionEnvironment {
+/**
+ * Get the execution environment instance.
+ *
+ * @return A {@link ExecutionEnvironment} instance.
+ */
+static ExecutionEnvironment getInstance() throws 
ReflectiveOperationException {

Review Comment:
   Per the previous review comments, we renamed it, and I will update the FLIP.
   
   see: https://github.com/apache/flink/pull/24422#discussion_r1533208773
   
   



##
flink-datastream-api/src/main/java/org/apache/flink/datastream/api/ExecutionEnvironment.java:
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.datastream.api;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.RuntimeExecutionMode;
+
+/**
+ * This is the context in which a program is executed.
+ *
+ * The environment provides methods to create a DataStream and control the 
job execution.
+ */
+@Experimental
+public interface ExecutionEnvironment {
+/**
+ * Get the execution environment instance.
+ *
+ * @return A {@link ExecutionEnvironment} instance.
+ */
+static ExecutionEnvironment getInstance() throws 
ReflectiveOperationException {

Review Comment:
   Per the previous review comments, we have renamed it, and I will update the FLIP.
   
   see: https://github.com/apache/flink/pull/24422#discussion_r1533208773
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34548][API] Introduce DataStream, Partitioning and ProcessFunction [flink]

2024-03-28 Thread via GitHub


reswqa commented on code in PR #24422:
URL: https://github.com/apache/flink/pull/24422#discussion_r1542471491


##
flink-datastream-api/src/main/java/org/apache/flink/datastream/api/ExecutionEnvironment.java:
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.datastream.api;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.api.common.RuntimeExecutionMode;
+
+/**
+ * This is the context in which a program is executed.
+ *
+ * The environment provides methods to create a DataStream and control the 
job execution.
+ */
+@Experimental
+public interface ExecutionEnvironment {
+/**
+ * Get the execution environment instance.
+ *
+ * @return A {@link ExecutionEnvironment} instance.
+ */
+static ExecutionEnvironment getInstance() throws 
ReflectiveOperationException {

Review Comment:
   Per the previous review comments, we have renamed it, and I will update the FLIP later.
   
   see: https://github.com/apache/flink/pull/24422#discussion_r1533208773
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34548][API] Introduce DataStream, Partitioning and ProcessFunction [flink]

2024-03-28 Thread via GitHub


reswqa commented on code in PR #24422:
URL: https://github.com/apache/flink/pull/24422#discussion_r1542480801


##
flink-core/src/main/java/org/apache/flink/api/connector/dsv2/DataStreamV2SourceUtils.java:
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.connector.dsv2;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.util.Preconditions;
+
+import java.util.Collection;
+
+/** Utils to create the DataStream V2 supported {@link Source}. */
+@Experimental
+public final class DataStreamV2SourceUtils {
+/**
+ * Wrap a FLIP-27 based source to a DataStream V2 supported source.
+ *
+ * @param source The FLIP-27 based source to wrap.
+ * @return The DataStream V2 supported source.
+ */
+public static <T> Source<T> wrapSource(

Review Comment:
   It is only a user-facing API that is not called by other functions. But yes, 
I think I should introduce a unit test for this.
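
   A sketch of what such a test could look like (assuming the wrapper accepts
   any FLIP-27 `Source<T, ?, ?>`; `NumberSequenceSource` is just a convenient
   FLIP-27 source shipped with flink-core):

       import org.apache.flink.api.connector.dsv2.DataStreamV2SourceUtils;
       import org.apache.flink.api.connector.dsv2.Source;
       import org.apache.flink.api.connector.source.lib.NumberSequenceSource;

       import org.junit.jupiter.api.Test;

       import static org.assertj.core.api.Assertions.assertThat;

       class DataStreamV2SourceUtilsTest {

           @Test
           void wrapSourceReturnsDataStreamV2Source() {
               // Wrap a simple FLIP-27 source and check the result is usable.
               Source<Long> wrapped =
                       DataStreamV2SourceUtils.wrapSource(new NumberSequenceSource(1L, 10L));
               assertThat(wrapped).isNotNull();
           }
       }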



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-34933][test] Fixes JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored [flink]

2024-03-28 Thread via GitHub


XComp opened a new pull request, #24584:
URL: https://github.com/apache/flink/pull/24584

   1.19 backport PR for parent PR #24562


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-34933][test] Fixes JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored [flink]

2024-03-28 Thread via GitHub


XComp opened a new pull request, #24585:
URL: https://github.com/apache/flink/pull/24585

   1.18 backport PR for parent PR #24562


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.19][FLINK-34933][test] Fixes JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored [flink]

2024-03-28 Thread via GitHub


flinkbot commented on PR #24584:
URL: https://github.com/apache/flink/pull/24584#issuecomment-2024638695

   
   ## CI report:
   
   * 7d9a3a31f248613df9145dbf735dce2bddc55294 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.18][FLINK-34933][test] Fixes JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored [flink]

2024-03-28 Thread via GitHub


flinkbot commented on PR #24585:
URL: https://github.com/apache/flink/pull/24585#issuecomment-2024638900

   
   ## CI report:
   
   * 04d4618a9eb38470e4edf69a0054e13e83aab700 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34953) Add github ci for flink-web to auto commit build files

2024-03-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831665#comment-17831665
 ] 

Matthias Pohl commented on FLINK-34953:
---

I guess we could do it. The [GitHub Actions 
Policy|https://infra.apache.org/github-actions-policy.html] excludes 
non-released artifacts like website from the restriction:
{quote}Automated services such as GitHub Actions (and Jenkins, BuildBot, etc.) 
MAY work on website content and other non-released data such as documentation 
and convenience binaries. Automated services MUST NOT push data to a repository 
or branch that is subject to official release as a software package by the 
project, unless the project secures specific prior authorization of the 
workflow from Infrastructure.
{quote}
Not sure whether they updated that one recently. Or do you have another source 
which is stricter, [~martijnvisser] ?

> Add github ci for flink-web to auto commit build files
> --
>
> Key: FLINK-34953
> URL: https://issues.apache.org/jira/browse/FLINK-34953
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: website
>
> Currently, https://github.com/apache/flink-web commits build files from local 
> builds. So I want to use GitHub CI to build the docs and commit them.
>  
> Changes:
>  * Add a website build check for PRs
>  * Automatically build and commit build files after a PR is merged to `asf-site`
>  * Optional: this CI can be triggered manually



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34957][autoscaler] Event handler records the exception stack trace when exception message is null [flink-kubernetes-operator]

2024-03-28 Thread via GitHub


gyfora commented on code in PR #808:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/808#discussion_r1542518890


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/event/AutoScalerEventHandler.java:
##
@@ -63,6 +65,19 @@ void handleEvent(
 @Nullable String messageKey,
 @Nullable Duration interval);
 
+/**
+ * Handle the exception; the exception event is of warning type and is not deduplicated by default.
+ */
+default void handleException(Context context, String reason, Throwable e) {
+var message = e.getMessage();
+if (message == null) {
+var stream = new ByteArrayOutputStream();
+e.printStackTrace(new PrintStream(stream));
+message = stream.toString();

Review Comment:
   Can be replaced with `ExceptionUtils.getStackTrace(e)` from Apache Commons 
or `ExceptionUtils.stringifyException(e)` from Flink utils
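
   E.g., a minimal sketch of the Apache Commons variant (the Flink-utils
   equivalent would be `org.apache.flink.util.ExceptionUtils.stringifyException(e)`):

       import org.apache.commons.lang3.exception.ExceptionUtils;

       final class StackTraces {
           static String of(Throwable e) {
               // Returns the full stack trace rendered as a String.
               return ExceptionUtils.getStackTrace(e);
           }
       }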



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34957][autoscaler] Event handler records the exception stack trace when exception message is null [flink-kubernetes-operator]

2024-03-28 Thread via GitHub


gyfora commented on code in PR #808:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/808#discussion_r1542520539


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/event/AutoScalerEventHandler.java:
##
@@ -63,6 +65,19 @@ void handleEvent(
 @Nullable String messageKey,
 @Nullable Duration interval);
 
+/**
+ * Handle the exception; the exception event is of warning type and is not deduplicated by default.
+ */
+default void handleException(Context context, String reason, Throwable e) {
+var message = e.getMessage();
+if (message == null) {
+var stream = new ByteArrayOutputStream();
+e.printStackTrace(new PrintStream(stream));
+message = stream.toString();

Review Comment:
   we should also set some reasonable (relatively small) limit on the string 
size here, otherwise we might get an error when inserting the Kube event / DB 
column
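
   A sketch of such a cap inside the handler (2048 mirrors the default of
   `kubernetes.operator.exception.stacktrace.max.length` mentioned in the reply
   below; using it here is an assumption):

       private static final int MAX_EVENT_MESSAGE_LENGTH = 2048;

       private static String capMessage(String message) {
           // Keep the Kube event / DB column within its size limit.
           return message.length() <= MAX_EVENT_MESSAGE_LENGTH
                   ? message
                   : message.substring(0, MAX_EVENT_MESSAGE_LENGTH);
       }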



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Change old com.ververica dependency to flink [flink-cdc]

2024-03-28 Thread via GitHub


xleoken commented on PR #3110:
URL: https://github.com/apache/flink-cdc/pull/3110#issuecomment-2024715251

   Thanks @PatrickRen.
   
   > Could you open another PR to backport the patch to release-3.0 branch? 
Thanks
   
   We cannot change the pom.xml files in the release-3.0 branch directly.
   
   
![image](https://github.com/apache/flink-cdc/assets/95013770/23ca4c37-0e65-482f-afe7-a7ad243f3059)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34957][autoscaler] Event handler records the exception stack trace when exception message is null [flink-kubernetes-operator]

2024-03-28 Thread via GitHub


1996fanrui commented on code in PR #808:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/808#discussion_r1542554907


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/event/AutoScalerEventHandler.java:
##
@@ -63,6 +65,19 @@ void handleEvent(
 @Nullable String messageKey,
 @Nullable Duration interval);
 
+/**
+ * Handle the exception; the exception event is of warning type and is not deduplicated by default.
+ */
+default void handleException(Context context, String reason, Throwable e) {
+var message = e.getMessage();
+if (message == null) {
+var stream = new ByteArrayOutputStream();
+e.printStackTrace(new PrintStream(stream));
+message = stream.toString();

Review Comment:
   Thanks @gyfora for the quick review and good suggestions!
   
   > Can be replaced with `ExceptionUtils.getStackTrace(e)` from Apache Commons 
or `ExceptionUtils.stringifyException(e)` from Flink utils
   
   Done~
   
   > we should also set some reasonable (relatively small) limit on the string 
size here, otherwise we might get an error when inserting the Kube event / DB 
column
   
   I saw that the default value of 
`kubernetes.operator.exception.stacktrace.max.length` is 2048, so I used it 
here.
   
   I didn't extract a dedicated autoscaler option for it; we can add one if 
it's needed in the future.
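
   Putting both suggestions together, the default method could end up roughly
   like this (a sketch, not the merged code; `MAX_STACK_TRACE_LENGTH` is an
   assumed name for the 2048-character cap):

       default void handleException(Context context, String reason, Throwable e) {
           var message = e.getMessage();
           if (message == null) {
               // Fall back to the full stack trace when there is no message.
               message = ExceptionUtils.stringifyException(e);
           }
           if (message.length() > MAX_STACK_TRACE_LENGTH) {
               message = message.substring(0, MAX_STACK_TRACE_LENGTH);
           }
           // Assuming the existing handleEvent(context, type, reason, message,
           // messageKey, interval) signature shown in the diff above.
           handleEvent(context, Type.Warning, reason, message, null, null);
       }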



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34959) Update old flink-cdc-connectors artifactId

2024-03-28 Thread xleoken (Jira)
xleoken created FLINK-34959:
---

 Summary: Update old flink-cdc-connectors artifactId
 Key: FLINK-34959
 URL: https://issues.apache.org/jira/browse/FLINK-34959
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: xleoken






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34953) Add github ci for flink-web to auto commit build files

2024-03-28 Thread Zhongqiang Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831682#comment-17831682
 ] 

Zhongqiang Gong commented on FLINK-34953:
-

I have a stupid question: can we use the Rsync Deployments Action to sync the 
website content instead of maintaining it in the repo, just like the Flink 
documentation website?

> Add github ci for flink-web to auto commit build files
> --
>
> Key: FLINK-34953
> URL: https://issues.apache.org/jira/browse/FLINK-34953
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: website
>
> Currently, https://github.com/apache/flink-web commits build files from local 
> builds. So I want to use GitHub CI to build the docs and commit them.
>  
> Changes:
>  * Add a website build check for PRs
>  * Automatically build and commit build files after a PR is merged to `asf-site`
>  * Optional: this CI can be triggered manually



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34959) Update old flink-cdc-connectors artifactId

2024-03-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34959:
---
Labels: pull-request-available  (was: )

> Update old flink-cdc-connectors artifactId
> --
>
> Key: FLINK-34959
> URL: https://issues.apache.org/jira/browse/FLINK-34959
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: xleoken
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Document address update [flink-cdc]

2024-03-28 Thread via GitHub


bright-zy closed pull request #3154: Document address update
URL: https://github.com/apache/flink-cdc/pull/3154


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34953) Add github ci for flink-web to auto commit build files

2024-03-28 Thread Zhongqiang Gong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831682#comment-17831682
 ] 

Zhongqiang Gong edited comment on FLINK-34953 at 3/28/24 9:30 AM:
--

I have a stupid question: can we build the website in CI and use the Rsync 
Deployments Action to sync the website content instead of maintaining it in 
the repo, just like the Flink documentation website?


was (Author: JIRAUSER301076):
I have a stupid question:Can we use Rsync Deployments Action to sync website 
content instead of maintain in repo, just like flink document website? 

> Add github ci for flink-web to auto commit build files
> --
>
> Key: FLINK-34953
> URL: https://issues.apache.org/jira/browse/FLINK-34953
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: website
>
> Currently, https://github.com/apache/flink-web commits build files from local 
> builds. So I want to use GitHub CI to build the docs and commit them.
>  
> Changes:
>  * Add a website build check for PRs
>  * Automatically build and commit build files after a PR is merged to `asf-site`
>  * Optional: this CI can be triggered manually



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix] Change old com.ververica dependency to flink [flink-cdc]

2024-03-28 Thread via GitHub


PatrickRen commented on PR #3110:
URL: https://github.com/apache/flink-cdc/pull/3110#issuecomment-2024766688

   @xleoken Ah my mistake. We don't need to backport this one to release-3.0 as 
3.1 will be the first version after the donation. Thanks anyway!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34953) Add github ci for flink-web to auto commit build files

2024-03-28 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831693#comment-17831693
 ] 

Martijn Visser commented on FLINK-34953:


[~mapohl] I guess our project-website isn't part of the official release as a 
software package, so that could open up opportunities

> Add github ci for flink-web to auto commit build files
> --
>
> Key: FLINK-34953
> URL: https://issues.apache.org/jira/browse/FLINK-34953
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: website
>
> Currently, https://github.com/apache/flink-web commits build files from local 
> builds. So I want to use GitHub CI to build the docs and commit them.
>  
> Changes:
>  * Add a website build check for PRs
>  * Automatically build and commit build files after a PR is merged to `asf-site`
>  * Optional: this CI can be triggered manually



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34954][core] Kryo Input bug fix [flink]

2024-03-28 Thread via GitHub


qinghui-xu opened a new pull request, #24586:
URL: https://github.com/apache/flink/pull/24586

   Handle the edge case of zero-length serialized bytes correctly.
   
   
   
   ## What is the purpose of the change
   
   Bug fix in the Kryo (NoFetching)Input implementation to properly handle 
zero-length serialized bytes, e.g. the serialization of a protobuf message 
with only default values.
   
   
   ## Brief change log
   
 - Fix while loop for `NoFetchingInput#read(byte[], int, int)` and 
`NoFetchingInput#require(int)`
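
   A sketch of the guard being described (an assumption about the shape of the
   fix, not the actual patch; the real changes live in the two methods above):

       import java.io.EOFException;
       import java.io.IOException;
       import java.io.InputStream;

       final class ZeroLengthReads {
           // Reading zero bytes is legal and must not demand data from the
           // underlying stream; otherwise an empty payload (e.g. a protobuf
           // message holding only default values) fails with "No more bytes left".
           static int readFully(InputStream in, byte[] bytes, int offset, int count)
                   throws IOException {
               if (count == 0) {
                   return 0;
               }
               int readTotal = 0;
               while (readTotal < count) {
                   int read = in.read(bytes, offset + readTotal, count - readTotal);
                   if (read == -1) {
                       throw new EOFException("No more bytes left.");
                   }
                   readTotal += read;
               }
               return readTotal;
           }
       }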
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: yes
 - The runtime per-record code paths (performance sensitive): yes
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34954) Kryo input implementation NoFetchingInput fails to handle zero length bytes

2024-03-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34954:
---
Labels: pull-request-available  (was: )

> Kryo input implementation NoFetchingInput fails to handle zero length bytes
> ---
>
> Key: FLINK-34954
> URL: https://issues.apache.org/jira/browse/FLINK-34954
> Project: Flink
>  Issue Type: Bug
>Reporter: Qinghui Xu
>Priority: Major
>  Labels: pull-request-available
>
> If the serialized bytes are empty, `NoFetchingInput` runs into an error when 
> Kryo tries to deserialize them.
> Example: a protobuf 3 object that contains only default values is serialized 
> as a 0-length byte array, and the deserialization later fails. Illustration:
> {noformat}
> import com.esotericsoftware.kryo.Kryo
> import com.esotericsoftware.kryo.io.{ByteBufferInput, ByteBufferOutput, Input, Output}
> import com.google.protobuf.{DescriptorProtos, Message}
> import com.twitter.chill.protobuf.ProtobufSerializer
> import org.apache.flink.api.java.typeutils.runtime.NoFetchingInput
> import java.io.ByteArrayInputStream
>
> object ProtoSerializationTest {
>   def main(args: Array[String]) = {
>     val chillProtoSerializer = new ProtobufSerializer
>     val protomessage = DescriptorProtos.DescriptorProto.getDefaultInstance
>     val output: Output = new ByteBufferOutput(1000)
>     chillProtoSerializer.write(null, output, protomessage)
>     val serialized: Array[Byte] = output.toBytes
>     println(s"Serialized : $serialized")
>     // Deserializing the zero-length payload fails inside NoFetchingInput.
>     val input: Input = new NoFetchingInput(new ByteArrayInputStream(serialized))
>     val deserialized = chillProtoSerializer.read(null, input,
>       classOf[DescriptorProtos.DescriptorProto].asInstanceOf[Class[Message]])
>     println(deserialized)
>   }
> }
> {noformat}
>  
> Error
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: Could not create class 
> com.criteo.glup.BillableClickProto$BillableClick
>     at 
> com.twitter.chill.protobuf.ProtobufSerializer.read(ProtobufSerializer.java:76)
>     at 
> com.criteo.streaming.common.bootstrap.ProtoSerialization$.main(ProtoSerialization.scala:22)
>     at ProtoSerialization.main(ProtoSerialization.scala)
> Caused by: com.esotericsoftware.kryo.KryoException: java.io.EOFException: No 
> more bytes left.
>     at 
> org.apache.flink.api.java.typeutils.runtime.NoFetchingInput.readBytes(NoFetchingInput.java:128)
>     at com.esotericsoftware.kryo.io.Input.readBytes(Input.java:332)
>     at 
> com.twitter.chill.protobuf.ProtobufSerializer.read(ProtobufSerializer.java:73)
>     ... 2 more
> Caused by: java.io.EOFException: No more bytes left.
>     ... 5 more{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [build] fix inconsistent Kafka shading among cdc connectors [flink-cdc]

2024-03-28 Thread via GitHub


PatrickRen merged PR #2988:
URL: https://github.com/apache/flink-cdc/pull/2988


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34954][core] Kryo Input bug fix [flink]

2024-03-28 Thread via GitHub


flinkbot commented on PR #24586:
URL: https://github.com/apache/flink/pull/24586#issuecomment-2024784567

   
   ## CI report:
   
   * a37e562b54ed9ad4b9290f2f999542ea9104c65f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [cdc-cli]Add support for both new and old Flink config files in Flink… [flink-cdc]

2024-03-28 Thread via GitHub


PatrickRen commented on PR #3194:
URL: https://github.com/apache/flink-cdc/pull/3194#issuecomment-2024785477

   @skymilong Welcome to the community! Feel free to ask any questions


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [pipeline-connector][paimon] add paimon pipeline data sink connector. [flink-cdc]

2024-03-28 Thread via GitHub


PatrickRen commented on PR #2916:
URL: https://github.com/apache/flink-cdc/pull/2916#issuecomment-2024788681

   > Paimon's latest version is 0.7, we should update the Paimon version from 
0.6 to 0.7
   
   @lvyanquan Could you take a look at this one? I'd prefer to catch up with 
the latest version as well. Also, could you rebase onto the latest master? 
Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34953) Add github ci for flink-web to auto commit build files

2024-03-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831665#comment-17831665
 ] 

Matthias Pohl edited comment on FLINK-34953 at 3/28/24 9:52 AM:


I guess we could do it. The [GitHub Actions 
Policy|https://infra.apache.org/github-actions-policy.html] excludes 
non-released artifacts like websites from the restriction:
{quote}Automated services such as GitHub Actions (and Jenkins, BuildBot, etc.) 
MAY work on website content and other non-released data such as documentation 
and convenience binaries. Automated services MUST NOT push data to a repository 
or branch that is subject to official release as a software package by the 
project, unless the project secures specific prior authorization of the 
workflow from Infrastructure.
{quote}
Not sure whether they updated that one recently. Or do you have another source 
which is stricter, [~martijnvisser] ?


was (Author: mapohl):
I guess we could do it. The [GitHub Actions 
Policy|https://infra.apache.org/github-actions-policy.html] excludes 
non-released artifacts like website from the restriction:
{quote}Automated services such as GitHub Actions (and Jenkins, BuildBot, etc.) 
MAY work on website content and other non-released data such as documentation 
and convenience binaries. Automated services MUST NOT push data to a repository 
or branch that is subject to official release as a software package by the 
project, unless the project secures specific prior authorization of the 
workflow from Infrastructure.
{quote}
Not sure whether they updated that one recently. Or do you have another source 
which is stricter, [~martijnvisser] ?

> Add github ci for flink-web to auto commit build files
> --
>
> Key: FLINK-34953
> URL: https://issues.apache.org/jira/browse/FLINK-34953
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: website
>
> Currently, https://github.com/apache/flink-web commits build files from local 
> builds. So I want to use GitHub CI to build the docs and commit them.
>  
> Changes:
>  * Add a website build check for PRs
>  * Automatically build and commit build files after a PR is merged to `asf-site`
>  * Optional: this CI can be triggered manually



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (FLINK-34953) Add github ci for flink-web to auto commit build files

2024-03-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reopened FLINK-34953:
---

> Add github ci for flink-web to auto commit build files
> --
>
> Key: FLINK-34953
> URL: https://issues.apache.org/jira/browse/FLINK-34953
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: website
>
> Currently, https://github.com/apache/flink-web commits build files from local 
> builds. So I want to use GitHub CI to build the docs and commit them.
>  
> Changes:
>  * Add a website build check for PRs
>  * Automatically build and commit build files after a PR is merged to `asf-site`
>  * Optional: this CI can be triggered manually



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33376][coordination] Extend ZooKeeper Curator configurations [flink]

2024-03-28 Thread via GitHub


XComp merged PR #24563:
URL: https://github.com/apache/flink/pull/24563


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33376) Extend Curator config option for Zookeeper configuration

2024-03-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-33376:
--
Release Note: Adds support for the following curator parameters: 
high-availability.zookeeper.client.authorization (corresponding curator 
parameter: authorization), high-availability.zookeeper.client.max-close-wait 
(corresponding curator parameter: maxCloseWaitMs), 
high-availability.zookeeper.client.simulated-session-expiration-percent 
(corresponding curator parameter: simulatedSessionExpirationPercent).  (was: 
Adds support for the following curator parameters: 
high-availability.zookeeper.client.authorization (curator parameter: 
authorization), high-availability.zookeeper.client.max-close-wait (curator 
parameter: maxCloseWaitMs), 
high-availability.zookeeper.client.simulated-session-expiration-percent 
(curator parameter: simulatedSessionExpirationPercent))

> Extend Curator config option for Zookeeper configuration
> 
>
> Key: FLINK-33376
> URL: https://issues.apache.org/jira/browse/FLINK-33376
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Oleksandr Nitavskyi
>Assignee: Oleksandr Nitavskyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> In certain cases ZooKeeper requires additional authentication information, 
> for example a list of valid [names for 
> ensemble|https://zookeeper.apache.org/doc/r3.8.0/zookeeperAdmin.html#:~:text=for%20secure%20authentication.-,zookeeper.ensembleAuthName,-%3A%20(Java%20system%20property]
>  in order to prevent accidentally connecting to a wrong ensemble.
> Curator allows adding an additional AuthInfo object for such configuration. 
> Thus it would be useful to add one more Map property which would allow 
> passing AuthInfo objects during Curator client creation.
> *Acceptance Criteria:* Flink users can configure the auth info list for the 
> Curator framework client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33376) Extend Curator config option for Zookeeper configuration

2024-03-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-33376.
---
Fix Version/s: 1.20.0
 Release Note: Adds support for the following curator parameters: 
high-availability.zookeeper.client.authorization (curator parameter: 
authorization), high-availability.zookeeper.client.max-close-wait (curator 
parameter: maxCloseWaitMs), 
high-availability.zookeeper.client.simulated-session-expiration-percent 
(curator parameter: simulatedSessionExpirationPercent)
   Resolution: Fixed

master: 
[83f82ab0c865a4fa9e119c96e11e0fb3df4a5ecd|https://github.com/apache/flink/commit/83f82ab0c865a4fa9e119c96e11e0fb3df4a5ecd]

> Extend Curator config option for Zookeeper configuration
> 
>
> Key: FLINK-33376
> URL: https://issues.apache.org/jira/browse/FLINK-33376
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Oleksandr Nitavskyi
>Assignee: Oleksandr Nitavskyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> In certain cases ZooKeeper requires additional authentication information, 
> for example a list of valid [names for 
> ensemble|https://zookeeper.apache.org/doc/r3.8.0/zookeeperAdmin.html#:~:text=for%20secure%20authentication.-,zookeeper.ensembleAuthName,-%3A%20(Java%20system%20property]
>  in order to prevent accidentally connecting to a wrong ensemble.
> Curator allows adding an additional AuthInfo object for such configuration. 
> Thus it would be useful to add one more Map property which would allow 
> passing AuthInfo objects during Curator client creation.
> *Acceptance Criteria:* Flink users can configure the auth info list for the 
> Curator framework client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34957][autoscaler] Event handler records the exception stack trace when exception message is null [flink-kubernetes-operator]

2024-03-28 Thread via GitHub


1996fanrui merged PR #808:
URL: https://github.com/apache/flink-kubernetes-operator/pull/808


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34957) JDBC Autoscaler event handler throws Column 'message' cannot be null

2024-03-28 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-34957.
-
Resolution: Fixed

> JDBC Autoscaler event handler throws Column 'message' cannot be null 
> -
>
> Key: FLINK-34957
> URL: https://issues.apache.org/jira/browse/FLINK-34957
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.9.0
>
> Attachments: image-2024-03-28-11-57-35-234.png
>
>
> The JDBC Autoscaler event handler doesn't allow the event message to be null, 
> but the message may be null when we handle an exception.
> We use the exception message as the event message, but the exception message 
> may be null, e.g. for a TimeoutException (as shown in the picture below).
> Also, recording an event without any message is meaningless; it doesn't have 
> any benefit for troubleshooting.
> Solution: 
> * Use the exception message as the event message when the exception message 
> isn't null
> * Use the whole exception as the event message if the exception message is null.
>  !image-2024-03-28-11-57-35-234.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34957) JDBC Autoscaler event handler throws Column 'message' cannot be null

2024-03-28 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831712#comment-17831712
 ] 

Rui Fan commented on FLINK-34957:
-

Merged master(1.9.0) via: 9119d73a904bc5a5eb675c2edffe8f8de8ed8ef2

> JDBC Autoscaler event handler throws Column 'message' cannot be null 
> -
>
> Key: FLINK-34957
> URL: https://issues.apache.org/jira/browse/FLINK-34957
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.9.0
>
> Attachments: image-2024-03-28-11-57-35-234.png
>
>
> The JDBC Autoscaler event handler doesn't allow the event message to be null, 
> but the message may be null when we handle an exception.
> We use the exception message as the event message, but the exception message 
> may be null, e.g. for a TimeoutException (as shown in the picture below).
> Also, recording an event without any message is meaningless; it doesn't have 
> any benefit for troubleshooting.
> Solution: 
> * Use the exception message as the event message when the exception message 
> isn't null
> * Use the whole exception as the event message if the exception message is null.
>  !image-2024-03-28-11-57-35-234.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34551) Align retry mechanisms of FutureUtils

2024-03-28 Thread Kumar Mallikarjuna (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831728#comment-17831728
 ] 

Kumar Mallikarjuna commented on FLINK-34551:


I see, makes sense. Thank you.

> Align retry mechanisms of FutureUtils
> -
>
> Key: FLINK-34551
> URL: https://issues.apache.org/jira/browse/FLINK-34551
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Core
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> The retry mechanisms of FutureUtils include quite a bit of redundant code 
> which makes it hard to understand and to extend. The logic should be aligned 
> properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34239][core] Add copy() method for SerializerConfig [flink]

2024-03-28 Thread via GitHub


X-czh commented on code in PR #24544:
URL: https://github.com/apache/flink/pull/24544#discussion_r1542824003


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/DataTypeFactoryImpl.java:
##
@@ -132,49 +131,12 @@ public LogicalType createLogicalType(UnresolvedIdentifier 
identifier) {
 private static Supplier createSerializerConfig(
 ClassLoader classLoader, ReadableConfig config, SerializerConfig 
serializerConfig) {
 return () -> {
-final SerializerConfig newSerializerConfig = new 
SerializerConfigImpl();
-
+SerializerConfig newSerializerConfig = new SerializerConfigImpl();

Review Comment:
   We can avoid unnecessary object creation by creating a new instance only 
when `serializerConfig` is null.
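
   E.g., a sketch of that (assuming the `copy()` method this PR introduces):

       final SerializerConfig newSerializerConfig =
               serializerConfig == null
                       ? new SerializerConfigImpl()
                       : serializerConfig.copy();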



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34960) NullPointerException while applying parallelism overrides for session jobs

2024-03-28 Thread Kunal Rohitas (Jira)
Kunal Rohitas created FLINK-34960:
-

 Summary: NullPointerException while applying parallelism overrides 
for session jobs
 Key: FLINK-34960
 URL: https://issues.apache.org/jira/browse/FLINK-34960
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: 1.8.0
Reporter: Kunal Rohitas


While using the autoscaler for session jobs, the operator throws a 
NullPointerException while trying to apply parallelism overrides, even though 
it is able to generate a parallelism suggestion report for scaling. The 
versions used here are flink-1.18.1 and flink-kubernetes-operator-1.8.0. 


{code:java}
2024-03-26 08:41:21,617 o.a.f.a.JobAutoScalerImpl 
[ERROR][default/clientsession-job] Error applying overrides. 
java.lang.NullPointerException at 
org.apache.flink.kubernetes.operator.autoscaler.KubernetesScalingRealizer.realizeParallelismOverrides(KubernetesScalingRealizer.java:52)
 at 
org.apache.flink.kubernetes.operator.autoscaler.KubernetesScalingRealizer.realizeParallelismOverrides(KubernetesScalingRealizer.java:40)
 at 
org.apache.flink.autoscaler.JobAutoScalerImpl.applyParallelismOverrides(JobAutoScalerImpl.java:161)
 at 
org.apache.flink.autoscaler.JobAutoScalerImpl.scale(JobAutoScalerImpl.java:111) 
at 
org.apache.flink.kubernetes.operator.reconciler.deployment.AbstractFlinkResourceReconciler.applyAutoscaler(AbstractFlinkResourceReconciler.java:192)
 at 
org.apache.flink.kubernetes.operator.reconciler.deployment.AbstractFlinkResourceReconciler.reconcile(AbstractFlinkResourceReconciler.java:139)
 at 
org.apache.flink.kubernetes.operator.controller.FlinkSessionJobController.reconcile(FlinkSessionJobController.java:116)
 at 
org.apache.flink.kubernetes.operator.controller.FlinkSessionJobController.reconcile(FlinkSessionJobController.java:53)
 at 
io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:152)
 at 
io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:110)
 at 
org.apache.flink.kubernetes.operator.metrics.OperatorJosdkMetrics.timeControllerExecution(OperatorJosdkMetrics.java:80)
 at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:109)
 at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:140)
 at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
 at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
 at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
 at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ReconcilerExecutor.run(EventProcessor.java:417)
 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
at java.base/java.lang.Thread.run(Unknown Source){code}
 
{code:java}
2024-03-26 08:41:21,617 o.a.f.a.JobAutoScalerImpl 
[ERROR][default/clientsession-job] Error while scaling job 
java.lang.NullPointerException at 
org.apache.flink.kubernetes.operator.autoscaler.KubernetesScalingRealizer.realizeParallelismOverrides(KubernetesScalingRealizer.java:52)
 at 
org.apache.flink.kubernetes.operator.autoscaler.KubernetesScalingRealizer.realizeParallelismOverrides(KubernetesScalingRealizer.java:40)
 at 
org.apache.flink.autoscaler.JobAutoScalerImpl.applyParallelismOverrides(JobAutoScalerImpl.java:161)
 at 
org.apache.flink.autoscaler.JobAutoScalerImpl.scale(JobAutoScalerImpl.java:111) 
at 
org.apache.flink.kubernetes.operator.reconciler.deployment.AbstractFlinkResourceReconciler.applyAutoscaler(AbstractFlinkResourceReconciler.java:192)
 at 
org.apache.flink.kubernetes.operator.reconciler.deployment.AbstractFlinkResourceReconciler.reconcile(AbstractFlinkResourceReconciler.java:139)
 at 
org.apache.flink.kubernetes.operator.controller.FlinkSessionJobController.reconcile(FlinkSessionJobController.java:116)
 at 
org.apache.flink.kubernetes.operator.controller.FlinkSessionJobController.reconcile(FlinkSessionJobController.java:53)
 at 
io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:152)
 at 
io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:110)
 at 
org.apache.flink.kubernetes.operator.metrics.OperatorJosdkMetrics.timeControllerExecution(OperatorJosdkMetrics.java:80)
 at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:109)
 at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:140)
 at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
 at 
io.javaoperatorsdk.operator.processing.event.Recon
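
A defensive pattern that would avoid this class of failure, shown purely as 
an illustration since the root cause isn't confirmed in this report; the 
method name mirrors the stack trace, but the signature and guard are 
hypothetical:

{code:java}
// Hypothetical guard, not the actual operator fix: skip realization when
// no overrides are present instead of dereferencing null.
public void realizeParallelismOverrides(java.util.Map<String, String> overrides) {
    if (overrides == null || overrides.isEmpty()) {
        return; // nothing to apply for this session job
    }
    // ... write the overrides into the resource spec ...
}
{code}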

Re: [PR] [FLINK-32711][planner] Fix the type mismatch when proctime() used as … [flink]

2024-03-28 Thread via GitHub


vahmed-hamdy commented on PR #23107:
URL: https://github.com/apache/flink/pull/23107#issuecomment-2025033887

   Thanks for the contribution! I have tested it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34922][rest] Support concurrent global failure [flink]

2024-03-28 Thread via GitHub


zentol merged PR #24573:
URL: https://github.com/apache/flink/pull/24573


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831767#comment-17831767
 ] 

Chesnay Schepler commented on FLINK-34922:
--

master:
dc957bfdc3aa6a8e3bce603cfc68c5c553c72220
f4c945cb9ca882ae485c2e58c74825938f154119

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM, ensuring they happened at most once per failure. 
> Since this has changed, we should now adjust accordingly and support multiple 
> global failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-34922:
-
Fix Version/s: 1.18.2
   1.20.0
   1.19.1

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM, ensuring they happened at most once per failure. 
> Since this has changed, we should now adjust accordingly and support multiple 
> global failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34239][core] Add copy() method for SerializerConfig [flink]

2024-03-28 Thread via GitHub


X-czh commented on code in PR #24544:
URL: https://github.com/apache/flink/pull/24544#discussion_r1542914373


##
flink-core/src/main/java/org/apache/flink/api/common/serialization/SerializerConfigImpl.java:
##
@@ -574,4 +575,31 @@ private void registerTypeWithTypeInfoFactory(
 public ExecutionConfig getExecutionConfig() {
 return executionConfig;
 }
+
+@Override
+public SerializerConfigImpl copy() {
+final SerializerConfigImpl newSerializerConfig = new 
SerializerConfigImpl();
+newSerializerConfig.configure(configuration, 
this.getClass().getClassLoader());
+
+getRegisteredTypesWithKryoSerializers()
+.forEach(
+(c, s) ->
+
newSerializerConfig.registerTypeWithKryoSerializer(
+c, s.getSerializer()));
+getRegisteredTypesWithKryoSerializerClasses()
+.forEach(newSerializerConfig::registerTypeWithKryoSerializer);
+getDefaultKryoSerializers()
+.forEach(
+(c, s) ->
+
newSerializerConfig.addDefaultKryoSerializer(c, s.getSerializer()));
+Optional.ofNullable(isForceKryoAvroEnabled().getAsBoolean())

Review Comment:
   If I understand it correctly, it is not needed as we've already configured 
it in `newSerializerConfig#configure`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34239][core] Add copy() method for SerializerConfig [flink]

2024-03-28 Thread via GitHub


X-czh commented on PR #24544:
URL: https://github.com/apache/flink/pull/24544#issuecomment-2025118034

   @kumar-mallikarjuna Thanks for the contribution. LGTM except for two minor 
comments, PTAL.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.18][FLINK-34922][rest] Support concurrent global failure [flink]

2024-03-28 Thread via GitHub


flinkbot commented on PR #24587:
URL: https://github.com/apache/flink/pull/24587#issuecomment-2025120982

   
   ## CI report:
   
   * 98fb74b2d101b6ea373b139edc26893be14f2c39 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.19][FLINK-34922][rest] Support concurrent global failure [flink]

2024-03-28 Thread via GitHub


flinkbot commented on PR #24588:
URL: https://github.com/apache/flink/pull/24588#issuecomment-2025121174

   
   ## CI report:
   
   * 2736d3c01669ba65150b82a7d758b5e50c8f3cc4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34933][test] Fixes JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored [flink]

2024-03-28 Thread via GitHub


XComp merged PR #24562:
URL: https://github.com/apache/flink/pull/24562


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.19][FLINK-34933][test] Fixes JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored [flink]

2024-03-28 Thread via GitHub


XComp merged PR #24584:
URL: https://github.com/apache/flink/pull/24584


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-1.18][FLINK-34933][test] Fixes JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored [flink]

2024-03-28 Thread via GitHub


XComp merged PR #24585:
URL: https://github.com/apache/flink/pull/24585


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34933) JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored isn't implemented properly

2024-03-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-34933.
---
Fix Version/s: 1.18.2
   1.20.0
   1.19.1
   Resolution: Fixed

master: 
[1668a07276929416469392a35a77ba7699aac30b|https://github.com/apache/flink/commit/1668a07276929416469392a35a77ba7699aac30b]
1.19: 
[c11656a2406f07e2ae7cd6f80c46afb14385ee0e|https://github.com/apache/flink/commit/c11656a2406f07e2ae7cd6f80c46afb14385ee0e]
1.18: 
[94d1363c27e26fc8313721e138c7b4de744ca69e|https://github.com/apache/flink/commit/94d1363c27e26fc8313721e138c7b4de744ca69e]

> JobMasterServiceLeadershipRunnerTest#testResultFutureCompletionOfOutdatedLeaderIsIgnored
>  isn't implemented properly
> ---
>
> Key: FLINK-34933
> URL: https://issues.apache.org/jira/browse/FLINK-34933
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.17.2, 1.19.0, 1.18.1, 1.20.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> {{testResultFutureCompletionOfOutdatedLeaderIsIgnored}} doesn't test the 
> desired behavior: the {{TestingJobMasterService#closeAsync()}} callback 
> throws an {{UnsupportedOperationException}} by default, which prevents the 
> test from properly finalizing the leadership revocation.
> The test still passes because it implicitly checks for this error. 
> Instead, we should verify that the runner's resultFuture doesn't complete 
> until the runner is closed.
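
A sketch of the intended assertion shape (names are illustrative, not the 
actual test code):

{code:java}
// The result future must stay incomplete after the leadership is revoked...
leaderElection.notLeader();
assertThat(jobManagerRunner.getResultFuture()).isNotDone();

// ...and may only complete once the runner itself is closed.
jobManagerRunner.close();
assertThat(jobManagerRunner.getResultFuture()).isDone();
{code}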



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34324][ci] Replaces AWS-based S3 e2e tests with Minio-backed version [flink]

2024-03-28 Thread via GitHub


XComp commented on code in PR #24465:
URL: https://github.com/apache/flink/pull/24465#discussion_r1542963271


##
flink-end-to-end-tests/test-scripts/test_file_sink.sh:
##
@@ -79,30 +42,69 @@ 
TEST_PROGRAM_JAR="${END_TO_END_DIR}/flink-file-sink-test/target/FileSinkProgram.
 #   sorted content of part files
 ###
 function get_complete_result {
-  if [ "${OUT_TYPE}" == "s3" ]; then
-s3_get_by_full_path_and_filename_prefix "$OUTPUT_PATH" "$S3_PREFIX" 
"part-" true
-  fi
   find "${OUTPUT_PATH}" -type f \( -iname "part-*" \) -exec cat {} + | sort -g
 }
 
 ###
 # Get total number of lines in part files.
 #
 # Globals:
-#   S3_PREFIX
+#   OUTPUT_PATH
 # Arguments:
 #   None
 # Returns:
 #   line number in part files
 ###
 function get_total_number_of_valid_lines {
-  if [ "${OUT_TYPE}" == "local" ]; then
-get_complete_result | wc -l | tr -d '[:space:]'
-  elif [ "${OUT_TYPE}" == "s3" ]; then
-s3_get_number_of_lines_by_prefix "${S3_PREFIX}" "part-"
-  fi
+  get_complete_result | wc -l | tr -d '[:space:]'
 }
 
+if [ "${OUT_TYPE}" == "local" ]; then
+  echo "[INFO] Test run in local environment: No S3 environment is not loaded."
+elif [ "${OUT_TYPE}" == "s3" ]; then
+  source "$(dirname "$0")"/common_s3_minio.sh
+  s3_setup hadoop
+
+  # overwrites JOB_OUTPUT_PATH to point to S3
+  S3_DATA_PREFIX="${RANDOM_PREFIX}"
+  S3_CHECKPOINT_PREFIX="${RANDOM_PREFIX}-chk"
+  JOB_OUTPUT_PATH="s3://$IT_CASE_S3_BUCKET/${S3_DATA_PREFIX}"
+  set_config_key "state.checkpoints.dir" 
"s3://$IT_CASE_S3_BUCKET/${S3_CHECKPOINT_PREFIX}"
+
+  # overwrites implementation for local runs
+  function get_complete_result {
+# copies the data from S3 to the local OUTPUT_PATH
+s3_get_by_full_path_and_filename_prefix "$OUTPUT_PATH" 
"$FILE_SINK_TEST_TEMP_SUBFOLDER" "part-" true
+
+# and prints the sorted output
+find "${OUTPUT_PATH}" -type f \( -iname "part-*" \) -exec cat {} + | sort 
-g
+  }
+
+  # overwrites implementation for local runs
+  function get_total_number_of_valid_lines {
+s3_get_number_of_lines_by_prefix "${FILE_SINK_TEST_TEMP_SUBFOLDER}" "part-"
+  }
+
+  # make sure we delete the file at the end
+  function out_cleanup {
+s3_delete_by_full_path_prefix "${S3_DATA_PREFIX}"
+s3_delete_by_full_path_prefix "${S3_CHECKPOINT_PREFIX}"
+  }
+
+  on_exit out_cleanup
+else
+  echo "[ERROR] Unknown out type: ${OUT_TYPE}"
+  exit 1
+fi
+
+# randomly set up openSSL with dynamically/statically linked libraries

Review Comment:
   yikes, good catch. That must have been removed accidentally :thinking: 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31664] Implement ARRAY_INTERSECT function [flink]

2024-03-28 Thread via GitHub


MartijnVisser commented on PR #24526:
URL: https://github.com/apache/flink/pull/24526#issuecomment-2025213724

   @liuyongvs I disagree: I think we're looking at what the definition of 
INTERSECT is in general, not from a functional or implementation perspective, 
but rather whether there's a definition of what INTERSECT should do. I don't 
think it's a good idea to have an INTERSECT in Flink that doesn't return 
duplicates and then an ARRAY_INTERSECT that does return duplicates; that's not 
consistent. If both INTERSECT and ARRAY_INTERSECT don't return duplicates, that 
is consistent behavior. 
   
   So IMHO:
   
   INTERSECT and ARRAY_INTERSECT --> Removes duplicates
   
   If there's a need to have duplicates included:
   
   INTERSECT ALL and ARRAY_INTERSECT_ALL --> Keep duplicates, have consistent 
behavior with INTERSECT ALL
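   
   For illustration, the two semantics side by side in plain Java (an 
independent sketch, not the Flink implementation):
   
   ```java
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.List;
   import java.util.Map;
   import java.util.stream.Collectors;
   
   public class IntersectSemantics {
       public static void main(String[] args) {
           List<Integer> left = Arrays.asList(1, 1, 2, 3);
           List<Integer> right = Arrays.asList(1, 1, 2);
   
           // INTERSECT / ARRAY_INTERSECT (distinct semantics) -> [1, 2]
           List<Integer> distinct =
                   left.stream().distinct().filter(right::contains).collect(Collectors.toList());
   
           // INTERSECT ALL / ARRAY_INTERSECT_ALL (bag semantics: each common
           // element kept with its minimum multiplicity) -> [1, 1, 2]
           Map<Integer, Long> counts =
                   right.stream().collect(Collectors.groupingBy(x -> x, Collectors.counting()));
           List<Integer> all = new ArrayList<>();
           for (Integer x : left) {
               long remaining = counts.getOrDefault(x, 0L);
               if (remaining > 0) {
                   all.add(x);
                   counts.put(x, remaining - 1);
               }
           }
           System.out.println(distinct + " " + all); // [1, 2] [1, 1, 2]
       }
   }
   ```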


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34911) ChangelogRecoveryRescaleITCase failed fatally with 127 exit code

2024-03-28 Thread Rui Xia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831795#comment-17831795
 ] 

Rui Xia commented on FLINK-34911:
-

Hello Skraba,
I see the following system error in the log file:


{code:java}
Inconsistency detected by ld.so: dl-tls.c: 493: _dl_allocate_tls_init: 
Assertion `listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed!
{code}

I think this problem is not related to Flink. 


> ChangelogRecoveryRescaleITCase failed fatally with 127 exit code
> 
>
> Key: FLINK-34911
> URL: https://issues.apache.org/jira/browse/FLINK-34911
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58455&view=logs&j=a657ddbf-d986-5381-9649-342d9c92e7fb&t=dc085d4a-05c8-580e-06ab-21f5624dab16&l=9029]
>  
> {code:java}
> Mar 21 01:50:42 01:50:42.553 [ERROR] Command was /bin/sh -c cd 
> '/__w/1/s/flink-tests' && '/usr/lib/jvm/jdk-21.0.1+12/bin/java' 
> '-XX:+UseG1GC' '-Xms256m' '-XX:+IgnoreUnrecognizedVMOptions' 
> '--add-opens=java.base/java.util=ALL-UNNAMED' 
> '--add-opens=java.base/java.io=ALL-UNNAMED' '-Xmx1536m' '-jar' 
> '/__w/1/s/flink-tests/target/surefire/surefirebooter-20240321010847189_810.jar'
>  '/__w/1/s/flink-tests/target/surefire' '2024-03-21T01-08-44_720-jvmRun3' 
> 'surefire-20240321010847189_808tmp' 'surefire_207-20240321010847189_809tmp'
> Mar 21 01:50:42 01:50:42.553 [ERROR] Error occurred in starting fork, check 
> output in log
> Mar 21 01:50:42 01:50:42.553 [ERROR] Process Exit Code: 127
> Mar 21 01:50:42 01:50:42.553 [ERROR] Crashed tests:
> Mar 21 01:50:42 01:50:42.553 [ERROR] 
> org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:418)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:297)
> Mar 21 01:50:42 01:50:42.553 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:250)
> Mar 21 01:50:42 01:50:42.554 [ERROR]  at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1240)
> {code}
> From the watchdog, only {{ChangelogRecoveryRescaleITCase}} didn't complete, 
> specifically parameterized with an {{EmbeddedRocksDBStateBackend}} with 
> incremental checkpointing enabled.
> The base class ({{{}ChangelogRecoveryITCaseBase{}}}) starts a 
> {{MiniClusterWithClientResource}}
> {code:java}
> ~/Downloads/CI/logs-cron_jdk21-test_cron_jdk21_tests-1710982836$ cat 
> watchdog| grep "Tests run\|Running org.apache.flink" | grep -o 
> "org.apache.flink[^ ]*$" | sort | uniq -c | sort -n | head
>       1 org.apache.flink.test.checkpointing.ChangelogRecoveryRescaleITCase
>       2 org.apache.flink.api.connector.source.lib.NumberSequenceSourceITCase
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33676] Implement RestoreTests for WindowAggregate [flink]

2024-03-28 Thread via GitHub


dawidwys commented on code in PR #23886:
URL: https://github.com/apache/flink/pull/23886#discussion_r1543024707


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowAggregateTestPrograms.java:
##
@@ -0,0 +1,528 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.api.config.OptimizerConfigOptions;
+import org.apache.flink.table.planner.utils.AggregatePhaseStrategy;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+import java.util.function.Function;
+
+/** {@link TableTestProgram} definitions for testing {@link 
StreamExecWindowAggregate}. */
+public class WindowAggregateTestPrograms {
+
+static final Row[] BEFORE_DATA = {

Review Comment:
   Ok, so why can't this field be `private`?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33676] Implement RestoreTests for WindowAggregate [flink]

2024-03-28 Thread via GitHub


dawidwys commented on code in PR #23886:
URL: https://github.com/apache/flink/pull/23886#discussion_r1543024707


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowAggregateTestPrograms.java:
##
@@ -0,0 +1,528 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.api.config.OptimizerConfigOptions;
+import org.apache.flink.table.planner.utils.AggregatePhaseStrategy;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+import java.util.function.Function;
+
+/** {@link TableTestProgram} definitions for testing {@link 
StreamExecWindowAggregate}. */
+public class WindowAggregateTestPrograms {
+
+static final Row[] BEFORE_DATA = {

Review Comment:
   Ok, so why can't `BEFORE_DATA` be `private`?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33676] Implement RestoreTests for WindowAggregate [flink]

2024-03-28 Thread via GitHub


dawidwys commented on code in PR #23886:
URL: https://github.com/apache/flink/pull/23886#discussion_r1543023465


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowAggregateJsonPlanTest.java:
##
@@ -1,528 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.api.config.OptimizerConfigOptions;
-import 
org.apache.flink.table.planner.plan.utils.JavaUserDefinedAggFunctions.ConcatDistinctAggFunction;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window aggregate. */
-class WindowAggregateJsonPlanTest extends TableTestBase {
-
-private StreamTableTestUtil util;
-private TableEnvironment tEnv;
-
-@BeforeEach
-void setup() {
-util = streamTestUtil(TableConfig.getDefault());
-tEnv = util.getTableEnv();
-
-String insertOnlyTableDdl =
-"CREATE TABLE MyTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(insertOnlyTableDdl);
-
-String changelogTableDdl =
-"CREATE TABLE MyCDCTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values',\n"
-+ " 'changelog-mode' = 'I,UA,UB,D')\n";
-tEnv.executeSql(changelogTableDdl);
-}
-
-@Test
-void testEventTimeTumbleWindow() {
-tEnv.createFunction("concat_distinct_agg", 
ConcatDistinctAggFunction.class);
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " b BIGINT,\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " cnt BIGINT,\n"
-+ " sum_a INT,\n"
-+ " distinct_cnt BIGINT,\n"
-+ " concat_distinct STRING\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  b,\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  COUNT(*),\n"
-+ "  SUM(a),\n"
-+ "  COUNT(DISTINCT c),\n"
-+ "  concat_distinct_agg(c)\n"
-+ "FROM TABLE(\n"
-+ "   TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), 
INTERVAL '5' SECOND))\n"
-+ "GROUP BY b, window_start, window_end");
-}
-
-@Test
-void testEventTimeTumbleWindowWithCDCSource() {
-tEnv.createFunction("concat_distinct_agg", 
ConcatDistinctAggFunction.class);
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " b BIGINT,\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " cnt BIGINT,\n"
-+ " sum_a INT,\n"
-  

Re: [PR] [FLINK-33676] Implement RestoreTests for WindowAggregate [flink]

2024-03-28 Thread via GitHub


dawidwys commented on code in PR #23886:
URL: https://github.com/apache/flink/pull/23886#discussion_r1543029050


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowAggregateTestPrograms.java:
##
@@ -0,0 +1,528 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.api.config.OptimizerConfigOptions;
+import org.apache.flink.table.planner.utils.AggregatePhaseStrategy;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+import java.util.function.Function;
+
+/** {@link TableTestProgram} definitions for testing {@link 
StreamExecWindowAggregate}. */
+public class WindowAggregateTestPrograms {
+
+static final Row[] BEFORE_DATA = {

Review Comment:
   Have you actually checked the modifiers? The majority of fields are still in 
the `default` scope. I don't see a reason why they could not be private.
   
   There are still `public` fields that I believe could be `private`, e.g. 
`TUMBLE_EVENT_TIME_AFTER_ROWS`. Or is it used elsewhere?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33676] Implement RestoreTests for WindowAggregate [flink]

2024-03-28 Thread via GitHub


dawidwys commented on code in PR #23886:
URL: https://github.com/apache/flink/pull/23886#discussion_r1543024707


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowAggregateTestPrograms.java:
##
@@ -0,0 +1,528 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.api.config.OptimizerConfigOptions;
+import org.apache.flink.table.planner.utils.AggregatePhaseStrategy;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+import java.util.function.Function;
+
+/** {@link TableTestProgram} definitions for testing {@link 
StreamExecWindowAggregate}. */
+public class WindowAggregateTestPrograms {
+
+static final Row[] BEFORE_DATA = {

Review Comment:
   Ok, so why can't `BEFORE_DATA` be `private`?



##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowAggregateTestPrograms.java:
##
@@ -0,0 +1,528 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.api.config.OptimizerConfigOptions;
+import org.apache.flink.table.planner.utils.AggregatePhaseStrategy;
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+import java.util.function.Function;
+
+/** {@link TableTestProgram} definitions for testing {@link 
StreamExecWindowAggregate}. */
+public class WindowAggregateTestPrograms {
+
+static final Row[] BEFORE_DATA = {

Review Comment:
   Have you actually checked the modifiers? The majority of fields are still in 
the `default` scope. I don't see a reason why they could not be private.
   
   There are still `public` fields that I believe could be `private`, e.g. 
`TUMBLE_EVENT_TIME_AFTER_ROWS`. Or is it used elsewhere?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34517][table]fix environment configs ignored when calling procedure operation [flink]

2024-03-28 Thread via GitHub


JustinLeesin commented on PR #24397:
URL: https://github.com/apache/flink/pull/24397#issuecomment-2025255577

   > @JustinLeesin Could you please cherry-pick it to the release-1.19 
branch? And if CI passes, please let me know.
   
   Sorry for the late reply; I will work on it soon.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34950) Disable spotless on Java 21 for connector-shared-utils

2024-03-28 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34950.
-
Fix Version/s: connector-parent-1.2.0
   Resolution: Fixed

> Disable spotless on Java 21 for connector-shared-utils
> --
>
> Key: FLINK-34950
> URL: https://issues.apache.org/jira/browse/FLINK-34950
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Affects Versions: connector-parent-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: connector-parent-1.2.0
>
>
> After https://github.com/apache/flink-connector-shared-utils/pull/19,
> spotless stopped being skipped for Java 17+ in the parent pom;
> however, we still need to skip it for Java 21+.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34950) Disable spotless on Java 21 for connector-shared-utils

2024-03-28 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34950.
---

> Disable spotless on Java 21 for connector-shared-utils
> --
>
> Key: FLINK-34950
> URL: https://issues.apache.org/jira/browse/FLINK-34950
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent
>Affects Versions: connector-parent-1.1.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: connector-parent-1.2.0
>
>
> After https://github.com/apache/flink-connector-shared-utils/pull/19,
> spotless stopped being skipped for Java 17+ in the parent pom;
> however, we still need to skip it for Java 21+.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33805] Implement restore tests for OverAggregate node [flink]

2024-03-28 Thread via GitHub


dawidwys closed pull request #24565: [FLINK-33805] Implement restore tests for 
OverAggregate node
URL: https://github.com/apache/flink/pull/24565


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33805) Implement restore tests for OverAggregate node

2024-03-28 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33805.

Fix Version/s: 1.20.0
   Resolution: Fixed

Implemented in 
b3334d1527aab6c196752b63c3139ff5529598cc..bf60c8813598d3119375cec057930240642699d4

> Implement restore tests for OverAggregate node
> --
>
> Key: FLINK-33805
> URL: https://issues.apache.org/jira/browse/FLINK-33805
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]

2024-03-28 Thread via GitHub


snuyanzin commented on code in PR #24280:
URL: https://github.com/apache/flink/pull/24280#discussion_r1543058071


##
flink-end-to-end-tests/flink-sql-client-test/src/main/java/org/apache/flink/table/toolbox/TestScanTableSourceWithWatermarkPushDown.java:
##
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.toolbox;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.connector.source.SourceFunctionProvider;
+import 
org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown;
+import org.apache.flink.table.data.RowData;
+
+/**
+ * A source used to test {@link SupportsWatermarkPushDown}.
+ *
+ * For simplicity, the deprecated source function method is used to create 
the source.
+ */
+@SuppressWarnings("deprecation")

Review Comment:
   ```suggestion
   ```
   Looks like we don't need it here



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]

2024-03-28 Thread via GitHub


snuyanzin commented on code in PR #24280:
URL: https://github.com/apache/flink/pull/24280#discussion_r1543059276


##
flink-end-to-end-tests/flink-sql-client-test/src/main/java/org/apache/flink/table/toolbox/TestScanTableSourceWithWatermarkPushDownFactory.java:
##
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.toolbox;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.factories.DynamicTableSourceFactory;
+
+import java.util.Collections;
+import java.util.Set;
+
+/** A factory to create {@link TestScanTableSourceWithWatermarkPushDown}. */
+public class TestScanTableSourceWithWatermarkPushDownFactory implements 
DynamicTableSourceFactory {
+
+public static final String IDENTIFIER = 
"test-scan-table-source-with-watermark-push-down";
+
+@Override
+public DynamicTableSource createDynamicTableSource(Context context) {
+

Review Comment:
   ```suggestion
   ```
   nit: remove empty line



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]

2024-03-28 Thread via GitHub


snuyanzin commented on code in PR #24280:
URL: https://github.com/apache/flink/pull/24280#discussion_r1543065577


##
flink-end-to-end-tests/flink-sql-client-test/src/main/java/org/apache/flink/table/toolbox/TestScanTableSourceWithWatermarkPushDown.java:
##
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.toolbox;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.connector.source.SourceFunctionProvider;
+import 
org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown;
+import org.apache.flink.table.data.RowData;
+
+/**
+ * A source used to test {@link SupportsWatermarkPushDown}.
+ *
+ * For simplicity, the deprecated source function method is used to create 
the source.
+ */
+@SuppressWarnings("deprecation")

Review Comment:
   ```suggestion
*/
   ```
   It seems we don't need the `deprecated` annotation here



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]

2024-03-28 Thread via GitHub


snuyanzin commented on code in PR #24280:
URL: https://github.com/apache/flink/pull/24280#discussion_r1543058071


##
flink-end-to-end-tests/flink-sql-client-test/src/main/java/org/apache/flink/table/toolbox/TestScanTableSourceWithWatermarkPushDown.java:
##
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.toolbox;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.connector.source.SourceFunctionProvider;
+import 
org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown;
+import org.apache.flink.table.data.RowData;
+
+/**
+ * A source used to test {@link SupportsWatermarkPushDown}.
+ *
+ * For simplicity, the deprecated source function method is used to create 
the source.
+ */
+@SuppressWarnings("deprecation")

Review Comment:
   ```suggestion
   ```
   Looks like we don't need it here



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]

2024-03-28 Thread via GitHub


snuyanzin commented on PR #24280:
URL: https://github.com/apache/flink/pull/24280#issuecomment-2025314263

   @xuyangzhong thanks for the update
   it looks good from my side
   I left a couple of minor comments
   
   Could you also please create backports for other branches?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34937) Apache Infra GHA policy update

2024-03-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831844#comment-17831844
 ] 

Matthias Pohl commented on FLINK-34937:
---

Looks like Flink ranks 19th in terms of runner minutes used over the past 7 
days:

[Flink-specific 
report|https://infra-reports.apache.org/#ghactions&project=flink&hours=168] 
(needs ASF committer rights)

[Global report|https://infra-reports.apache.org/#ghactions] (needs ASF 
membership)

> Apache Infra GHA policy update
> --
>
> Key: FLINK-34937
> URL: https://issues.apache.org/jira/browse/FLINK-34937
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / CI
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Matthias Pohl
>Priority: Major
>
> There is a policy update [announced in the infra 
> ML|https://www.mail-archive.com/jdo-dev@db.apache.org/msg13638.html] which 
> asked Apache projects to limit the number of runners per job. Additionally, 
> the [GHA policy|https://infra.apache.org/github-actions-policy.html] is 
> referenced which I wasn't aware of when working on the action workflow.
> This issue is about applying the policy to the Flink GHA workflows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34961) GitHub Actions runner statistics can be monitored per workflow name

2024-03-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34961:
--
Summary: GitHub Actions runner statistics can be monitored per workflow name 
 (was: GitHub Actions statistics can be monitored per workflow name)

> GitHub Actions runner statistics can be monitored per workflow name
> --
>
> Key: FLINK-34961
> URL: https://issues.apache.org/jira/browse/FLINK-34961
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>
> Apache Infra allows the monitoring of runner usage per workflow (see [report 
> for 
> Flink|https://infra-reports.apache.org/#ghactions&project=flink&hours=168&limit=10];
>   only accessible with Apache committer rights). They accumulate the data by 
> workflow name. The Flink space has multiple repositories that use the generic 
> workflow name {{CI}}. That makes the differentiation in the report harder.
> This Jira issue is about identifying all Flink-related projects with a CI 
> workflow (Kubernetes operator and the JDBC connector were identified, for 
> instance) and adding a more distinct name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34961) GitHub Actions statistics can be monitored per workflow name

2024-03-28 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-34961:
-

 Summary: GitHub Actions statistics can be monitored per workflow 
name
 Key: FLINK-34961
 URL: https://issues.apache.org/jira/browse/FLINK-34961
 Project: Flink
  Issue Type: Improvement
  Components: Build System / CI
Reporter: Matthias Pohl


Apache Infra allows the monitoring of runner usage per workflow (see [report 
for 
Flink|https://infra-reports.apache.org/#ghactions&project=flink&hours=168&limit=10];
  only accessible with Apache committer rights). They accumulate the data by 
workflow name. The Flink space has multiple repositories that use the generic 
workflow name {{CI}}. That makes the differentiation in the report harder.

This Jira issue is about identifying all Flink-related projects with a CI 
workflow (Kubernetes operator and the JDBC connector were identified, for 
instance) and adding a more distinct name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34961) GitHub Actions runner statistics can be monitored per workflow name

2024-03-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34961:
--
Labels: starter  (was: )

> GitHub Actions runner statistics can be monitored per workflow name
> --
>
> Key: FLINK-34961
> URL: https://issues.apache.org/jira/browse/FLINK-34961
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / CI
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: starter
>
> Apache Infra allows the monitoring of runner usage per workflow (see [report 
> for 
> Flink|https://infra-reports.apache.org/#ghactions&project=flink&hours=168&limit=10];
>   only accessible with Apache committer rights). They accumulate the data by 
> workflow name. The Flink space has multiple repositories that use the generic 
> workflow name {{CI}}. That makes the differentiation in the report harder.
> This Jira issue is about identifying all Flink-related projects with a CI 
> workflow (Kubernetes operator and the JDBC connector were identified, for 
> instance) and adding a more distinct name.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Bump express from 4.18.2 to 4.19.2 in /flink-runtime-web/web-dashboard [flink]

2024-03-28 Thread via GitHub


dependabot[bot] opened a new pull request, #24589:
URL: https://github.com/apache/flink/pull/24589

   Bumps [express](https://github.com/expressjs/express) from 4.18.2 to 4.19.2.
   
   Release notes
   Sourced from [express's releases](https://github.com/expressjs/express/releases).
   
   4.19.2
   What's Changed
   
   - Improved fix for open redirect allow list bypass 
     (https://github.com/expressjs/express/commit/0b746953c4bd8e377123527db11f9cd866e39f94)
   
   Full Changelog: https://github.com/expressjs/express/compare/4.19.1...4.19.2
   
   4.19.1
   What's Changed
   
   - Fix ci after location patch by @wesleytodd in expressjs/express#5552
   - fixed un-edited version in history.md for 4.19.0 by @wesleytodd in expressjs/express#5556
   
   Full Changelog: https://github.com/expressjs/express/compare/4.19.0...4.19.1
   
   4.19.0
   What's Changed
   
   - fix typo in release date by @UlisesGascon in expressjs/express#5527
   - docs: nominating @wesleytodd to be project captian by @wesleytodd in expressjs/express#5511
   - docs: loosen TC activity rules by @wesleytodd in expressjs/express#5510
   - Add note on how to update docs for new release by @crandmck in expressjs/express#5541
   - Prevent open redirect allow list bypass due to encodeurl 
     (https://github.com/expressjs/express/pull/5551/commits/660ccf5fa33dd0baab069e5c8ddd9ffe7d8bbff1)
   - Release 4.19.0 by @wesleytodd in expressjs/express#5551
   
   New Contributors
   
   - @crandmck made their first contribution in expressjs/express#5541
   
   Full Changelog: https://github.com/expressjs/express/compare/4.18.3...4.19.0
   
   4.18.3
   Main Changes
   
   - Fix routing requests without method
   - deps: body-parser@1.20.2
     - Fix strict json error message on Node.js 19+
     - deps: content-type@~1.0.5
     - deps: raw-body@2.5.2
   
   Other Changes
   
   - Use https: protocol instead of deprecated git: protocol by @vcsjones in expressjs/express#5032
   - build: Node.js@16.18 and Node.js@18.12 by @abenhamdine in expressjs/express#5034
   - ci: update actions/checkout to v3 by @armujahid in expressjs/express#5027
   - test: remove unused function arguments in params by @raksbisht in expressjs/express#5124
   - Remove unused originalIndex from acceptParams by @raksbisht in expressjs/express#5119
   - Fixed typos by @raksbisht in expressjs/express#5117
   - examples: remove unused params by @raksbisht in expressjs/express#5113
   - fix: parameter str is not described in JSDoc by @raksbisht in expressjs/express#5130
   - fix: typos in History.md by @raksbisht in expressjs/express#5131
   - build : add Node.js@19.7 by @abenhamdine in expressjs/express#5028
   - test: remove unused function arguments in params by @raksbisht in expressjs/express#5137
   
   ... (truncated)
   
   Changelog
   Sourced from [express's changelog](https://github.com/expressjs/express/blob/master/History.md).
   
   4.19.2 / 2024-03-25
   
   - Improved fix for open redirect allow list bypass
   
   4.19.1 / 2024-03-20
   
   - Allow passing non-strings to res.location with new encoding handli

Re: [PR] Bump express from 4.18.2 to 4.19.2 in /flink-runtime-web/web-dashboard [flink]

2024-03-28 Thread via GitHub


flinkbot commented on PR #24589:
URL: https://github.com/apache/flink/pull/24589#issuecomment-2025455793

   
   ## CI report:
   
   * c914de50878ed91a0f9997d148509e1fda770e66 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis closed FLINK-34922.
--
Fix Version/s: (was: 1.18.2)
   (was: 1.20.0)
   (was: 1.19.1)
   Resolution: Won't Fix

Closing in favor of https://issues.apache.org/jira/browse/FLINK-34922

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


[ https://issues.apache.org/jira/browse/FLINK-34922 ]


Panagiotis Garefalakis deleted comment on FLINK-34922:


was (Author: pgaref):
Closing in favor of https://issues.apache.org/jira/browse/FLINK-34922

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-34922:
---
Fix Version/s: 1.20.0

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0
>
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-34922:
---
Fix Version/s: 1.19.1

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-34922:
---
Fix Version/s: 1.18.2

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2
>
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reopened FLINK-34922:


> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis closed FLINK-34922.
--
Resolution: Fixed

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33121] Failed precondition in JobExceptionsHandler due to concurrent global failures [flink]

2024-03-28 Thread via GitHub


pgaref closed pull request #23440: [FLINK-33121] Failed precondition in 
JobExceptionsHandler due to concurrent global failures
URL: https://github.com/apache/flink/pull/23440


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis closed FLINK-33121.
--
Release Note: Closing in favor of 
https://issues.apache.org/jira/browse/FLINK-34922
  Resolution: Won't Fix

> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> We make the assumption that Global Failures (with a null Task name) may only be 
> RootExceptions and that Local/Task exceptions may be part of a concurrent 
> exceptions List (see {{JobExceptionsHandler#createRootExceptionInfo}}).
> However, when the Adaptive scheduler is in a Restarting phase due to an 
> existing failure (that is now the new Root), we can still, on rare occasions, 
> capture new Global failures, violating this condition (an assertion is 
> thrown as part of {{assertLocalExceptionInfo}}), and we see something like:
> {code:java}
> The taskName must not be null for a non-global failure.  {code}
> We want to ignore Global failures while being in a Restarting phase on the 
> Adaptive scheduler until we properly support multiple Global failures in the 
> Exception History as part of https://issues.apache.org/jira/browse/FLINK-34922
> Note: the DefaultScheduler does not suffer from this issue, as it treats failures 
> directly as HistoryEntries (no conversion step)
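
A minimal illustrative sketch of the invariant described above (class names are
hypothetical; the real JobExceptionsHandler uses its own REST types):

{code:java}
// Sketch only, not the actual JobExceptionsHandler code: an entry without a
// task name is by definition a global failure and may only be the root of
// the exception history, never one of the concurrent (local) exceptions.
final class ExceptionEntrySketch {
    final String taskName; // null marks a global failure
    ExceptionEntrySketch(String taskName) { this.taskName = taskName; }
}

final class PreconditionSketch {
    static void assertLocalExceptionInfo(ExceptionEntrySketch entry) {
        if (entry.taskName == null) {
            throw new IllegalStateException(
                    "The taskName must not be null for a non-global failure.");
        }
    }
}
{code}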



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34324][ci] Replaces AWS-based S3 e2e tests with Minio-backed version [flink]

2024-03-28 Thread via GitHub


XComp commented on code in PR #24465:
URL: https://github.com/apache/flink/pull/24465#discussion_r1543160518


##
flink-end-to-end-tests/test-scripts/test_file_sink.sh:
##
@@ -79,30 +42,69 @@ 
TEST_PROGRAM_JAR="${END_TO_END_DIR}/flink-file-sink-test/target/FileSinkProgram.
 #   sorted content of part files
 ###
 function get_complete_result {
-  if [ "${OUT_TYPE}" == "s3" ]; then
-s3_get_by_full_path_and_filename_prefix "$OUTPUT_PATH" "$S3_PREFIX" 
"part-" true
-  fi
   find "${OUTPUT_PATH}" -type f \( -iname "part-*" \) -exec cat {} + | sort -g
 }
 
 ###
 # Get total number of lines in part files.
 #
 # Globals:
-#   S3_PREFIX
+#   OUTPUT_PATH
 # Arguments:
 #   None
 # Returns:
 #   line number in part files
 ###
 function get_total_number_of_valid_lines {
-  if [ "${OUT_TYPE}" == "local" ]; then
-get_complete_result | wc -l | tr -d '[:space:]'
-  elif [ "${OUT_TYPE}" == "s3" ]; then
-s3_get_number_of_lines_by_prefix "${S3_PREFIX}" "part-"
-  fi
+  get_complete_result | wc -l | tr -d '[:space:]'
 }
 
+if [ "${OUT_TYPE}" == "local" ]; then
+  echo "[INFO] Test run in local environment: no S3 environment is loaded."
+elif [ "${OUT_TYPE}" == "s3" ]; then
+  source "$(dirname "$0")"/common_s3_minio.sh
+  s3_setup hadoop
+
+  # overwrites JOB_OUTPUT_PATH to point to S3
+  S3_DATA_PREFIX="${RANDOM_PREFIX}"
+  S3_CHECKPOINT_PREFIX="${RANDOM_PREFIX}-chk"
+  JOB_OUTPUT_PATH="s3://$IT_CASE_S3_BUCKET/${S3_DATA_PREFIX}"
+  set_config_key "state.checkpoints.dir" 
"s3://$IT_CASE_S3_BUCKET/${S3_CHECKPOINT_PREFIX}"

Review Comment:
   The folder was not used anywhere in the old version. That is why I dropped 
this call.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-33121:
---
Release Note:   (was: Closing in favor of 
https://issues.apache.org/jira/browse/FLINK-34922)

> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> We make the assumption that Global Failures (with a null Task name) may only be 
> RootExceptions and that Local/Task exceptions may be part of a concurrent 
> exceptions List (see {{JobExceptionsHandler#createRootExceptionInfo}}).
> However, when the Adaptive scheduler is in a Restarting phase due to an 
> existing failure (that is now the new Root), we can still, on rare occasions, 
> capture new Global failures, violating this condition (an assertion is 
> thrown as part of {{assertLocalExceptionInfo}}), and we see something like:
> {code:java}
> The taskName must not be null for a non-global failure.  {code}
> We want to ignore Global failures while being in a Restarting phase on the 
> Adaptive scheduler until we properly support multiple Global failures in the 
> Exception History as part of https://issues.apache.org/jira/browse/FLINK-34922
> Note: the DefaultScheduler does not suffer from this issue, as it treats failures 
> directly as HistoryEntries (no conversion step)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2024-03-28 Thread Panagiotis Garefalakis (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831863#comment-17831863
 ] 

Panagiotis Garefalakis commented on FLINK-33121:


 Closing in favor of https://issues.apache.org/jira/browse/FLINK-34922

> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> We make the assumption that Global Failures (with a null Task name) may only be 
> RootExceptions and that Local/Task exceptions may be part of a concurrent 
> exceptions List (see {{JobExceptionsHandler#createRootExceptionInfo}}).
> However, when the Adaptive scheduler is in a Restarting phase due to an 
> existing failure (that is now the new Root), we can still, on rare occasions, 
> capture new Global failures, violating this condition (an assertion is 
> thrown as part of {{assertLocalExceptionInfo}}), and we see something like:
> {code:java}
> The taskName must not be null for a non-global failure.  {code}
> We want to ignore Global failures while being in a Restarting phase on the 
> Adaptive scheduler until we properly support multiple Global failures in the 
> Exception History as part of https://issues.apache.org/jira/browse/FLINK-34922
> Note: the DefaultScheduler does not suffer from this issue, as it treats failures 
> directly as HistoryEntries (no conversion step)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-20398][e2e] Migrate test_batch_sql.sh to Java e2e tests framework [flink]

2024-03-28 Thread via GitHub


affo commented on code in PR #24471:
URL: https://github.com/apache/flink/pull/24471#discussion_r1543172250


##
flink-end-to-end-tests/flink-batch-sql-test/src/test/java/org/apache/flink/sql/tests/BatchSQLTest.java:
##
@@ -34,66 +35,105 @@
 import org.apache.flink.table.sinks.CsvTableSink;
 import org.apache.flink.table.sources.InputFormatTableSource;
 import org.apache.flink.table.types.DataType;
+import org.apache.flink.test.junit5.MiniClusterExtension;
+import org.apache.flink.test.resources.ResourceTestUtils;
 import org.apache.flink.types.Row;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.junit.jupiter.api.extension.RegisterExtension;
+import org.junit.jupiter.api.io.TempDir;
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.EnumSource;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.Serializable;
+import java.nio.file.Files;
+import java.nio.file.Path;
 import java.time.Instant;
 import java.time.LocalDateTime;
 import java.time.ZoneOffset;
 import java.util.Iterator;
 import java.util.NoSuchElementException;
+import java.util.Optional;
+import java.util.UUID;
 
-/**
- * End-to-end test for batch SQL queries.
- *
- * The sources are generated and bounded. The result is always constant.
- *
- * Parameters: -outputPath output file path for CsvTableSink; -sqlStatement 
SQL statement that
- * will be executed as executeSql
- */
-public class BatchSQLTestProgram {
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+/** End-to-End tests for Batch SQL tests. */
+@ExtendWith(TestLoggerExtension.class)
+public class BatchSQLTest {
+private static final Logger LOG = 
LoggerFactory.getLogger(BatchSQLTest.class);
+
+private static final Path sqlPath =
+ResourceTestUtils.getResource("resources/sql-job-query.sql");
+
+@TempDir private Path tmp;
+
+@RegisterExtension
+private static final MiniClusterExtension MINI_CLUSTER =
+new MiniClusterExtension(
+new MiniClusterResourceConfiguration.Builder()
+.setNumberTaskManagers(2)
+.setNumberSlotsPerTaskManager(1)
+.build());
+
+private Path result;
+
+@BeforeEach
+public void before() {
+this.result = tmp.resolve(String.format("result-%s", 
UUID.randomUUID()));
+LOG.info("Results for this test will be stored at: {}", this.result);
+}
+
+@ParameterizedTest
+@EnumSource(
+value = BatchShuffleMode.class,
+names = {
+"ALL_EXCHANGES_BLOCKING",

Review Comment:
   Not an expert either, but I tried and I get an `IllegalState`:
   
   
   > At the moment, adaptive batch scheduler requires batch workloads to be 
executed with types of all edges being BLOCKING or 
HYBRID_FULL/HYBRID_SELECTIVE. To do that, you need to configure 
'execution.batch-shuffle-mode' to 'ALL_EXCHANGES_BLOCKING' or 
'ALL_EXCHANGES_HYBRID_FULL/ALL_EXCHANGES_HYBRID_SELECTIVE'. Note that for 
DataSet jobs which do not recognize the aforementioned shuffle mode, the 
ExecutionMode needs to be BATCH_FORCED to force BLOCKING shuffle
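
   For reference, a minimal sketch (assuming the public `Configuration` /
   `ExecutionOptions` API) of pinning one of the accepted shuffle modes before
   building the cluster configuration:
   
   ```java
   import org.apache.flink.api.common.BatchShuffleMode;
   import org.apache.flink.configuration.Configuration;
   import org.apache.flink.configuration.ExecutionOptions;
   
   class ShuffleModeConfigSketch {
       // Pin a shuffle mode that the adaptive batch scheduler accepts.
       static Configuration blockingShuffleConfig() {
           Configuration conf = new Configuration();
           conf.set(ExecutionOptions.BATCH_SHUFFLE_MODE, BatchShuffleMode.ALL_EXCHANGES_BLOCKING);
           return conf;
       }
   }
   ```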



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-34930] Externalize existing code from bahir-flink [flink-connector-kudu]

2024-03-28 Thread via GitHub


ferenc-csaky opened a new pull request, #1:
URL: https://github.com/apache/flink-connector-kudu/pull/1

   TODO: Explicitly state in the repo that the code was forked from 
[bahir-flink](https://github.com/apache/bahir-flink).
   
   For this step, I tried not to introduce changes in the ported logic. I 
separated the semantically different changes into multiple commits, so the 
[FLINK-34930] prefixed commits summarize the actual changes.
   
   The spotless and checkstyle changes are fairly big, but most of them are 
about adding javadoc and reformatting code.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34930) Move existing Kudu connector code from Bahir repo to dedicated repo

2024-03-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34930:
---
Labels: pull-request-available  (was: )

> Move existing Kudu connector code from Bahir repo to dedicated repo
> ---
>
> Key: FLINK-34930
> URL: https://issues.apache.org/jira/browse/FLINK-34930
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kudu
>Reporter: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> Move the existing Kudu connector code from the Bahir [1] repository to the 
> dedicated connector repo.
> Code should be moved only with necessary changes (bump version, change 
> groupID, integrate into the common connector CI), and we should state explicitly 
> that the code was forked from the Bahir repo.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34161][table] Migration of RewriteMinusAllRule to java [flink]

2024-03-28 Thread via GitHub


RyanSkraba commented on code in PR #24143:
URL: https://github.com/apache/flink/pull/24143#discussion_r1543209301


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/rules/logical/RewriteMinusAllRule.java:
##
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.planner.plan.utils.SetOpRewriteUtil;
+
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.plan.RelRule;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Minus;
+import org.apache.calcite.rel.core.RelFactories;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.calcite.util.Util;
+import org.immutables.value.Value;
+
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import static org.apache.calcite.sql.type.SqlTypeName.BIGINT;
+import static 
org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable.GREATER_THAN;
+
+/**
+ * Replaces logical {@link Minus} operator using a combination of union all, 
aggregate and table
+ * function.
+ *
+ * Original Query : {@code SELECT c1 FROM ut1 EXCEPT ALL SELECT c1 FROM ut2 
}

Review Comment:
   It's not a big deal, but you might want to consider using a `<pre>` block to 
keep the readable formatting from the original scaladoc.  Of course, you might 
have to escape `>` to `&gt;` :/ 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32673) Migrate Google PubSub connector to V2

2024-03-28 Thread Claire McCarthy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831873#comment-17831873
 ] 

Claire McCarthy commented on FLINK-32673:
-

Hey all! What's the status on this 
[PR|https://github.com/apache/flink-connector-gcp-pubsub/pull/2]? 

I have a new implementation of the Google Pub/Sub source connector that is 
almost ready. The goal of the new implementation is three-fold:
 # Implement 
[FLIP-27|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653748]
 # Address some users' performance concerns
 ## change the implementation from using the Pub/Sub Pull API to the Pub/Sub 
Streaming Pull API 
 ## implement automatic message leasing
 # Re-engage with this connector and start actively supporting it

I'm wondering how best to proceed considering the status of the currently 
outstanding PR. To me, there appear to be two options:
 # Get the outstanding PR merged in and then open a PR with the new 
implementation (which might overwrite a significant amount of what was just 
merged in)
 # We close the outstanding pull request and I open a fresh pull request with 
the new implementation

Let me know your thoughts; thanks!

 

> Migrate Google PubSub connector to V2
> -
>
> Key: FLINK-32673
> URL: https://issues.apache.org/jira/browse/FLINK-32673
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Alexander Fedulov
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-1.19][FLINK-34922][rest] Support concurrent global failure [flink]

2024-03-28 Thread via GitHub


zentol merged PR #24588:
URL: https://github.com/apache/flink/pull/24588


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831767#comment-17831767
 ] 

Chesnay Schepler edited comment on FLINK-34922 at 3/28/24 4:01 PM:
---

master:
dc957bfdc3aa6a8e3bce603cfc68c5c553c72220
f4c945cb9ca882ae485c2e58c74825938f154119
1.19:
faa880c703cadba4521fc8ef885a242ded4b2ac7
b54edc886ce5a533bafe74fa3629657b6266cad5


was (Author: zentol):
master:
dc957bfdc3aa6a8e3bce603cfc68c5c553c72220
f4c945cb9ca882ae485c2e58c74825938f154119

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (FLINK-34922) Exception History should support multiple Global failures

2024-03-28 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-34922:
--

> Exception History should support multiple Global failures
> -
>
> Key: FLINK-34922
> URL: https://issues.apache.org/jira/browse/FLINK-34922
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Panagiotis Garefalakis
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> Before source coordinators were introduced, global failures were rare and 
> only triggered by the JM ensuring they only happened once per failure. Since 
> this has changed now we should adjust accordingly and support multiple global 
> failures as part of the exception history.
> Relevant discussion under: 
> https://github.com/apache/flink/pull/23440#pullrequestreview-1701775436



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34161][table] Migration of RewriteMinusAllRule to java [flink]

2024-03-28 Thread via GitHub


RyanSkraba commented on code in PR #24143:
URL: https://github.com/apache/flink/pull/24143#discussion_r1543215856


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/rules/logical/RewriteMinusAllRule.java:
##
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+import org.apache.flink.table.planner.plan.utils.SetOpRewriteUtil;
+
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.plan.RelRule;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Minus;
+import org.apache.calcite.rel.core.RelFactories;
+import org.apache.calcite.tools.RelBuilder;
+import org.apache.calcite.util.Util;
+import org.immutables.value.Value;
+
+import java.util.List;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import static org.apache.calcite.sql.type.SqlTypeName.BIGINT;
+import static 
org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable.GREATER_THAN;
+
+/**
+ * Replaces logical {@link Minus} operator using a combination of union all, 
aggregate and table
+ * function.
+ *
+ * Original Query : {@code SELECT c1 FROM ut1 EXCEPT ALL SELECT c1 FROM ut2 
}
+ *
+ * Rewritten Query: {@code SELECT c1 FROM ( SELECT c1, sum_val FROM ( 
SELECT c1, sum(vcol_marker)
+ * AS sum_val FROM ( SELECT c1, 1L as vcol_marker FROM ut1 UNION ALL SELECT 
c1, -1L as vcol_marker
+ * FROM ut2 ) AS union_all GROUP BY union_all.c1 ) WHERE sum_val > 0 ) LATERAL
+ * TABLE(replicate_row(sum_val, c1)) AS T(c1) }
+ *
+ * Only handle the case of input size 2.
+ */
+@Value.Enclosing
+public class RewriteMinusAllRule extends 
RelRule<RewriteMinusAllRule.RewriteMinusAllRuleConfig> {
+public static final RewriteMinusAllRule INSTANCE = 
RewriteMinusAllRuleConfig.DEFAULT.toRule();
+
+protected RewriteMinusAllRule(RewriteMinusAllRuleConfig config) {
+super(config);
+}
+
+@Override
+public boolean matches(RelOptRuleCall call) {
+Minus minus = call.rel(0);
+return minus.all && minus.getInputs().size() == 2;
+}
+
+@Override
+public void onMatch(RelOptRuleCall call) {
+Minus minus = call.rel(0);
+RelNode left = minus.getInput(0);
+RelNode right = minus.getInput(1);
+
+List fields = Util.range(minus.getRowType().getFieldCount());
+
+// 1. add vcol_marker to left rel node
+RelBuilder leftBuilder = call.builder().push(left);

Review Comment:
   You've slightly changed the variable assignment here from the scala version 
(the left builder originally didn't include the `.push(left)`).  This is fine 
since the push method returns the same instance, but it should probably be 
consistent below with the right side.
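
   A hypothetical sketch of the consistent form suggested here, relying on
   `RelBuilder#push` returning the builder itself:
   
   ```java
   import org.apache.calcite.plan.RelOptRuleCall;
   import org.apache.calcite.rel.RelNode;
   import org.apache.calcite.tools.RelBuilder;
   
   class PushConsistencySketch {
       static void sketch(RelOptRuleCall call, RelNode left, RelNode right) {
           RelBuilder leftBuilder = call.builder().push(left);   // push returns this builder
           RelBuilder rightBuilder = call.builder().push(right); // mirrors the left side
       }
   }
   ```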



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-20625) Refactor Google Cloud PubSub Source in accordance with FLIP-27

2024-03-28 Thread Claire McCarthy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831880#comment-17831880
 ] 

Claire McCarthy commented on FLINK-20625:
-

I can contribute here. Just left a comment on 
[FLINK-32673|https://issues.apache.org/jira/browse/FLINK-32673]

> Refactor Google Cloud PubSub Source in accordance with FLIP-27
> --
>
> Key: FLINK-20625
> URL: https://issues.apache.org/jira/browse/FLINK-20625
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Google Cloud PubSub
>Reporter: Jakob Edding
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> The Source implementation for Google Cloud Pub/Sub should be refactored in 
> accordance with [FLIP-27: Refactor Source 
> Interface|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653748].
> *Split Enumerator*
> Pub/Sub doesn't expose any partitions to consuming applications. Therefore, 
> the implementation of the Pub/Sub Split Enumerator won't do any real work 
> discovery. Instead, a static Source Split is handed to Source Readers which 
> request a Source Split. This static Source Split merely contains details 
> about the connection to Pub/Sub and the concrete Pub/Sub subscription to use 
> but no Split-specific information like partitions/offsets because this 
> information can not be obtained.
> *Source Reader*
> A Source Reader will use Pub/Sub's pull mechanism to read new messages from 
> the Pub/Sub subscription specified in the SourceSplit. In the case of 
> parallel-running Source Readers in Flink, every Source Reader will be passed 
> the same Source Split from the Enumerator. Because of this, all Source 
> Readers use the same connection details and the same Pub/Sub subscription to 
> receive messages. In this case, Pub/Sub will automatically load-balance 
> messages between all Source Readers pulling from the same subscription. This 
> way, parallel processing can be achieved in the Source.
> *At-least-once guarantee*
> Pub/Sub itself guarantees at-least-once message delivery so it is the goal to 
> keep up this guarantee in the Source as well. A mechanism that can be used to 
> achieve this is that Pub/Sub expects a message to be acknowledged by the 
> subscriber to signal that the message has been consumed successfully. Any 
> message that has not been acknowledged yet will be automatically redelivered 
> by Pub/Sub once an ack deadline has passed.
> After a certain time interval has elapsed...
>  # all pulled messages are checkpointed in the Source Reader
>  # same messages are acknowledged to Pub/Sub
>  # same messages are forwarded to downstream Flink tasks
> This should ensure at-least-once delivery in the Source because in the case 
> of failure, non-checkpointed messages have not yet been acknowledged and will 
> therefore be redelivered by the Pub/Sub service.
> Because of the static Source Split, it appears like checkpointing is not 
> necessary in the Split Enumerator.
> *Possible exactly-once guarantee*
> It should even be possible to achieve exactly-once guarantees for the source. 
> The following requirements would have to be met to have an exactly-once mode 
> besides the at-least-once mode similar to how it is done in the [current 
> RabbitMQ 
> Source|https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/rabbitmq.html#rabbitmq-source]:
>  * The system which publishes messages to Pub/Sub must add an id to each 
> message so that messages can be deduplicated in the Source.
>  * The Source must run in a non-parallel fashion (with parallelism=1).
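
A minimal sketch of the static-split design described above, assuming the FLIP-27
{{SplitEnumerator}}/{{SplitEnumeratorContext}} interfaces; {{PubSubSourceSplit}} is a
hypothetical split type carrying only the subscription to pull from:

{code:java}
import org.apache.flink.api.connector.source.SourceSplit;
import org.apache.flink.api.connector.source.SplitEnumerator;
import org.apache.flink.api.connector.source.SplitEnumeratorContext;

import java.util.List;

// Hypothetical split: no partitions or offsets to track, just the subscription.
class PubSubSourceSplit implements SourceSplit {
    final String subscription;
    PubSubSourceSplit(String subscription) { this.subscription = subscription; }
    @Override public String splitId() { return "pubsub-" + subscription; }
}

// Sketch only: every reader requesting work receives the same static split;
// Pub/Sub then load-balances messages across all readers pulling from the
// same subscription, which yields the parallel processing described above.
class StaticSplitEnumerator implements SplitEnumerator<PubSubSourceSplit, Void> {
    private final SplitEnumeratorContext<PubSubSourceSplit> context;
    private final PubSubSourceSplit staticSplit;

    StaticSplitEnumerator(
            SplitEnumeratorContext<PubSubSourceSplit> context, PubSubSourceSplit split) {
        this.context = context;
        this.staticSplit = split;
    }

    @Override public void start() {} // no real work discovery to do

    @Override
    public void handleSplitRequest(int subtaskId, String requesterHostname) {
        context.assignSplit(staticSplit, subtaskId);
    }

    @Override public void addSplitsBack(List<PubSubSourceSplit> splits, int subtaskId) {}
    @Override public void addReader(int subtaskId) {}
    @Override public Void snapshotState(long checkpointId) { return null; } // stateless
    @Override public void close() {}
}
{code}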



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

