[jira] [Commented] (FLINK-34633) Support unnesting array constants

2024-06-08 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853436#comment-17853436
 ] 

Jeyhun Karimov commented on FLINK-34633:


Hi [~fanrui] 

Yes, considering that patch versions should not include new features, this PR 
should not be part of the 1.19.1 release.

IMO, the fix version should be updated to 1.20.0. Do you agree, [~qingyue]?

> Support unnesting array constants
> -
>
> Key: FLINK-34633
> URL: https://issues.apache.org/jira/browse/FLINK-34633
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.18.1
>Reporter: Xingcan Cui
>Assignee: Jeyhun Karimov
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.1
>
>
> It seems that the current planner doesn't support using UNNEST on array 
> constants.(x)
> {code:java}
> SELECT * FROM UNNEST(ARRAY[1,2,3]);{code}
>  
> The following query can't be compiled.(x)
> {code:java}
> SELECT * FROM (VALUES('a')) CROSS JOIN UNNEST(ARRAY[1, 2, 3]){code}
>  
> The rewritten version works. (/)
> {code:java}
> SELECT * FROM (SELECT *, ARRAY[1,2,3] AS A FROM (VALUES('a'))) CROSS JOIN 
> UNNEST(A){code}
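For anyone reproducing this, here is a minimal sketch (not part of the issue) that 
runs the working rewritten query through the Table API; it assumes a standard 
streaming TableEnvironment:
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UnnestArrayConstantWorkaround {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inStreamingMode().build());
        // Workaround from the description: materialize the array constant as a
        // column first, then UNNEST that column.
        tEnv.executeSql(
                        "SELECT * FROM (SELECT *, ARRAY[1,2,3] AS A FROM (VALUES('a'))) "
                                + "CROSS JOIN UNNEST(A)")
                .print();
    }
}
{code}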



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34633) Support unnesting array constants

2024-06-08 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853436#comment-17853436
 ] 

Jeyhun Karimov edited comment on FLINK-34633 at 6/8/24 9:35 PM:


Hi [~fanrui] 

Very good point. Yes, considering that patch versions should not include new 
features, this PR should not be part of the 1.19.1 release.

IMO, the fix version should be updated to 1.20.0. Do you agree, [~qingyue]?


was (Author: jeyhunkarimov):
Hi [~fanrui] 

Yes, considering that patch versions should not include new features, this PR 
should not be part of the 1.19.1 release.

IMO, the fix version should be updated to 1.20.0. Do you agree, [~qingyue]?

> Support unnesting array constants
> -
>
> Key: FLINK-34633
> URL: https://issues.apache.org/jira/browse/FLINK-34633
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.18.1
>Reporter: Xingcan Cui
>Assignee: Jeyhun Karimov
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.1
>
>
> It seems that the current planner doesn't support using UNNEST on array 
> constants.(x)
> {code:java}
> SELECT * FROM UNNEST(ARRAY[1,2,3]);{code}
>  
> The following query can't be compiled.(x)
> {code:java}
> SELECT * FROM (VALUES('a')) CROSS JOIN UNNEST(ARRAY[1, 2, 3]){code}
>  
> The rewritten version works. (/)
> {code:java}
> SELECT * FROM (SELECT *, ARRAY[1,2,3] AS A FROM (VALUES('a'))) CROSS JOIN 
> UNNEST(A){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35548] Add E2E tests for PubSubSinkV2 [flink-connector-gcp-pubsub]

2024-06-08 Thread via GitHub


jeyhunkarimov commented on code in PR #28:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/28#discussion_r1632110151


##
flink-connector-gcp-pubsub-e2e-tests/src/test/java/org/apache/flink/connector/gcp/pubsub/sink/util/PubsubHelper.java:
##
@@ -36,27 +40,50 @@
 import com.google.pubsub.v1.ReceivedMessage;
 import com.google.pubsub.v1.Topic;
 import com.google.pubsub.v1.TopicName;
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.time.Duration;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
 
 /** A helper class to make managing the testing topics a bit easier. */
 public class PubsubHelper {
 
 private static final Logger LOG = 
LoggerFactory.getLogger(PubsubHelper.class);
 
+private static final Duration SHUTDOWN_TIMEOUT = Duration.ofSeconds(5);
+
+private ManagedChannel channel;

Review Comment:
   `private final ManagedChannel channel ` ?
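   For illustration only (not the PR's code), a sketch of what the suggestion 
could look like; it relies on the `SHUTDOWN_TIMEOUT` constant and the imports 
from the hunk above, and the constructor signature is an assumption:
   ```java
   // Hypothetical sketch of the suggestion, not the PR's actual code:
   // make the channel final and initialize it once in the constructor.
   private final ManagedChannel channel;

   public PubsubHelper(String endpoint) {
       this.channel = ManagedChannelBuilder.forTarget(endpoint).usePlaintext().build();
   }

   public void close() throws InterruptedException {
       channel.shutdownNow();
       channel.awaitTermination(SHUTDOWN_TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
   }
   ```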



##
flink-connector-gcp-pubsub-e2e-tests/src/test/java/org/apache/flink/connector/gcp/pubsub/sink/PubSubSinkV2ITTests.java:
##
@@ -0,0 +1,115 @@
+package org.apache.flink.connector.gcp.pubsub.sink;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import 
org.apache.flink.connector.datagen.functions.FromElementsGeneratorFunction;
+import org.apache.flink.connector.datagen.source.DataGeneratorSource;
+import org.apache.flink.connector.gcp.pubsub.sink.config.GcpPublisherConfig;
+import org.apache.flink.connector.gcp.pubsub.sink.util.PubsubHelper;
+import org.apache.flink.connector.gcp.pubsub.sink.util.TestChannelProvider;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import 
org.apache.flink.streaming.connectors.gcp.pubsub.emulator.EmulatorCredentialsProvider;
+import 
org.apache.flink.streaming.connectors.gcp.pubsub.test.DockerImageVersions;
+import org.apache.flink.test.junit5.MiniClusterExtension;
+import org.apache.flink.util.TestLogger;
+
+import com.google.pubsub.v1.ReceivedMessage;
+import org.assertj.core.api.Assertions;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.junit.jupiter.api.parallel.Execution;
+import org.junit.jupiter.api.parallel.ExecutionMode;
+import org.testcontainers.containers.PubSubEmulatorContainer;
+import org.testcontainers.junit.jupiter.Container;
+import org.testcontainers.junit.jupiter.Testcontainers;
+import org.testcontainers.utility.DockerImageName;
+
+import java.io.IOException;
+import java.util.List;
+
+/** Integration tests for {@link PubSubSinkV2} using {@link 
PubSubEmulatorContainer}. */
+@ExtendWith(MiniClusterExtension.class)
+@Execution(ExecutionMode.CONCURRENT)
+@Testcontainers
+class PubSubSinkV2ITTests extends TestLogger {
+
+private static final String PROJECT_ID = "test-project";
+
+private static final String TOPIC_ID = "test-topic";
+
+private static final String SUBSCRIPTION_ID = "test-subscription";
+
+private StreamExecutionEnvironment env;
+
+@Container
+private static final PubSubEmulatorContainer PUB_SUB_EMULATOR_CONTAINER =
+new PubSubEmulatorContainer(
+
DockerImageName.parse(DockerImageVersions.GOOGLE_CLOUD_PUBSUB_EMULATOR));
+
+private PubsubHelper pubSubHelper;
+
+@BeforeEach
+void setUp() throws IOException {
+env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.setParallelism(1);

Review Comment:
   Any specific reason for `parallelism=1`?



##
flink-connector-gcp-pubsub-e2e-tests/src/test/java/org/apache/flink/connector/gcp/pubsub/sink/util/PubsubHelper.java:
##
@@ -36,27 +40,50 @@
 import com.google.pubsub.v1.ReceivedMessage;
 import com.google.pubsub.v1.Topic;
 import com.google.pubsub.v1.TopicName;
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.time.Duration;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
 
 /** A helper class to make managing the testing topics a bit easier. */
 public class PubsubHelper {
 
 private static final Logger LOG = 
LoggerFactory.getLogger(PubsubHelper.class);
 
+private static final Duration SHUTDOWN_TIMEOUT = Duration.ofSeconds(5);
+
+private ManagedChannel channel;
+
 private TransportChannelProvider channelProvider;
 
 private TopicAdminClient topicClient;
+
 private 

Re: [PR] [FLINK-34924][table] Support partition pushdown for join queries [flink]

2024-06-08 Thread via GitHub


jeyhunkarimov commented on PR #24559:
URL: https://github.com/apache/flink/pull/24559#issuecomment-2156176949

   Hi @libenchao, a kind reminder to review this when you have time. Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread Aleksandr Pilipenko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Pilipenko updated FLINK-35559:

Description: 
Incorrect shading configuration causes a ClassCastException during exception 
handling when a job packages flink-connector-kinesis together with 
flink-connector-aws-kinesis-firehose.
{code:java}
java.lang.ClassCastException: class 
software.amazon.awssdk.services.firehose.model.FirehoseException cannot be cast 
to class 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.awscore.exception.AwsServiceException
 (software.amazon.awssdk.services.firehose.model.FirehoseException and 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.awscore.exception.AwsServiceException
 are in unnamed module of loader 'app')
at 
org.apache.flink.connector.aws.sink.throwable.AWSExceptionClassifierUtil.lambda$withAWSServiceErrorCode$0(AWSExceptionClassifierUtil.java:62)
 ~[flink-connector-kinesis-4.3-SNAPSHOT.jar:4.3-SNAPSHOT]
at 
org.apache.flink.connector.base.sink.throwable.FatalExceptionClassifier.isFatal(FatalExceptionClassifier.java:45)
 ~[flink-connector-base-1.19.0.jar:1.19.0]
at 
org.apache.flink.connector.base.sink.throwable.FatalExceptionClassifier.isFatal(FatalExceptionClassifier.java:51)
 ~[flink-connector-base-1.19.0.jar:1.19.0]
at 
org.apache.flink.connector.base.sink.throwable.FatalExceptionClassifier.isFatal(FatalExceptionClassifier.java:51)
 ~[flink-connector-base-1.19.0.jar:1.19.0]
at 
org.apache.flink.connector.base.sink.throwable.FatalExceptionClassifier.isFatal(FatalExceptionClassifier.java:51)
 ~[flink-connector-base-1.19.0.jar:1.19.0]
at 
org.apache.flink.connector.aws.sink.throwable.AWSExceptionHandler.consumeIfFatal(AWSExceptionHandler.java:53)
 ~[flink-connector-kinesis-4.3-SNAPSHOT.jar:4.3-SNAPSHOT]
at 
org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkWriter.handleFullyFailedRequest(KinesisFirehoseSinkWriter.java:218)
 ~[flink-connector-aws-kinesis-firehose-4.3-SNAPSHOT.jar:4.3-SNAPSHOT]
at 
org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkWriter.lambda$submitRequestEntries$0(KinesisFirehoseSinkWriter.java:189)
 ~[flink-connector-aws-kinesis-firehose-4.3-SNAPSHOT.jar:4.3-SNAPSHOT]
at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2094)
 ~[?:?]
at 
software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)
 ~[utils-2.20.144.jar:?]
at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2094)
 ~[?:?]
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallMetricCollectionStage.lambda$execute$0(AsyncApiCallMetricCollectionStage.java:56)
 ~[sdk-core-2.20.144.jar:?]
at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2094)
 ~[?:?]
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallTimeoutTrackingStage.lambda$execute$2(AsyncApiCallTimeoutTrackingStage.java:67)
 ~[sdk-core-2.20.144.jar:?]
at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2094)
 ~[?:?]
at 
software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)
 ~[utils-2.20.144.jar:?]
at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
 ~[?:?]
at 

[jira] [Updated] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread Aleksandr Pilipenko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Pilipenko updated FLINK-35559:

Description: 
Incorrect shading configuration causes a ClassCastException during exception 
handling when a job packages flink-connector-kinesis together with 
flink-connector-aws-kinesis-firehose.

AWSExceptionClassifierUtil is not currently relocated during the build of 
flink-connector-kinesis, but it depends on AwsServiceException, which is 
relocated:
 !Screenshot 2024-06-08 at 18.19.30.png! 

  was:
Incorrect shading configuration causes a ClassCastException during exception 
handling when a job packages flink-connector-kinesis together with 
flink-connector-aws-kinesis-firehose.

 !Screenshot 2024-06-08 at 18.19.30.png! 


> Shading issue cause class conflict
> --
>
> Key: FLINK-35559
> URL: https://issues.apache.org/jira/browse/FLINK-35559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: aws-connector-4.2.0, aws-connector-4.3.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.4.0
>
> Attachments: Screenshot 2024-06-08 at 18.19.30.png
>
>
> Incorrect shading configuration causes a ClassCastException during exception 
> handling when a job packages flink-connector-kinesis together with 
> flink-connector-aws-kinesis-firehose.
> AWSExceptionClassifierUtil is not currently relocated during the build of 
> flink-connector-kinesis, but it depends on AwsServiceException, which is 
> relocated:
>  !Screenshot 2024-06-08 at 18.19.30.png! 
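To make the failure mode concrete, here is a small self-contained sketch 
(simplified stand-in classes, not the real AWS SDK or shaded packages): two 
classes that differ only by their defining package are unrelated types, so the 
cast fails exactly like the FirehoseException-to-relocated-AwsServiceException 
cast in the stack trace.
{code:java}
// Illustration only: stand-in classes, not the real AWS SDK or shaded packages.
public class RelocationCastDemo {

    static class OriginalException extends RuntimeException {}   // un-relocated copy
    static class RelocatedException extends RuntimeException {}  // shaded/relocated copy

    public static void main(String[] args) {
        Object thrown = new OriginalException();
        try {
            // Mirrors the classifier cast: the object is an instance of the
            // un-relocated class, so casting it to the relocated one fails at runtime.
            RelocatedException relocated = (RelocatedException) thrown;
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in FLINK-35559: " + e.getMessage());
        }
    }
}
{code}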



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread Aleksandr Pilipenko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Pilipenko updated FLINK-35559:

Description: 
Incorrect shading configuration causes a ClassCastException during exception 
handling when a job packages flink-connector-kinesis together with 
flink-connector-aws-kinesis-firehose.



  was:Incorrect shading configuration causes a ClassCastException during 
exception handling when a job packages flink-connector-kinesis together with 
flink-connector-aws-kinesis-firehose.


> Shading issue cause class conflict
> --
>
> Key: FLINK-35559
> URL: https://issues.apache.org/jira/browse/FLINK-35559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: aws-connector-4.2.0, aws-connector-4.3.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.4.0
>
> Attachments: Screenshot 2024-06-08 at 18.19.30.png
>
>
> Incorrect shading configuration causes a ClassCastException during exception 
> handling when a job packages flink-connector-kinesis together with 
> flink-connector-aws-kinesis-firehose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread Aleksandr Pilipenko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Pilipenko updated FLINK-35559:

Attachment: Screenshot 2024-06-08 at 18.19.30.png

> Shading issue cause class conflict
> --
>
> Key: FLINK-35559
> URL: https://issues.apache.org/jira/browse/FLINK-35559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: aws-connector-4.2.0, aws-connector-4.3.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.4.0
>
> Attachments: Screenshot 2024-06-08 at 18.19.30.png
>
>
> Incorrect shading configuration causes a ClassCastException during exception 
> handling when a job packages flink-connector-kinesis together with 
> flink-connector-aws-kinesis-firehose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread Aleksandr Pilipenko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Pilipenko updated FLINK-35559:

Description: 
Incorrect shading configuration causes a ClassCastException during exception 
handling when a job packages flink-connector-kinesis together with 
flink-connector-aws-kinesis-firehose.

 !Screenshot 2024-06-08 at 18.19.30.png! 

  was:
Incorrect shading configuration causes a ClassCastException during exception 
handling when a job packages flink-connector-kinesis together with 
flink-connector-aws-kinesis-firehose.




> Shading issue cause class conflict
> --
>
> Key: FLINK-35559
> URL: https://issues.apache.org/jira/browse/FLINK-35559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: aws-connector-4.2.0, aws-connector-4.3.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.4.0
>
> Attachments: Screenshot 2024-06-08 at 18.19.30.png
>
>
> Incorrect shading configuration causes a ClassCastException during exception 
> handling when a job packages flink-connector-kinesis together with 
> flink-connector-aws-kinesis-firehose.
>  !Screenshot 2024-06-08 at 18.19.30.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35559:
---
Labels: pull-request-available  (was: )

> Shading issue cause class conflict
> --
>
> Key: FLINK-35559
> URL: https://issues.apache.org/jira/browse/FLINK-35559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: aws-connector-4.2.0, aws-connector-4.3.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.4.0
>
>
> Incorrect shading configuration causes a ClassCastException during exception 
> handling when a job packages flink-connector-kinesis together with 
> flink-connector-aws-kinesis-firehose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35559][Connector/Kinesis] fix shading configuration to avoid class conflicts when using flink-connector-kinesis alongside Firehose sink [flink-connector-aws]

2024-06-08 Thread via GitHub


z3d1k opened a new pull request, #142:
URL: https://github.com/apache/flink-connector-aws/pull/142

   
   
   ## Purpose of the change
   
   Add `org.apache.flink.connector.aws.sink` to the relocation list in order to 
prevent class conflicts when `flink-connector-kinesis` is used alongside 
`flink-connector-aws-kinesis-firehose`.
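   As a purely hypothetical extra check (not part of this PR), printing which jar 
the classifier and the AWS exception base class are loaded from on a job's 
classpath makes conflicts like the one in FLINK-35559 easier to spot; the class 
names below are taken from the stack trace in the issue:
   ```java
   // Hypothetical diagnostic, not included in this PR: print the code source of
   // the classes involved in the ClassCastException from FLINK-35559.
   public class RelocationSpotCheck {
       public static void main(String[] args) throws Exception {
           String[] names = {
               "org.apache.flink.connector.aws.sink.throwable.AWSExceptionClassifierUtil",
               "software.amazon.awssdk.awscore.exception.AwsServiceException"
           };
           for (String name : names) {
               Class<?> clazz = Class.forName(name);
               // Prints the jar each class was actually loaded from on this classpath.
               System.out.println(name + " -> " + clazz.getProtectionDomain().getCodeSource());
           }
       }
   }
   ```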
   
   ## Verifying this change
   
   - Manually verified by running the Kinesis connector and Firehose sink on a 
local Flink cluster.
   - Verified build artifacts
   
   ## Significant changes
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for convenience.)*
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
 - If yes, how is this documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread Aleksandr Pilipenko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Pilipenko updated FLINK-35559:

Description: Incorrect shading configuration causes a ClassCastException 
during exception handling when a job packages flink-connector-kinesis together 
with flink-connector-aws-kinesis-firehose.  (was: Incorrect shading 
configuration causes a ClassCastException during exception handling when a job 
packages flink-connector-kinesis together with another AWS sink.)

> Shading issue cause class conflict
> --
>
> Key: FLINK-35559
> URL: https://issues.apache.org/jira/browse/FLINK-35559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: aws-connector-4.2.0, aws-connector-4.3.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
> Fix For: aws-connector-4.4.0
>
>
> Incorrect shading configuration causes a ClassCastException during exception 
> handling when a job packages flink-connector-kinesis together with 
> flink-connector-aws-kinesis-firehose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread Aleksandr Pilipenko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Pilipenko updated FLINK-35559:

Component/s: Connectors / Firehose
 Connectors / Kinesis

> Shading issue cause class conflict
> --
>
> Key: FLINK-35559
> URL: https://issues.apache.org/jira/browse/FLINK-35559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: aws-connector-4.2.0, aws-connector-4.3.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
> Fix For: aws-connector-4.4.0
>
>
> Incorrect shading configuration causes a ClassCastException during exception 
> handling when a job packages flink-connector-kinesis together with another 
> AWS sink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35559) Shading issue cause class conflict

2024-06-08 Thread Aleksandr Pilipenko (Jira)
Aleksandr Pilipenko created FLINK-35559:
---

 Summary: Shading issue cause class conflict
 Key: FLINK-35559
 URL: https://issues.apache.org/jira/browse/FLINK-35559
 Project: Flink
  Issue Type: Bug
  Components: Connectors / AWS
Affects Versions: aws-connector-4.3.0, aws-connector-4.2.0
Reporter: Aleksandr Pilipenko
 Fix For: aws-connector-4.4.0


Incorrect shading configuration causes a ClassCastException during exception 
handling when a job packages flink-connector-kinesis together with another AWS 
sink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35548) Add E2E tests for PubSubSinkV2

2024-06-08 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853409#comment-17853409
 ] 

Ahmed Hamdy commented on FLINK-35548:
-

[~snuyanzin] Could you please help review this one?

> Add E2E tests for PubSubSinkV2
> --
>
> Key: FLINK-35548
> URL: https://issues.apache.org/jira/browse/FLINK-35548
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: gcp-pubsub-3.2.0
>
>
> Add E2E tests for 
> [PubSubSinkV2|https://issues.apache.org/jira/browse/FLINK-24298] in [E2E 
> tests 
> module|https://github.com/apache/flink-connector-gcp-pubsub/tree/main/flink-connector-gcp-pubsub-e2e-tests]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [Flink-35473][table] Improve Table/SQL Configuration for Flink 2.0 [flink]

2024-06-08 Thread via GitHub


lincoln-lil commented on code in PR #24889:
URL: https://github.com/apache/flink/pull/24889#discussion_r1632045102


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/optimize/RelNodeBlock.scala:
##
@@ -352,7 +353,14 @@ class RelNodeBlockPlanBuilder private (tableConfig: 
ReadableConfig) {
 
 object RelNodeBlockPlanBuilder {
 
-  // It is a experimental config, will may be removed later.
+  /**
+   * Whether to treat union-all node as a breakpoint.
+   * @deprecated
+   *   This configuration has been deprecated as part of FLIP-457. Please use
+   *   
[[org.apache.flink.table.api.config.OptimizerConfigOptions.TABLE_OPTIMIZER_UNIONALL_AS_BREAKPOINT_ENABLED]]
+   *   instead.
+   */
+  @Deprecated

Review Comment:
   The cautious approach makes sense. In addition to marking it deprecated, we 
should also add a note that it will be removed in version 2.0.
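   For example (Java shown for brevity; the file under review is Scala, and the 
field name and key below are placeholders, not the actual code), the deprecation 
notice could spell out the removal version explicitly:
   ```java
   // Illustration only: deprecate and state the removal version explicitly.
   public final class RelNodeBlockOptionsSketch {

       /**
        * Whether to treat a union-all node as a breakpoint.
        *
        * @deprecated Deprecated as part of FLIP-457. Use
        *     OptimizerConfigOptions.TABLE_OPTIMIZER_UNIONALL_AS_BREAKPOINT_ENABLED
        *     instead. This option will be removed in Flink 2.0.
        */
       @Deprecated
       public static final String UNION_ALL_AS_BREAKPOINT_KEY =
               "table.optimizer.union-all-as-breakpoint-enabled"; // placeholder key
   }
   ```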



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34482][checkpoint] Rename checkpointing options [flink]

2024-06-08 Thread via GitHub


masteryhx commented on code in PR #24878:
URL: https://github.com/apache/flink/pull/24878#discussion_r1632041988


##
flink-core/src/main/java/org/apache/flink/configuration/CheckpointingOptions.java:
##
@@ -97,11 +97,11 @@ public class CheckpointingOptions {
  * to {@link #CHECKPOINTS_DIRECTORY}, and the checkpoint meta data and 
actual program state
  * will both be persisted to the path.
  */
-@Documentation.Section(value = 
Documentation.Sections.COMMON_STATE_BACKENDS, position = 2)

Review Comment:
   I have introduced a new section `common_checkpointing` for them.
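   A rough sketch of what that could look like (the section constant name and the 
option below are assumptions for illustration, not the exact code in this PR):
   ```java
   import org.apache.flink.annotation.docs.Documentation;
   import org.apache.flink.configuration.ConfigOption;
   import org.apache.flink.configuration.ConfigOptions;

   // Assumed shape of the change: the checkpointing-related options move from the
   // COMMON_STATE_BACKENDS section to a new common_checkpointing section.
   public class CheckpointingSectionSketch {
       @Documentation.Section(value = Documentation.Sections.COMMON_CHECKPOINTING, position = 2)
       public static final ConfigOption<String> CHECKPOINTS_DIRECTORY =
               ConfigOptions.key("state.checkpoints.dir") // placeholder key
                       .stringType()
                       .noDefaultValue()
                       .withDescription("Illustrative stand-in for the real option.");
   }
   ```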



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34482][checkpoint] Rename checkpointing options [flink]

2024-06-08 Thread via GitHub


masteryhx commented on code in PR #24878:
URL: https://github.com/apache/flink/pull/24878#discussion_r1632041854


##
flink-core/src/main/java/org/apache/flink/configuration/CheckpointingOptions.java:
##
@@ -210,11 +209,11 @@ public class CheckpointingOptions {
  * Local recovery currently only covers keyed state backends. 
Currently, MemoryStateBackend
  * does not support local recovery and ignore this option.
  */
-@Documentation.Section(Documentation.Sections.COMMON_STATE_BACKENDS)
-public static final ConfigOption 
LOCAL_RECOVERY_TASK_MANAGER_STATE_ROOT_DIRS =

Review Comment:
   I have kept it as before.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35138) Release flink-connector-kafka v3.2.0 for Flink 1.19

2024-06-08 Thread yazgoo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853367#comment-17853367
 ] 

yazgoo commented on FLINK-35138:


Awesome!
Thanks for your work!

> Release flink-connector-kafka v3.2.0 for Flink 1.19
> ---
>
> Key: FLINK-35138
> URL: https://issues.apache.org/jira/browse/FLINK-35138
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: kafka-3.2.0
>
>
> https://github.com/apache/flink-connector-kafka



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [Flink-35473][table] Improve Table/SQL Configuration for Flink 2.0 [flink]

2024-06-08 Thread via GitHub


LadyForest commented on code in PR #24889:
URL: https://github.com/apache/flink/pull/24889#discussion_r1631874007


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/optimize/RelNodeBlock.scala:
##
@@ -352,7 +353,14 @@ class RelNodeBlockPlanBuilder private (tableConfig: 
ReadableConfig) {
 
 object RelNodeBlockPlanBuilder {
 
-  // It is a experimental config, will may be removed later.
+  /**
+   * Whether to treat union-all node as a breakpoint.
+   * @deprecated
+   *   This configuration has been deprecated as part of FLIP-457. Please use
+   *   
[[org.apache.flink.table.api.config.OptimizerConfigOptions.TABLE_OPTIMIZER_UNIONALL_AS_BREAKPOINT_ENABLED]]
+   *   instead.
+   */
+  @Deprecated

Review Comment:
   > IIUC, the proposed change 'Move the configurations to ...' of the FLIP 
means that we can just make the move without having to keep the old positions, 
because we haven't changed these configuration option names. So if there are no 
objections, you can remove the option from the original classes.
   
   Yes, that was the original plan. But during the 
[discussion](https://lists.apache.org/thread/n52smchwgr6qfg3tfvmxpdshk8obgor2), 
Xuannan mentioned the API compatibility guarantees. The only concern is that, 
although there is no constraint on the Experimental API and, in theory, users 
should not directly rely on table-planner in their projects, we are still not 
sure whether users rely on these variables in their code.
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31533) CREATE TABLE AS SELECT should support to define partition

2024-06-08 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853357#comment-17853357
 ] 

luoyuxia commented on FLINK-31533:
--

[~spena] Great! Feel free to take it!

> CREATE TABLE AS SELECT should support to define partition
> -
>
> Key: FLINK-31533
> URL: https://issues.apache.org/jira/browse/FLINK-31533
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: luoyuxia
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33192) State memory leak in the Window Operator due to unregistered cleanup timer

2024-06-08 Thread Kartikey Pant (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853344#comment-17853344
 ] 

Kartikey Pant commented on FLINK-33192:
---

Hello [~Yanfei Lei],

I've submitted a pull request ([GitHub Pull Request 
#24917|https://github.com/apache/flink/pull/24917]) aimed at resolving the 
reported bug. Additionally, I believe there may be a similar memory leak in the 
merging-window path 
([https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperator.java#L369-L388]),
 so I've included the corresponding modification in this pull request as 
well.

Could you please review it and let me know if everything looks good, or if any 
further adjustments or validations are necessary?

Thank you.

> State memory leak in the Window Operator due to unregistered cleanup timer
> --
>
> Key: FLINK-33192
> URL: https://issues.apache.org/jira/browse/FLINK-33192
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.6, 1.15.4, 1.16.2, 1.17.1
>Reporter: Vidya Sagar Kalvakunta
>Assignee: Kartikey Pant
>Priority: Major
>  Labels: easyfix, pull-request-available
>
> I have encountered a state memory leak in the default window operator. The 
> cleanup timer for a window is not registered when the window is fired 
> immediately after creation but does not emit a result. The window is added to 
> the window state, and because the cleanup timer isn't registered, it is never 
> cleaned up and lives forever.
> *Steps to Reproduce:*
>  # Write a custom trigger that triggers for every element.
>  # Write a custom aggregate function that never produces a result.
>  # Use a default tumbling event time window with this custom trigger and 
> aggregate function.
>  # Publish events spanning multiple time windows.
>  # The window state will contain all the windows even after their 
> expiry/cleanup time.
> *Code with the bug:*
> [https://github.com/apache/flink/blob/cd95b560d0c11a64b42bf6b98107314d32a4de86/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperator.java#L398-L417|https://github.com/apache/flink/blob/cd95b560d0c11a64b42bf6b98107314d32a4de86/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperator.java#L399-L417]
> {code:java}
> windowState.setCurrentNamespace(window);
> windowState.add(element.getValue());
> if (triggerResult.isFire()) {
>     ACC contents = windowState.get();
>     if (contents == null) {
>         continue;
>     }
>     emitWindowContents(window, contents);
> }
> if (triggerResult.isPurge()) {
>     windowState.clear();
> }
> registerCleanupTimer(window);{code}
>  
> *Expected Result:*
> The cleanup timer should be registered for every window that's added to the 
> window state regardless of it emitting a result after it’s fired.
> *Actual Result:*
> The cleanup timer is not registered for a window when it does not emit a 
> result after it’s fired, causing the window state that is already created to 
> live on indefinitely.
> *Impact:*
> This issue led to a huge state memory leak in our applications and was very 
> challenging to identify.
>  
> *Fix:*
> There are two ways to fix this issue. I'm willing to create a PR with the fix 
> if approved.
> 1. Register the cleanup timer immediately after a window is added to the 
> state.
> {code:java}
> windowState.setCurrentNamespace(window);
> windowState.add(element.getValue());
> registerCleanupTimer(window);{code}
> 2. Emit the results when the contents are not null and remove the continue 
> statement.
> {code:java}
> if (triggerResult.isFire()) {
>     ACC contents = windowState.get();
>     if (contents != null) {
>         emitWindowContents(window, contents);
>     }
> }{code}
>  
>  
>  
>  
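For completeness, here is a minimal sketch (not code from the report) of the 
custom trigger and aggregate function described in the reproduction steps above: 
the trigger fires on every element, and the aggregate's getResult() always 
returns null, so windowState.get() is null after firing and the cleanup timer is 
skipped.
{code:java}
// Minimal repro sketch based on the steps above; not code from the report.
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

/** Fires for every element, as in step 1 of the reproduction. */
class FireEveryElementTrigger extends Trigger<Object, TimeWindow> {
    @Override
    public TriggerResult onElement(Object element, long ts, TimeWindow w, TriggerContext ctx) {
        return TriggerResult.FIRE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow w, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow w, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow w, TriggerContext ctx) {}
}

/** Never produces a result, as in step 2: getResult() is always null. */
class NeverEmittingAggregate implements AggregateFunction<String, Long, String> {
    @Override
    public Long createAccumulator() {
        return 0L;
    }

    @Override
    public Long add(String value, Long acc) {
        return acc + 1;
    }

    @Override
    public String getResult(Long acc) {
        return null; // window contents stay null, so the buggy branch hits `continue`
    }

    @Override
    public Long merge(Long a, Long b) {
        return a + b;
    }
}
{code}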



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33192] Fix Memory Leak in WindowOperator due to Improper Timer Cleanup [flink]

2024-06-08 Thread via GitHub


flinkbot commented on PR #24917:
URL: https://github.com/apache/flink/pull/24917#issuecomment-2155861840

   
   ## CI report:
   
   * 7d3bbbd9e083bca422d8ea0f1d8928f2fb9ffe53 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-33192] Fix Memory Leak in WindowOperator due to Improper Timer Cleanup [flink]

2024-06-08 Thread via GitHub


kartikeypant opened a new pull request, #24917:
URL: https://github.com/apache/flink/pull/24917

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org