[GitHub] [flink] TanYuxin-tyx commented on a diff in pull request #22330: [FLINK-31635][network] Support writing records to the new tiered store architecture

2023-05-07 Thread via GitHub


TanYuxin-tyx commented on code in PR #22330:
URL: https://github.com/apache/flink/pull/22330#discussion_r1187052528


##
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartitionID.java:
##
@@ -23,6 +23,9 @@
 import org.apache.flink.runtime.executiongraph.IntermediateResultPartition;
 import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
 
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;

Review Comment:
   OK, we can remove the dependencies in the follow-up ticket.






[GitHub] [flink] pgaref commented on a diff in pull request #22520: [FLINK-31996] Chaining operators with different max parallelism prevents rescaling

2023-05-07 Thread via GitHub


pgaref commented on code in PR #22520:
URL: https://github.com/apache/flink/pull/22520#discussion_r1187021736


##
flink-core/src/main/java/org/apache/flink/configuration/PipelineOptions.java:
##
@@ -243,13 +243,22 @@ public class PipelineOptions {
 .build());
 
 public static final ConfigOption<Boolean> OPERATOR_CHAINING =
-key("pipeline.operator-chaining")
+key("pipeline.operator-chaining.enabled")
 .booleanType()
 .defaultValue(true)
+.withDeprecatedKeys("pipeline.operator-chaining")
 .withDescription(
 "Operator chaining allows non-shuffle operations to be co-located in the same thread "
 + "fully avoiding serialization and de-serialization.");
 
+public static final ConfigOption<Boolean>
+OPERATOR_CHAINING_CHAIN_OPERATORS_WITH_DIFFERENT_MAX_PARALLELISM =
+key("pipeline.operator-chaining.chain-operators-with-different-max-parallelism")
+.booleanType()
+.defaultValue(true)
+.withDescription(
+"Operators with different max parallelism can be chained together.");

Review Comment:
   Shall we add a note here like: `Default behavior may prevent rescaling when 
the AdaptiveScheduler is used` ?
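
For anyone trying the option out, a minimal usage sketch (our own, not from the PR; the keys are taken from the diff above):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainingConfigExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Keep chaining enabled overall...
        conf.setString("pipeline.operator-chaining.enabled", "true");
        // ...but do not chain operators whose max parallelism differs, so the
        // AdaptiveScheduler can still rescale them independently.
        conf.setString(
                "pipeline.operator-chaining.chain-operators-with-different-max-parallelism",
                "false");
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        System.out.println(env.getConfig());
    }
}
```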






[jira] [Updated] (FLINK-32019) EARLIEST offset strategy for partitions discovered later based on FLIP-288

2023-05-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32019:
---
Labels: pull-request-available  (was: )

> EARLIEST offset strategy for partitions discovered later based on FLIP-288
> ---
>
> Key: FLINK-32019
> URL: https://issues.apache.org/jira/browse/FLINK-32019
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Affects Versions: kafka-3.0.0
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: kafka-4.0.0
>
>
> As described in 
> [FLIP-288|https://cwiki.apache.org/confluence/display/FLINK/FLIP-288%3A+Enable+Dynamic+Partition+Discovery+by+Default+in+Kafka+Source],
>  the strategy used for new partitions is the same as the initial offset 
> strategy, which is not reasonable.
> According to the semantics, if the startup strategy is "latest", the consumed 
> data should include all data from the moment of startup, which also includes 
> all messages from newly created partitions. However, the "latest" strategy 
> may currently be used for new partitions, leading to the loss of some data: a 
> new partition may be created and only discovered by the Kafka source several 
> minutes later, and messages produced into the partition within that gap may 
> be dropped if, for example, "latest" is used as the initial offset strategy. 
> If the data from new partitions is not read, it does not meet the user's 
> expectations.
> For other problems, see the final section of 
> [FLIP-288|https://cwiki.apache.org/confluence/display/FLINK/FLIP-288%3A+Enable+Dynamic+Partition+Discovery+by+Default+in+Kafka+Source]:
>  {{User specifies OffsetsInitializer for new partition}}.
> Therefore, it’s better to provide an *EARLIEST* strategy for partitions 
> discovered later.
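
For context, a minimal sketch of how the initial offset strategy is configured today with the KafkaSource builder (broker address, topic, and group id are placeholder values); under this proposal, partitions discovered after startup would start from EARLIEST regardless of this setting:

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class NewPartitionOffsetsExample {
    public static void main(String[] args) {
        KafkaSource<String> source =
                KafkaSource.<String>builder()
                        .setBootstrapServers("broker:9092")
                        .setTopics("input-topic")
                        .setGroupId("my-group")
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        // Initial offset strategy; today it is also applied to
                        // partitions discovered later, which this ticket changes.
                        .setStartingOffsets(OffsetsInitializer.latest())
                        // Enable periodic partition discovery (every 60 seconds).
                        .setProperty("partition.discovery.interval.ms", "60000")
                        .build();
        System.out.println(source);
    }
}
{code}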





[jira] [Updated] (FLINK-32021) Improve the Javadoc for SpecifiedOffsetsInitializer and TimestampOffsetsInitializer.

2023-05-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32021:
---
Labels: pull-request-available  (was: )

> Improve the Javadoc for SpecifiedOffsetsInitializer and 
> TimestampOffsetsInitializer.
> 
>
> Key: FLINK-32021
> URL: https://issues.apache.org/jira/browse/FLINK-32021
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Affects Versions: kafka-3.0.0
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: kafka-4.0.0
>
>
> As described in 
> [FLIP-288|https://cwiki.apache.org/confluence/display/FLINK/FLIP-288%3A+Enable+Dynamic+Partition+Discovery+by+Default+in+Kafka+Source],
>  the current Javadoc does not fully explain the behavior of the 
> OffsetsInitializers: when a partition does not meet the condition, a 
> different offset strategy is used as a fallback. This may lead to 
> misunderstandings in the design and usage.
>  
> Add to SpecifiedOffsetsInitializer: "Use the specified offset for specified 
> partitions, while using the committed offset or EARLIEST for unspecified 
> partitions. A specified partition offset should be less than the latest 
> offset, otherwise it will start from the earliest."
>  
> Add to TimestampOffsetsInitializer: "Initialize the offsets based on a 
> timestamp. If messages meeting the timestamp requirement have not been 
> produced to Kafka yet, just use the latest offset."





[GitHub] [flink-connector-kafka] loserwang1024 opened a new pull request, #29: [FLINK-32021][Connectors/Kafka] Improve the Javadoc for SpecifiedOffsetsInitializer and TimestampOffsetsInitializer

2023-05-07 Thread via GitHub


loserwang1024 opened a new pull request, #29:
URL: https://github.com/apache/flink-connector-kafka/pull/29

   ### What is the purpose of the change
   
   As described in 
[FLIP-288](https://cwiki.apache.org/confluence/display/FLINK/FLIP-288%3A+Enable+Dynamic+Partition+Discovery+by+Default+in+Kafka+Source),
 the current JavaDoc does not fully explain the behavior of the 
OffsetsInitializers: when a partition does not meet the condition, a different 
offset strategy is used as a fallback. 
   
   This may lead to misunderstandings in the design and usage.
   
   ### Brief change log
   
   Add to SpecifiedOffsetsInitializer: "Use the specified offset for specified 
partitions, while using the committed offset or EARLIEST for unspecified 
partitions. A specified partition offset should be less than the latest offset, 
otherwise it will start from the earliest."
   
   Add to TimestampOffsetsInitializer: "Initialize the offsets based on a 
timestamp. If messages meeting the timestamp requirement have not been produced 
to Kafka yet, just use the latest offset."
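
For readers of the new Javadoc, a small usage sketch of the two initializers and the documented fallback behavior (topic, partition, and timestamp are made-up values):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.common.TopicPartition;

public class OffsetsInitializerExamples {
    public static void main(String[] args) {
        // Specified offsets: partitions missing from the map fall back to the
        // committed offset (or EARLIEST); an offset beyond the latest offset
        // also falls back to the earliest.
        Map<TopicPartition, Long> offsets = new HashMap<>();
        offsets.put(new TopicPartition("input-topic", 0), 42L);
        OffsetsInitializer specified = OffsetsInitializer.offsets(offsets);

        // Timestamp-based: if no message at or after the timestamp has been
        // produced yet, the latest offset is used instead.
        OffsetsInitializer byTimestamp = OffsetsInitializer.timestamp(1683000000000L);

        System.out.println(specified + " / " + byTimestamp);
    }
}
```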
   
   ### Verifying this change
   
   No code changes; this is a Javadoc-only change.





[GitHub] [flink-connector-kafka] loserwang1024 closed pull request #28: EARLIEST offset strategy for partitions discovered later based on FLIP-288

2023-05-07 Thread via GitHub


loserwang1024 closed pull request #28: EARLIEST offset strategy for partitions 
discovered later based on FLIP-288
URL: https://github.com/apache/flink-connector-kafka/pull/28





[GitHub] [flink-connector-kafka] loserwang1024 opened a new pull request, #28: EARLIEST offset strategy for partitions discovered later based on FLIP-288

2023-05-07 Thread via GitHub


loserwang1024 opened a new pull request, #28:
URL: https://github.com/apache/flink-connector-kafka/pull/28

   ### What is the purpose of the change
   
   As described in 
[FLIP-288](https://cwiki.apache.org/confluence/display/FLINK/FLIP-288%3A+Enable+Dynamic+Partition+Discovery+by+Default+in+Kafka+Source),
 the strategy used for new partitions is the same as the initial offset 
strategy, which is not reasonable.
   
   According to the semantics, if the startup strategy is "latest", the 
consumed data should include all data from the moment of startup, which also 
includes all messages from newly created partitions. However, the "latest" 
strategy may currently be used for new partitions, leading to the loss of some 
data: a new partition may be created and only discovered by the Kafka source 
several minutes later, and messages produced into the partition within that gap 
may be dropped if, for example, "latest" is used as the initial offset 
strategy. If the data from new partitions is not read, it does not meet the 
user's expectations.
   
   For other problems, see the final section: `User specifies 
OffsetsInitializer for new partition`.
   
   Therefore, it’s better to provide an **EARLIEST** strategy for partitions 
discovered later.
   
   ### Brief change log
   
   1. Expand `KafkaSourceEnumState` with `TopicPartitionWithAssignStatus` (see 
the sketch after this list) to distinguish between initial partitions and newly 
discovered partitions. `TopicPartitionWithAssignStatus` is also better for 
future expansion, as new statuses can be added without changing the state 
layout.
   2. Add a `newDiscoveryOffsetsInitializer` (EARLIEST) to provide offsets for 
newly discovered partitions. 
   3. Modify `KafkaSourceEnumStateSerializer` to handle the expanded 
`KafkaSourceEnumState`.
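
A minimal sketch of what the wrapper from item 1 might look like (only the class name comes from this PR; the fields and status values are assumptions):

```java
import org.apache.kafka.common.TopicPartition;

public class TopicPartitionWithAssignStatus {

    /** Lifecycle of a partition inside the enumerator state; new statuses can
     *  be added later without changing the serialized layout. */
    public enum AssignStatus {
        INITIAL,          // discovered at startup, uses the configured initializer
        NEWLY_DISCOVERED, // discovered later, should start from EARLIEST
        ASSIGNED          // already handed to a reader
    }

    private final TopicPartition topicPartition;
    private AssignStatus status;

    public TopicPartitionWithAssignStatus(TopicPartition topicPartition, AssignStatus status) {
        this.topicPartition = topicPartition;
        this.status = status;
    }

    public TopicPartition topicPartition() {
        return topicPartition;
    }

    public AssignStatus status() {
        return status;
    }

    public void markAssigned() {
        status = AssignStatus.ASSIGNED;
    }
}
```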
   
   ### Verifying this change
   
   1. Test the backward compatibility of state when deserializing in 
`KafkaSourceEnumStateSerializerTest`.
   2. Expand the `KafkaEnumeratorTest#testSnapshotState` method to test the 
snapshot state in more scenarios:
   1. Before the first discovery, the state should be empty
   2. First partition discovery after start, but no assignments to readers yet
   3. Assign a subset of the partitions to readers
   4. Assign all partitions to readers
   3. Expand the `KafkaEnumeratorTest#testDiscoverPartitionsPeriodically` method 
to test whether new partitions use the EARLIEST offset while initial partitions 
use the specified offset strategy.





[GitHub] [flink-connector-pulsar] tisonkun merged pull request #44: [FLINK-32003] Upgrade pulsar-client version to work with OAuth2

2023-05-07 Thread via GitHub


tisonkun merged PR #44:
URL: https://github.com/apache/flink-connector-pulsar/pull/44





[GitHub] [flink] zhuzhurk commented on a diff in pull request #22506: [FLINK-31890][runtime] Introduce JobMaster per-task failure enrichment/labeling

2023-05-07 Thread via GitHub


zhuzhurk commented on code in PR #22506:
URL: https://github.com/apache/flink/pull/22506#discussion_r1186678825


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java:
##
@@ -473,26 +500,50 @@ public CompletableFuture<Acknowledge> cancel(Time timeout) {
 @Override
 public CompletableFuture<Acknowledge> updateTaskExecutionState(
 final TaskExecutionState taskExecutionState) {
-FlinkException taskExecutionException;
+checkNotNull(taskExecutionState, "taskExecutionState");
+// Use the main/caller thread for all updates to make sure they are processed in order.
+// (MainThreadExecutor i.e., the akka thread pool does not guarantee that)
+// Only detach for a FAILED state update that is terminal and may perform io heavy labeling.
+if (ExecutionState.FAILED.equals(taskExecutionState.getExecutionState())) {
+return labelFailure(taskExecutionState)
+.thenApplyAsync(
+taskStateWithLabels -> {
+try {
+return doUpdateTaskExecutionState(taskStateWithLabels);
+} catch (FlinkException e) {
+throw new CompletionException(e);
+}
+},
+getMainThreadExecutor());
+}
 try {
-checkNotNull(taskExecutionState, "taskExecutionState");
+return CompletableFuture.completedFuture(
+doUpdateTaskExecutionState(taskExecutionState));
+} catch (FlinkException e) {
+return FutureUtils.completedExceptionally(e);
+}
+}
 
+private Acknowledge doUpdateTaskExecutionState(final TaskExecutionState taskExecutionState)
+throws FlinkException {
+@Nullable FlinkException taskExecutionException;
+try {
 if (schedulerNG.updateTaskExecutionState(taskExecutionState)) {

Review Comment:
   > There has been a consensus that people want to use this for implementing 
custom restart strategies 
   
   If we expect the failure enricher to be a crucial part, I would doubt 
whether we should make it an asynchronous process. Before the enrichment is 
done, a failed job cannot recover, which may last for minutes. 
   
   Also, considering a future custom restart strategy, one open question is 
whether it should act asynchronously, or synchronously to simplify the 
implementation. If we take the former option, a simpler implementation ("not 
treating failure labeling as a critical process at the moment") may be 
friendlier for future development (less likely to be changed over and over). If 
we take the latter option, should the failure enrichment also be synchronous?
   
   Therefore I hesitate to complicate the implementation before the FLIP for 
custom restart strategies is finalized.
   The concern was raised but not answered, because custom restart strategies 
were excluded from the FLIP discussion.
   https://lists.apache.org/thread/zoyltjb3k4v9zsbczm8yb4zrw659540v
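
To illustrate the trade-off (a hypothetical sketch, not code from this PR): if the enrichment stays asynchronous, its latency could at least be bounded, so that a slow enricher cannot delay recovery indefinitely:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

/** Hypothetical illustration only: bounding an asynchronous enrichment step
 *  so a slow enricher cannot block the caller for minutes. */
public class BoundedEnrichmentSketch {
    public static void main(String[] args) {
        CompletableFuture<String> enriched =
                CompletableFuture.supplyAsync(BoundedEnrichmentSketch::slowLabeling)
                        .orTimeout(5, TimeUnit.SECONDS)    // cap enrichment latency
                        .exceptionally(t -> "unlabeled");  // fall back on timeout/failure
        System.out.println(enriched.join());
    }

    private static String slowLabeling() {
        return "labeled"; // stands in for an io-heavy failure enricher
    }
}
```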






[jira] [Commented] (FLINK-32027) Batch jobs could hang at shuffle phase when max parallelism is really large

2023-05-07 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720378#comment-17720378
 ] 

Yun Tang commented on FLINK-32027:
--

[~Weijie Guo] [~zhuzh] Please take a look at this issue.

> Batch jobs could hang at shuffle phase when max parallelism is really large
> ---
>
> Key: FLINK-32027
> URL: https://issues.apache.org/jira/browse/FLINK-32027
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.17.0
>Reporter: Yun Tang
>Priority: Critical
> Fix For: 1.17.1
>
> Attachments: image-2023-05-08-11-12-58-361.png
>
>
> In batch execution mode with the adaptive batch scheduler, if we set the max 
> parallelism as large as 32768 (pipeline.max-parallelism), the job can hang at 
> the shuffle phase.
> It hangs for a long time and shows "No bytes sent":
>  !image-2023-05-08-11-12-58-361.png! 
> After some debugging, we can see that the downstream operator did not receive 
> the end-of-partition event.





[jira] [Created] (FLINK-32027) Batch jobs could hang at shuffle phase when max parallelism is really large

2023-05-07 Thread Yun Tang (Jira)
Yun Tang created FLINK-32027:


 Summary: Batch jobs could hang at shuffle phase when max 
parallelism is really large
 Key: FLINK-32027
 URL: https://issues.apache.org/jira/browse/FLINK-32027
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.17.0
Reporter: Yun Tang
 Fix For: 1.17.1
 Attachments: image-2023-05-08-11-12-58-361.png

In batch execution mode with the adaptive batch scheduler, if we set the max 
parallelism as large as 32768 (pipeline.max-parallelism), the job can hang at 
the shuffle phase.

It hangs for a long time and shows "No bytes sent":
 !image-2023-05-08-11-12-58-361.png! 

After some debugging, we can see that the downstream operator did not receive 
the end-of-partition event.
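
For reference, a minimal sketch of setting the option mentioned above (our sketch, not the reporter's job):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.PipelineOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MaxParallelismExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The value from this report; equivalent to pipeline.max-parallelism: 32768.
        conf.setInteger(PipelineOptions.MAX_PARALLELISM, 32768);
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        System.out.println(env.getConfig());
    }
}
{code}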






[jira] [Commented] (FLINK-32023) execution.buffer-timeout cannot be set to -1 ms

2023-05-07 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720377#comment-17720377
 ] 

Rui Fan commented on FLINK-32023:
-

Thanks for [~wanglijie]'s feedback.

Sorry, after double checking, I see that `java.time.Duration` supports negative 
values, but this isn't supported on the Flink side. So it's reasonable to make 
{{TimeUtils#parseDuration}} support parsing negative durations.
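
A minimal sketch of what such support could look like (our assumption, not the actual patch):

{code:java}
import java.time.Duration;

import org.apache.flink.util.TimeUtils;

public class NegativeDurationSketch {
    // Strip a leading '-' before delegating to the existing positive-only
    // parser, then negate the result.
    static Duration parseDurationAllowingNegative(String text) {
        String trimmed = text.trim();
        if (trimmed.startsWith("-")) {
            return TimeUtils.parseDuration(trimmed.substring(1)).negated();
        }
        return TimeUtils.parseDuration(trimmed);
    }

    public static void main(String[] args) {
        System.out.println(parseDurationAllowingNegative("-1 ms")); // PT-0.001S
    }
}
{code}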

 

> execution.buffer-timeout cannot be set to -1 ms
> ---
>
> Key: FLINK-32023
> URL: https://issues.apache.org/jira/browse/FLINK-32023
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Liu
>Priority: Major
>
> The description for execution.buffer-timeout is as follows:
> {code:java}
> public static final ConfigOption<Duration> BUFFER_TIMEOUT =
>         ConfigOptions.key("execution.buffer-timeout")
>                 .durationType()
>                 .defaultValue(Duration.ofMillis(100))
>                 .withDescription(
>                         Description.builder()
>                                 .text(
>                                         "The maximum time frequency (milliseconds) for the flushing of the output buffers. By default "
>                                                 + "the output buffers flush frequently to provide low latency and to aid smooth developer "
>                                                 + "experience. Setting the parameter can result in three logical modes:")
>                                 .list(
>                                         text("A positive value triggers flushing periodically by that interval"),
>                                         text(FLUSH_AFTER_EVERY_RECORD
>                                                 + " triggers flushing after every record thus minimizing latency"),
>                                         text(DISABLED_NETWORK_BUFFER_TIMEOUT
>                                                 + " ms triggers flushing only when the output buffer is full thus maximizing "
>                                                 + "throughput"))
>                                 .build()); {code}
> When we set execution.buffer-timeout to -1 ms, the following error is 
> reported:
> {code:java}
> Caused by: java.lang.IllegalArgumentException: Could not parse value '-1 ms' for key 'execution.buffer-timeout'.
>     at org.apache.flink.configuration.Configuration.getOptional(Configuration.java:856)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.configure(StreamExecutionEnvironment.java:822)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.<init>(StreamExecutionEnvironment.java:224)
>     at org.apache.flink.streaming.api.environment.StreamContextEnvironment.<init>(StreamContextEnvironment.java:51)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createStreamExecutionEnvironment(StreamExecutionEnvironment.java:1996)
>     at java.util.Optional.orElseGet(Optional.java:267)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(StreamExecutionEnvironment.java:1986)
>     at com.kuaishou.flink.examples.api.WordCount.main(WordCount.java:27)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:327)
>     ... 11 more
> Caused by: java.lang.NumberFormatException: text does not start with a number
>     at org.apache.flink.util.TimeUtils.parseDuration(TimeUtils.java:78)
>     at org.apache.flink.configuration.Configuration.convertToDuration(Configuration.java:1058)
>     at org.apache.flink.configuration.Configuration.convertValue(Configuration.java:996)
>     at org.apache.flink.configuration.Configuration.lambda$getOptional$2(Configuration.java:853)
>     at java.util.Optional.map(Optional.java:215)
>     at org.apache.flink.configuration.Configuration.getOptional(Configuration.java:853)
>     ... 23 more {code}
> The reason is that the value for a Duration cannot be negative. We should 
> change this behavior or support triggering flushing only when the output 
> buffer is full.





[jira] [Updated] (FLINK-32023) execution.buffer-timeout cannot be set to -1 ms

2023-05-07 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-32023:

Affects Version/s: 1.16.1
   1.17.0
   1.18.0
   Issue Type: Bug  (was: Improvement)

> execution.buffer-timeout cannot be set to -1 ms
> ---
>
> Key: FLINK-32023
> URL: https://issues.apache.org/jira/browse/FLINK-32023
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Liu
>Priority: Major
>
> The description for execution.buffer-timeout is as follows:
> {code:java}
> public static final ConfigOption<Duration> BUFFER_TIMEOUT =
>         ConfigOptions.key("execution.buffer-timeout")
>                 .durationType()
>                 .defaultValue(Duration.ofMillis(100))
>                 .withDescription(
>                         Description.builder()
>                                 .text(
>                                         "The maximum time frequency (milliseconds) for the flushing of the output buffers. By default "
>                                                 + "the output buffers flush frequently to provide low latency and to aid smooth developer "
>                                                 + "experience. Setting the parameter can result in three logical modes:")
>                                 .list(
>                                         text("A positive value triggers flushing periodically by that interval"),
>                                         text(FLUSH_AFTER_EVERY_RECORD
>                                                 + " triggers flushing after every record thus minimizing latency"),
>                                         text(DISABLED_NETWORK_BUFFER_TIMEOUT
>                                                 + " ms triggers flushing only when the output buffer is full thus maximizing "
>                                                 + "throughput"))
>                                 .build()); {code}
> When we set execution.buffer-timeout to -1 ms, the following error is 
> reported:
> {code:java}
> Caused by: java.lang.IllegalArgumentException: Could not parse value '-1 ms' for key 'execution.buffer-timeout'.
>     at org.apache.flink.configuration.Configuration.getOptional(Configuration.java:856)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.configure(StreamExecutionEnvironment.java:822)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.<init>(StreamExecutionEnvironment.java:224)
>     at org.apache.flink.streaming.api.environment.StreamContextEnvironment.<init>(StreamContextEnvironment.java:51)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createStreamExecutionEnvironment(StreamExecutionEnvironment.java:1996)
>     at java.util.Optional.orElseGet(Optional.java:267)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(StreamExecutionEnvironment.java:1986)
>     at com.kuaishou.flink.examples.api.WordCount.main(WordCount.java:27)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:327)
>     ... 11 more
> Caused by: java.lang.NumberFormatException: text does not start with a number
>     at org.apache.flink.util.TimeUtils.parseDuration(TimeUtils.java:78)
>     at org.apache.flink.configuration.Configuration.convertToDuration(Configuration.java:1058)
>     at org.apache.flink.configuration.Configuration.convertValue(Configuration.java:996)
>     at org.apache.flink.configuration.Configuration.lambda$getOptional$2(Configuration.java:853)
>     at java.util.Optional.map(Optional.java:215)
>     at org.apache.flink.configuration.Configuration.getOptional(Configuration.java:853)
>     ... 23 more {code}
> The reason is that the value for a Duration cannot be negative. We should 
> change this behavior or support triggering flushing only when the output 
> buffer is full.





[jira] [Commented] (FLINK-32023) execution.buffer-timeout cannot be set to -1 ms

2023-05-07 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720375#comment-17720375
 ] 

Rui Fan commented on FLINK-32023:
-

Hi [~Jiangang], thanks for your report. It's really a bug. Would you like to 
fix it?

From the perspective of Duration, it is reasonable that the value for a 
Duration cannot be negative. So I would prefer to choose a special time value 
as the flag for disabling the buffer timeout mechanism. WDYT?

cc [~dwysakowicz], maybe you are also interested in this bug. :)

> execution.buffer-timeout cannot be set to -1 ms
> ---
>
> Key: FLINK-32023
> URL: https://issues.apache.org/jira/browse/FLINK-32023
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Liu
>Priority: Major
>
> The description for execution.buffer-timeout is as follows:
> {code:java}
> public static final ConfigOption<Duration> BUFFER_TIMEOUT =
>         ConfigOptions.key("execution.buffer-timeout")
>                 .durationType()
>                 .defaultValue(Duration.ofMillis(100))
>                 .withDescription(
>                         Description.builder()
>                                 .text(
>                                         "The maximum time frequency (milliseconds) for the flushing of the output buffers. By default "
>                                                 + "the output buffers flush frequently to provide low latency and to aid smooth developer "
>                                                 + "experience. Setting the parameter can result in three logical modes:")
>                                 .list(
>                                         text("A positive value triggers flushing periodically by that interval"),
>                                         text(FLUSH_AFTER_EVERY_RECORD
>                                                 + " triggers flushing after every record thus minimizing latency"),
>                                         text(DISABLED_NETWORK_BUFFER_TIMEOUT
>                                                 + " ms triggers flushing only when the output buffer is full thus maximizing "
>                                                 + "throughput"))
>                                 .build()); {code}
> When we set execution.buffer-timeout to -1 ms, the following error is 
> reported:
> {code:java}
> Caused by: java.lang.IllegalArgumentException: Could not parse value '-1 ms' for key 'execution.buffer-timeout'.
>     at org.apache.flink.configuration.Configuration.getOptional(Configuration.java:856)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.configure(StreamExecutionEnvironment.java:822)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.<init>(StreamExecutionEnvironment.java:224)
>     at org.apache.flink.streaming.api.environment.StreamContextEnvironment.<init>(StreamContextEnvironment.java:51)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createStreamExecutionEnvironment(StreamExecutionEnvironment.java:1996)
>     at java.util.Optional.orElseGet(Optional.java:267)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(StreamExecutionEnvironment.java:1986)
>     at com.kuaishou.flink.examples.api.WordCount.main(WordCount.java:27)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:327)
>     ... 11 more
> Caused by: java.lang.NumberFormatException: text does not start with a number
>     at org.apache.flink.util.TimeUtils.parseDuration(TimeUtils.java:78)
>     at org.apache.flink.configuration.Configuration.convertToDuration(Configuration.java:1058)
>     at org.apache.flink.configuration.Configuration.convertValue(Configuration.java:996)
>     at org.apache.flink.configuration.Configuration.lambda$getOptional$2(Configuration.java:853)
>     at java.util.Optional.map(Optional.java:215)
>     at org.apache.flink.configuration.Configuration.getOptional(Configuration.java:853)
>     ... 23 more {code}
> The reason is that the value for a Duration cannot be negative. We should 
> change this behavior or support triggering flushing only when the output 
> buffer is full.




[jira] [Updated] (FLINK-32026) jdbc-3.1.0-1.16 left join conversion error

2023-05-07 Thread xueyongyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xueyongyang updated FLINK-32026:

Issue Type: Bug  (was: Improvement)

> jdbc-3.1.0-1.16 left join conversion error
> --
>
> Key: FLINK-32026
> URL: https://issues.apache.org/jira/browse/FLINK-32026
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: jdbc-3.1.0
> Environment: flink1.16.1
> mysql8.0.33
> jdbc-3.1.0-1.16 
>Reporter: xueyongyang
>Priority: Major
>
> I have a SQL statement:
> insert into test_flink_res2(id,name,address) 
> select a.id,a.name,a.address from test_flink_res1 a left join test_flink_res2 
> b on a.id=b.id where a.name='abc0.11317691217472489' and b.id is null;
> *Why does Flink SQL convert this statement into the following statement?*
> SELECT `address` FROM `test_flink_res1` WHERE ((`name` = 
> 'abc0.11317691217472489')) AND ((`id` IS NULL))
> *As a result, there is no data in test_flink_res2. Why?*





[jira] [Created] (FLINK-32026) jdbc-3.1.0-1.16 left join conversion error

2023-05-07 Thread xueyongyang (Jira)
xueyongyang created FLINK-32026:
---

 Summary: jdbc-3.1.0-1.16 left join conversion error
 Key: FLINK-32026
 URL: https://issues.apache.org/jira/browse/FLINK-32026
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Affects Versions: jdbc-3.1.0
 Environment: flink1.16.1

mysql8.0.33

jdbc-3.1.0-1.16 
Reporter: xueyongyang


I have a SQL statement:

insert into test_flink_res2(id,name,address) 
select a.id,a.name,a.address from test_flink_res1 a left join test_flink_res2 b 
on a.id=b.id where a.name='abc0.11317691217472489' and b.id is null;

*Why does Flink SQL convert this statement into the following statement?*

SELECT `address` FROM `test_flink_res1` WHERE ((`name` = 
'abc0.11317691217472489')) AND ((`id` IS NULL))

*As a result, there is no data in test_flink_res2. Why?*





[jira] [Commented] (FLINK-32023) execution.buffer-timeout cannot be set to -1 ms

2023-05-07 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720374#comment-17720374
 ] 

Lijie Wang commented on FLINK-32023:


[~Jiangang] Thanks for reporting this. Would you like to prepare a fix? I 
think we should make {{TimeUtils#parseDuration}} support parsing negative 
durations.

> execution.buffer-timeout cannot be set to -1 ms
> ---
>
> Key: FLINK-32023
> URL: https://issues.apache.org/jira/browse/FLINK-32023
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Liu
>Priority: Major
>
> The description for execution.buffer-timeout is as follows:
> {code:java}
> public static final ConfigOption<Duration> BUFFER_TIMEOUT =
>         ConfigOptions.key("execution.buffer-timeout")
>                 .durationType()
>                 .defaultValue(Duration.ofMillis(100))
>                 .withDescription(
>                         Description.builder()
>                                 .text(
>                                         "The maximum time frequency (milliseconds) for the flushing of the output buffers. By default "
>                                                 + "the output buffers flush frequently to provide low latency and to aid smooth developer "
>                                                 + "experience. Setting the parameter can result in three logical modes:")
>                                 .list(
>                                         text("A positive value triggers flushing periodically by that interval"),
>                                         text(FLUSH_AFTER_EVERY_RECORD
>                                                 + " triggers flushing after every record thus minimizing latency"),
>                                         text(DISABLED_NETWORK_BUFFER_TIMEOUT
>                                                 + " ms triggers flushing only when the output buffer is full thus maximizing "
>                                                 + "throughput"))
>                                 .build()); {code}
> When we set execution.buffer-timeout to -1 ms, the following error is 
> reported:
> {code:java}
> Caused by: java.lang.IllegalArgumentException: Could not parse value '-1 ms' for key 'execution.buffer-timeout'.
>     at org.apache.flink.configuration.Configuration.getOptional(Configuration.java:856)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.configure(StreamExecutionEnvironment.java:822)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.<init>(StreamExecutionEnvironment.java:224)
>     at org.apache.flink.streaming.api.environment.StreamContextEnvironment.<init>(StreamContextEnvironment.java:51)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createStreamExecutionEnvironment(StreamExecutionEnvironment.java:1996)
>     at java.util.Optional.orElseGet(Optional.java:267)
>     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(StreamExecutionEnvironment.java:1986)
>     at com.kuaishou.flink.examples.api.WordCount.main(WordCount.java:27)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:327)
>     ... 11 more
> Caused by: java.lang.NumberFormatException: text does not start with a number
>     at org.apache.flink.util.TimeUtils.parseDuration(TimeUtils.java:78)
>     at org.apache.flink.configuration.Configuration.convertToDuration(Configuration.java:1058)
>     at org.apache.flink.configuration.Configuration.convertValue(Configuration.java:996)
>     at org.apache.flink.configuration.Configuration.lambda$getOptional$2(Configuration.java:853)
>     at java.util.Optional.map(Optional.java:215)
>     at org.apache.flink.configuration.Configuration.getOptional(Configuration.java:853)
>     ... 23 more {code}
> The reason is that the value for a Duration cannot be negative. We should 
> change this behavior or support triggering flushing only when the output 
> buffer is full.





[GitHub] [flink] TanYuxin-tyx commented on pull request #22427: [FLINK-30815][tests] Migrate BatchAbstractTestBase and BatchTestBase to junit5

2023-05-07 Thread via GitHub


TanYuxin-tyx commented on PR #22427:
URL: https://github.com/apache/flink/pull/22427#issuecomment-1537644742

   @godfreyhe @JunRuiLee Thanks for helping review the change.





[GitHub] [flink-kubernetes-operator] X-czh commented on a diff in pull request #586: [FLINK-32002] Adjust autoscaler defaults for release

2023-05-07 Thread via GitHub


X-czh commented on code in PR #586:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/586#discussion_r1186953244


##
flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/config/AutoScalerOptions.java:
##
@@ -87,28 +88,28 @@ private static ConfigOptions.OptionBuilder autoScalerConfig(String key) {
 public static final ConfigOption<Integer> VERTEX_MAX_PARALLELISM =
 autoScalerConfig("vertex.max-parallelism")
 .intType()
-.defaultValue(Integer.MAX_VALUE)
+.defaultValue(200)
 .withDescription(
 "The maximum parallelism the autoscaler can use. Note that this limit will be ignored if it is higher than the max parallelism configured in the Flink config or directly on each operator.");
 
 public static final ConfigOption<Double> MAX_SCALE_DOWN_FACTOR =
 autoScalerConfig("scale-down.max-factor")
 .doubleType()
-.defaultValue(0.6)
+.defaultValue(1.0)

Review Comment:
   Curious why we chose to loosen it to 1.0. We found that TPR tends to be 
overestimated a lot, leading to overly aggressive downscaling, when:
   
   - the pipeline is underloaded and far from optimal.
   - the average CPU allocated per slot is < 1.
   
   The reason is that linear scaling with busy-time metrics assumes no resource 
competition between tasks as we push up the load. However, when the average CPU 
allocated per slot is small, resource competition between tasks becomes more 
and more severe as we push up the overall load of the pipeline.
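
For reference, a rough sketch of the linear-scaling assumption being questioned here (our notation, simplified; not taken from the PR): the true processing rate (TPR) is extrapolated from busy time, and the new parallelism is chosen proportionally,

$$\mathrm{TPR} \approx \frac{\text{observed processing rate}}{\text{busy time ratio}}, \qquad P_{\mathrm{new}} \approx \left\lceil P_{\mathrm{old}} \cdot \frac{\text{target rate}}{\mathrm{TPR}} \right\rceil,$$

which only holds if each task keeps the same per-slot throughput as the load rises, i.e. exactly the no-resource-competition assumption described above.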






[GitHub] [flink-kubernetes-operator] X-czh commented on a diff in pull request #586: [FLINK-32002] Adjust autoscaler defaults for release

2023-05-07 Thread via GitHub


X-czh commented on code in PR #586:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/586#discussion_r1186950803


##
flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/config/AutoScalerOptions.java:
##
@@ -68,15 +68,16 @@ private static ConfigOptions.OptionBuilder autoScalerConfig(String key) {
 public static final ConfigOption<Double> TARGET_UTILIZATION_BOUNDARY =
 autoScalerConfig("target.utilization.boundary")
 .doubleType()
-.defaultValue(0.1)
+.defaultValue(0.4)

Review Comment:
   This makes the default upper boundary 1.0; will this block all upscaling?






[GitHub] [flink-kubernetes-operator] X-czh commented on a diff in pull request #586: [FLINK-32002] Adjust autoscaler defaults for release

2023-05-07 Thread via GitHub


X-czh commented on code in PR #586:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/586#discussion_r1186950803


##
flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/config/AutoScalerOptions.java:
##
@@ -68,15 +68,16 @@ private static ConfigOptions.OptionBuilder autoScalerConfig(String key) {
 public static final ConfigOption<Double> TARGET_UTILIZATION_BOUNDARY =
 autoScalerConfig("target.utilization.boundary")
 .doubleType()
-.defaultValue(0.1)
+.defaultValue(0.4)

Review Comment:
   This makes the upper boundary 1.0; will this block all upscaling?






[jira] [Commented] (FLINK-31997) Update to Fabric8 6.5.1+ in flink-kubernetes

2023-05-07 Thread Thomas Weise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720364#comment-17720364
 ] 

Thomas Weise commented on FLINK-31997:
--

[~gyfora] would it be possible to also shade the fabric8 dependency that goes 
into flink-kubernetes.jar? Otherwise there is the possibility of running into 
class path conflicts when the application also has a fabric8 dependency as I 
have seen in at least one case.

> Update to Fabric8 6.5.1+ in flink-kubernetes
> 
>
> Key: FLINK-31997
> URL: https://issues.apache.org/jira/browse/FLINK-31997
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>
> We should update the fabric8 version in flink-kubernetes to at least 6.5.1. 
> Flink currently uses a very old fabric8 version. The fabric8 library 
> dependencies have since been revised and greatly improved to make them more 
> modular and to allow eliminating security vulnerabilities more easily, like: 
> https://issues.apache.org/jira/browse/FLINK-31815
> The newer versions, especially 6.5.1+, also add some improvements and 
> stability fixes for watches and other parts.





[jira] [Resolved] (FLINK-32025) Make job cancellation button on UI configurable

2023-05-07 Thread Ted Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved FLINK-32025.

Resolution: Duplicate

> Make job cancellation button on UI configurable
> ---
>
> Key: FLINK-32025
> URL: https://issues.apache.org/jira/browse/FLINK-32025
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Major
>
> On the Flink job UI, there is a `Cancel Job` button.
> When the job UI is shown to users, it is desirable to hide the button so that 
> a normal user doesn't mistakenly cancel a long-running Flink job.
> This issue adds a configuration option for hiding the `Cancel Job` button.





[jira] [Commented] (FLINK-32025) Make job cancellation button on UI configurable

2023-05-07 Thread Echo Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720362#comment-17720362
 ] 

Echo Lee commented on FLINK-32025:
--

It is already possible to hide the `Cancel Job` button by setting 
*web.cancel.enable=false*.
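
For reference, a minimal sketch of setting the same option programmatically ({{WebOptions#CANCEL_ENABLE}} is the option behind the *web.cancel.enable* key):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.WebOptions;

public class HideCancelButton {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Hides the "Cancel Job" button in the web UI (defaults to true).
        conf.setBoolean(WebOptions.CANCEL_ENABLE, false);
        System.out.println(conf.getBoolean(WebOptions.CANCEL_ENABLE));
    }
}
{code}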

> Make job cancellation button on UI configurable
> ---
>
> Key: FLINK-32025
> URL: https://issues.apache.org/jira/browse/FLINK-32025
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Major
>
> On the Flink job UI, there is a `Cancel Job` button.
> When the job UI is shown to users, it is desirable to hide the button so that 
> a normal user doesn't mistakenly cancel a long-running Flink job.
> This issue adds a configuration option for hiding the `Cancel Job` button.





[jira] [Created] (FLINK-32025) Make job cancellation button on UI configurable

2023-05-07 Thread Ted Yu (Jira)
Ted Yu created FLINK-32025:
--

 Summary: Make job cancellation button on UI configurable
 Key: FLINK-32025
 URL: https://issues.apache.org/jira/browse/FLINK-32025
 Project: Flink
  Issue Type: Improvement
Reporter: Ted Yu


On the Flink job UI, there is a `Cancel Job` button.

When the job UI is shown to users, it is desirable to hide the button so that a 
normal user doesn't mistakenly cancel a long-running Flink job.

This issue adds a configuration option for hiding the `Cancel Job` button.





[jira] [Commented] (FLINK-29847) Store/cache JarUploadResponseBody

2023-05-07 Thread xiaochen zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720328#comment-17720328
 ] 

xiaochen zhou commented on FLINK-29847:
---

Hi, I would like to give this a try. Can I take this ticket? I will try my 
best to complete it.
 

> Store/cache JarUploadResponseBody
> -
>
> Key: FLINK-29847
> URL: https://issues.apache.org/jira/browse/FLINK-29847
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Daren Wong
>Priority: Major
>
> The Kubernetes operator currently always uploads the jar, even when the same 
> JAR has been uploaded to the JM previously. For example, this occurs during 
> rollback and suspend-run operations.
> To improve performance, we want to cache the JarUploadResponseBody so that in 
> the next runJar operation, the Kubernetes operator will check the cache and 
> reuse the jar if it's the same JAR. This "cache" can be as simple as storing 
> the uploaded JarFilePath in the CR Status field.
> Ref: 
> [https://github.com/apache/flink-kubernetes-operator/blob/main/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java#L188-L199]
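
A minimal sketch of the proposed cache (structure and names are our assumptions; `uploadJarToJobManager` stands in for the operator's existing upload call at the Ref link above):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class JarUploadCache {
    // Local jar URI -> jar file path returned by the last successful upload.
    private final Map<String, String> uploadedJarPaths = new ConcurrentHashMap<>();

    String getOrUpload(String localJarUri) {
        // Reuse the previously uploaded jar for the same artifact; upload otherwise.
        return uploadedJarPaths.computeIfAbsent(localJarUri, this::uploadJarToJobManager);
    }

    private String uploadJarToJobManager(String localJarUri) {
        // Placeholder for the operator's real upload; in the proposal the
        // returned path would be stored in the CR Status field instead.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}
{code}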





[jira] [Commented] (FLINK-31967) SQL with LAG function NullPointerException

2023-05-07 Thread padavan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720303#comment-17720303
 ] 

padavan commented on FLINK-31967:
-

[~martijnvisser] [~jark] [~lincoln.86xy] 

If you need anything else, let me know and I'll do it. :)

> SQL with LAG function NullPointerException
> --
>
> Key: FLINK-31967
> URL: https://issues.apache.org/jira/browse/FLINK-31967
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: padavan
>Priority: Major
> Attachments: image-2023-04-28-14-46-19-736.png, 
> image-2023-04-28-15-06-48-184.png, image-2023-04-28-15-14-58-788.png, 
> image-2023-04-28-15-17-49-144.png, image-2023-04-28-17-06-20-737.png, 
> simpleFlinkKafkaLag.zip
>
>
> I want to make a query with the LAG function, and got a Job Exception without 
> any explanation.
>  
> *Code:*
> {code:java}
> private static void t1_LeadLag(DataStream ds, StreamExecutionEnvironment env) {
>     StreamTableEnvironment te = StreamTableEnvironment.create(env);
>     Table t = te.fromDataStream(ds, Schema.newBuilder()
>             .columnByExpression("proctime", "proctime()").build());
>     te.createTemporaryView("users", t);
>     Table res = te.sqlQuery("SELECT userId, `count`,\n" +
>             " LAG(`count`) OVER (PARTITION BY userId ORDER BY proctime) AS prev_quantity\n" +
>             " FROM users");
>     te.toChangelogStream(res).print();
> }{code}
>  
> *Input:*
> {"userId":3,"count":0,"dt":"2023-04-28T07:44:21.551Z"}
>  
> *Exception:* I removed the part about the basic JobExecutionException and 
> kept the important part (I think):
> {code:java}
> Caused by: java.lang.NullPointerException
> at org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149)
> at org.apache.flink.table.data.RowData.lambda$createFieldGetter$245ca7d1$6(RowData.java:245)
> at org$apache$flink$table$runtime$functions$aggregate$LagAggFunction$LagAcc$2$Converter.toExternal(Unknown Source)
> at org.apache.flink.table.data.conversion.StructuredObjectConverter.toExternal(StructuredObjectConverter.java:101)
> at UnboundedOverAggregateHelper$15.setAccumulators(Unknown Source)
> at org.apache.flink.table.runtime.operators.over.ProcTimeUnboundedPrecedingFunction.processElement(ProcTimeUnboundedPrecedingFunction.java:92)
> at org.apache.flink.table.runtime.operators.over.ProcTimeUnboundedPrecedingFunction.processElement(ProcTimeUnboundedPrecedingFunction.java:42)
> at org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:83)
> at org.apache.flink.streaming.runtime.io.RecordProcessorUtils.lambda$getRecordProcessor$0(RecordProcessorUtils.java:60)
> at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:237)
> at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:146)
> at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:110)
> at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:550)
> at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:839)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:788)
> at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
> at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:931)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
> at java.base/java.lang.Thread.run(Thread.java:829){code}





[GitHub] [flink-connector-pulsar] reswqa merged pull request #47: [BP-3.0][hotfix] Python connector download link should refer to the url defined in externalized repository

2023-05-07 Thread via GitHub


reswqa merged PR #47:
URL: https://github.com/apache/flink-connector-pulsar/pull/47





[GitHub] [flink-connector-pulsar] reswqa merged pull request #46: [hotfix] Python connector download link should refer to the url defined in externalized repository

2023-05-07 Thread via GitHub


reswqa merged PR #46:
URL: https://github.com/apache/flink-connector-pulsar/pull/46





[GitHub] [flink-connector-pulsar] tisonkun merged pull request #45: [hotfix] Workaround new violations message

2023-05-07 Thread via GitHub


tisonkun merged PR #45:
URL: https://github.com/apache/flink-connector-pulsar/pull/45





[GitHub] [flink-connector-pulsar] reswqa commented on a diff in pull request #45: [hotfix] Workaround new violations message

2023-05-07 Thread via GitHub


reswqa commented on code in PR #45:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/45#discussion_r1186793455


##
flink-connector-pulsar/archunit-violations/f4d91193-72ba-4ce4-ad83-98f780dce581:
##
@@ -10,3 +10,15 @@ org.apache.flink.connector.pulsar.source.PulsarSourceITCase does not satisfy: on
 * reside in a package 'org.apache.flink.runtime.*' and is annotated with @ExtendWith with class InternalMiniClusterExtension\
 * reside outside of package 'org.apache.flink.runtime.*' and is annotated with @ExtendWith with class MiniClusterExtension\
  or contain any fields that are public, static, and of type MiniClusterWithClientResource and final and annotated with @ClassRule or contain any fields that is of type MiniClusterWithClientResource and public and final and not static and annotated with @Rule
+org.apache.flink.connector.pulsar.sink.PulsarSinkITCase does not satisfy: only one of the following predicates match:\

Review Comment:
   > I'd prefer to duplicate for now so that we can capture other failures in 
the daily CI, if any. We can always revert the duplicate when 
[FLINK-31804](https://issues.apache.org/jira/browse/FLINK-31804) is properly 
backported, instead of waiting here, letting the daily CI fail fast, and 
concealing other possible issues.
   
   Yes, makes sense. If we finally decide to backport `FLINK-31804`, let's 
revert the duplicate.






[GitHub] [flink-connector-pulsar] reswqa commented on a diff in pull request #45: [hotfix] Workaround new violations message

2023-05-07 Thread via GitHub


reswqa commented on code in PR #45:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/45#discussion_r1186793455


##
flink-connector-pulsar/archunit-violations/f4d91193-72ba-4ce4-ad83-98f780dce581:
##
@@ -10,3 +10,15 @@ org.apache.flink.connector.pulsar.source.PulsarSourceITCase does not satisfy: on
 * reside in a package 'org.apache.flink.runtime.*' and is annotated with @ExtendWith with class InternalMiniClusterExtension\
 * reside outside of package 'org.apache.flink.runtime.*' and is annotated with @ExtendWith with class MiniClusterExtension\
  or contain any fields that are public, static, and of type MiniClusterWithClientResource and final and annotated with @ClassRule or contain any fields that is of type MiniClusterWithClientResource and public and final and not static and annotated with @Rule
+org.apache.flink.connector.pulsar.sink.PulsarSinkITCase does not satisfy: only one of the following predicates match:\

Review Comment:
   > I'd prefer to duplicate for now so that we can capture other failures in 
the daily CI, if any. We can always revert the duplicate when 
[FLINK-31804](https://issues.apache.org/jira/browse/FLINK-31804) is properly 
backported, instead of waiting here, letting the daily CI fail fast, and 
concealing other possible issues.
   
   Yes, makes sense.






[jira] [Updated] (FLINK-32024) Short code related to externalized connector retrieve version from its own data yaml

2023-05-07 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-32024:
---
Summary: Short code related to externalized connector retrieve version from 
its own data yaml  (was: Short code related to external connector retrieve 
version from its own data yaml)

> Short code related to externalized connector retrieve version from its own 
> data yaml
> 
>
> Key: FLINK-32024
> URL: https://issues.apache.org/jira/browse/FLINK-32024
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.18.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>
> Currently, we have some shortcodes specifically designed for externalized 
> connectors, such as {{connectors_artifact}}, {{sql_connector_download_table}}, 
> etc.
> When using them, we need to pass in a version number, such as 
> {{sql_connector_download_table "pulsar" 3.0.0}}. It's easy for us to forget 
> to modify the corresponding version in the documentation when releasing a 
> new version.
> Of course, we can hard-code these into the release process. But perhaps we 
> can introduce a version field to {{docs/data/connector_name.yml}} and let 
> Flink directly read the corresponding version when rendering the shortcode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32024) Short code related to external connector retrieve version from its own data yaml

2023-05-07 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-32024:
---
Description: 
Currently, we have some shortcodes specifically designed for externalized 
connectors, such as {{connectors_artifact}}, {{sql_connector_download_table}}, 
etc.

When using them, we need to pass in a version number, such as 
{{sql_connector_download_table "pulsar" 3.0.0}}. It's easy for us to forget to 
modify the corresponding version in the documentation when releasing a new 
version.

Of course, we can hard-code these into the release process. But perhaps we can 
introduce a version field to {{docs/data/connector_name.yml}}, where Flink 
directly reads the corresponding version when rendering the shortcode.

  was:
Currently, we have some shortcodes specifically designed for external 
connectors, such as {{connectors_artifact}}, {{sql_connector_download_table}}, 
etc.

When using them, we need to pass in a version number, such as 
{{sql_connector_download_table "pulsar" 3.0.0}}. It's easy for us to forget to 
modify the corresponding version in the documentation when releasing a new 
version.

Of course, we can hard-code these into the release process. But perhaps we can 
introduce a version field to {{docs/data/connector_name.yml}}, where Flink 
directly reads the corresponding version when rendering the shortcode.


> Short code related to external connector retrieve version from its own data 
> yaml
> 
>
> Key: FLINK-32024
> URL: https://issues.apache.org/jira/browse/FLINK-32024
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.18.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>
> Currently, we have some shortcodes specifically designed for externalized 
> connectors, such as {{connectors_artifact}}, {{sql_connector_download_table}}, 
> etc.
> When using them, we need to pass in a version number, such as 
> {{sql_connector_download_table "pulsar" 3.0.0}}. It's easy for us to forget 
> to modify the corresponding version in the documentation when releasing a 
> new version.
> Of course, we can hard-code these into the release process. But perhaps we 
> can introduce a version field to {{docs/data/connector_name.yml}}, where 
> Flink directly reads the corresponding version when rendering the shortcode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32024) Short code related to external connector retrieve version from its own data yaml

2023-05-07 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-32024:
---
Description: 
Currently, we have some shortcodes specifically designed for externalized 
connectors, such as {{connectors_artifact}}, {{sql_connector_download_table}}, 
etc.

When using them, we need to pass in a version number, such as 
{{sql_connector_download_table "pulsar" 3.0.0}}. It's easy for us to forget to 
modify the corresponding version in the documentation when releasing a new 
version.

Of course, we can hard-code these into the release process. But perhaps we can 
introduce a version field to {{docs/data/connector_name.yml}} and let Flink 
directly read the corresponding version when rendering the shortcode.

  was:
Currently, we have some shortcodes specifically designed for externalized 
connectors, such as {{connectors_artifact}}, {{sql_connector_download_table}}, 
etc.

When using them, we need to pass in a version number, such as 
{{sql_connector_download_table "pulsar" 3.0.0}}. It's easy for us to forget to 
modify the corresponding version in the documentation when releasing a new 
version.

Of course, we can hard-code these into the release process. But perhaps we can 
introduce a version field to {{docs/data/connector_name.yml}}, where Flink 
directly reads the corresponding version when rendering the shortcode.


> Short code related to external connector retrieve version from its own data 
> yaml
> 
>
> Key: FLINK-32024
> URL: https://issues.apache.org/jira/browse/FLINK-32024
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.18.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>
> Currently, we have some shortcodes specifically designed for externalized 
> connectors, such as {{connectors_artifact}}, {{sql_connector_download_table}}, 
> etc.
> When using them, we need to pass in a version number, such as 
> {{sql_connector_download_table "pulsar" 3.0.0}}. It's easy for us to forget 
> to modify the corresponding version in the documentation when releasing a 
> new version.
> Of course, we can hard-code these into the release process. But perhaps we 
> can introduce a version field to {{docs/data/connector_name.yml}} and let 
> Flink directly read the corresponding version when rendering the shortcode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32024) Short code related to external connector retrieve version from its own data yaml

2023-05-07 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-32024:
--

 Summary: Short code related to external connector retrieve version 
from its own data yaml
 Key: FLINK-32024
 URL: https://issues.apache.org/jira/browse/FLINK-32024
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Common
Affects Versions: 1.18.0
Reporter: Weijie Guo
Assignee: Weijie Guo


Currently, we have some shortcodes specifically designed for external 
connectors, such as {{connectors_artifact}}, {{sql_connector_download_table}}, 
etc.

When using them, we need to pass in a version number, such as 
{{sql_connector_download_table "pulsar" 3.0.0}}. It's easy for us to forget to 
modify the corresponding version in the documentation when releasing a new 
version.

Of course, we can hard-code these into the release process. But perhaps we can 
introduce a version field to {{docs/data/connector_name.yml}}, where Flink 
directly reads the corresponding version when rendering the shortcode.
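
To make the proposal concrete, a hedged sketch follows; the file layout, field 
name, and template code are hypothetical illustrations, not a committed 
design. The per-connector data file could carry the released version:

{code:yaml}
# docs/data/pulsar.yml -- hypothetical layout
version: 3.0.0
{code}

A shortcode could then resolve the version itself instead of taking it as a 
second argument, roughly:

{code}
{{/* hypothetical shortcode body: look up the version from .Site.Data
     instead of requiring it as an argument */}}
{{ $connector := .Get 0 }}
{{ $data := index .Site.Data $connector }}
{{ $version := $data.version }}
{code}

With something like this in place, the call site would shrink to 
{{sql_connector_download_table "pulsar"}}, and a release would only need to 
bump the version in the yaml data file.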



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-pulsar] tisonkun commented on a diff in pull request #45: [hotfix] Workaround new violations message

2023-05-07 Thread via GitHub


tisonkun commented on code in PR #45:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/45#discussion_r1186792428


##
flink-connector-pulsar/archunit-violations/f4d91193-72ba-4ce4-ad83-98f780dce581:
##
@@ -10,3 +10,15 @@ org.apache.flink.connector.pulsar.source.PulsarSourceITCase 
does not satisfy: on
 * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class InternalMiniClusterExtension\
 * reside outside of package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class MiniClusterExtension\
  or contain any fields that are public, static, and of type 
MiniClusterWithClientResource and final and annotated with @ClassRule or 
contain any fields that is of type MiniClusterWithClientResource and public and 
final and not static and annotated with @Rule
+org.apache.flink.connector.pulsar.sink.PulsarSinkITCase does not satisfy: only 
one of the following predicates match:\

Review Comment:
   > I thought about it again, and I feel that maybe FLINK-31804 should be 
merged into 1.16 and 1.17. Because to some extent, this (without considering 
the MiniClusterTestEnvironment) is a bug in ITCaseRules. If it were backported 
to these versions, there would be no need to duplicate it. 🤔 WDYT?
   
   True, no need to duplicate then, but we would still need to update the 
message.
   
   I'd prefer to duplicate for now so that we can capture any other failures 
in the daily CI. We can always revert the duplicate once FLINK-31804 is 
properly backported, instead of waiting here and letting the daily CI fail 
fast and conceal other possible issues.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-aws] reswqa opened a new pull request, #72: [hotfix] Python connector download link should refer to the url defined in externalized repository

2023-05-07 Thread via GitHub


reswqa opened a new pull request, #72:
URL: https://github.com/apache/flink-connector-aws/pull/72

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-aws] reswqa opened a new pull request, #71: [hotfix] Python connector download link should refer to the url defined in externalized repository

2023-05-07 Thread via GitHub


reswqa opened a new pull request, #71:
URL: https://github.com/apache/flink-connector-aws/pull/71

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-pulsar] reswqa commented on a diff in pull request #45: [hotfix] Workaround new violations message

2023-05-07 Thread via GitHub


reswqa commented on code in PR #45:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/45#discussion_r1186787932


##
flink-connector-pulsar/archunit-violations/f4d91193-72ba-4ce4-ad83-98f780dce581:
##
@@ -10,3 +10,15 @@ org.apache.flink.connector.pulsar.source.PulsarSourceITCase 
does not satisfy: on
 * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class InternalMiniClusterExtension\
 * reside outside of package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class MiniClusterExtension\
  or contain any fields that are public, static, and of type 
MiniClusterWithClientResource and final and annotated with @ClassRule or 
contain any fields that is of type MiniClusterWithClientResource and public and 
final and not static and annotated with @Rule
+org.apache.flink.connector.pulsar.sink.PulsarSinkITCase does not satisfy: only 
one of the following predicates match:\

Review Comment:
   I thought about it again, and I feel that maybe `FLINK-31804` should be 
merged into 1.16 and 1.17. Because to some extent, this (without considering 
the `MiniClusterTestEnvironment`) is a bug in `ITCaseRules`. If it were 
backported to these versions, there would be no need to duplicate it. 🤔 WDYT?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-pulsar] reswqa commented on a diff in pull request #45: [hotfix] Workaround new violations message

2023-05-07 Thread via GitHub


reswqa commented on code in PR #45:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/45#discussion_r1186787932


##
flink-connector-pulsar/archunit-violations/f4d91193-72ba-4ce4-ad83-98f780dce581:
##
@@ -10,3 +10,15 @@ org.apache.flink.connector.pulsar.source.PulsarSourceITCase 
does not satisfy: on
 * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class InternalMiniClusterExtension\
 * reside outside of package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class MiniClusterExtension\
  or contain any fields that are public, static, and of type 
MiniClusterWithClientResource and final and annotated with @ClassRule or 
contain any fields that is of type MiniClusterWithClientResource and public and 
final and not static and annotated with @Rule
+org.apache.flink.connector.pulsar.sink.PulsarSinkITCase does not satisfy: only 
one of the following predicates match:\

Review Comment:
   I thought about it again, and I feel that maybe `FLINK-31804` should be 
merged into 1.16 and 1.17. Because to some extent, this (without considering 
the `MiniClusterTestEnvironment`) is a bug in `ITCaseRules`. If it were 
backported to these versions, there would be no need to duplicate it. 🤔



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-pulsar] reswqa commented on a diff in pull request #45: [hotfix] Workaround new violations message

2023-05-07 Thread via GitHub


reswqa commented on code in PR #45:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/45#discussion_r1186787331


##
flink-connector-pulsar/archunit-violations/f4d91193-72ba-4ce4-ad83-98f780dce581:
##
@@ -10,3 +10,15 @@ org.apache.flink.connector.pulsar.source.PulsarSourceITCase 
does not satisfy: on
 * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class InternalMiniClusterExtension\
 * reside outside of package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class MiniClusterExtension\
  or contain any fields that are public, static, and of type 
MiniClusterWithClientResource and final and annotated with @ClassRule or 
contain any fields that is of type MiniClusterWithClientResource and public and 
final and not static and annotated with @Rule
+org.apache.flink.connector.pulsar.sink.PulsarSinkITCase does not satisfy: only 
one of the following predicates match:\

Review Comment:
   Is the reason we need to duplicate, rather than directly modify, that our 
CI will support multiple versions of Flink? (See the hypothetical matrix 
sketch below.)
   
   Given that `FLINK-31804` has only been merged into master (1.18), all 
external connectors that need to suppress this violation should hit the same 
problem.
   
   Perhaps a temporary duplicate is acceptable, as it only needs to remain 
until the workflow for `flink_version < 1.18` is no longer running.
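   
   (For readers without the CI context: the externalized connector 
repositories run their tests against several Flink versions. A hypothetical 
GitHub Actions sketch of such a matrix is below; the file name, keys, and 
versions are illustrative and are not the actual flink-connector-pulsar 
workflow.)
   
   ```yaml
   # Hypothetical multi-version connector CI; illustrative only.
   name: ci
   on: [push, pull_request]
   jobs:
     test:
       runs-on: ubuntu-latest
       strategy:
         matrix:
           flink_version: ["1.16.1", "1.17.0", "1.18-SNAPSHOT"]
       steps:
         - uses: actions/checkout@v3
         # Each matrix entry builds and tests the connector against one Flink version.
         - run: mvn -B verify -Dflink.version=${{ matrix.flink_version }}
   ```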



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-pulsar] reswqa commented on a diff in pull request #45: [hotfix] Workaround new violations message

2023-05-07 Thread via GitHub


reswqa commented on code in PR #45:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/45#discussion_r1186787932


##
flink-connector-pulsar/archunit-violations/f4d91193-72ba-4ce4-ad83-98f780dce581:
##
@@ -10,3 +10,15 @@ org.apache.flink.connector.pulsar.source.PulsarSourceITCase 
does not satisfy: on
 * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class InternalMiniClusterExtension\
 * reside outside of package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class MiniClusterExtension\
  or contain any fields that are public, static, and of type 
MiniClusterWithClientResource and final and annotated with @ClassRule or 
contain any fields that is of type MiniClusterWithClientResource and public and 
final and not static and annotated with @Rule
+org.apache.flink.connector.pulsar.sink.PulsarSinkITCase does not satisfy: only 
one of the following predicates match:\

Review Comment:
   I thought about it again, and I feel that maybe `FLINK-31804` should be 
merged into 1.16 and 1.17. Because to some extent, this (without considering 
the `MiniClusterTestEnvironment`) is a bug in `ITCaseRules`. If it were 
backported to these versions, wouldn't this duplication be unnecessary? 🤔



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org