Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-23 Thread via GitHub


ruanhang1993 commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1369687645


##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+#### Support Java 17 (LTS)
+
+##### [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+#### Unified the max display column width for SqlClient and Table API in both Streaming and Batch execMode
+
+##### [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.

Review Comment:
   ```suggestion
   This approach also simplifies matters for users, as they only need to manage 
one ConfigOption for display control.
   ```



##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+#### Support Java 17 (LTS)
+
+##### [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+#### Unified the max display column width for SqlClient and Table API in both Streaming and Batch execMode
+
+##### [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 
+any changes made to sql-client.display.max-column-width will be automatically 
transferred to table.display.max-column-width. 
+Caution is advised when using the CLI, as it is not recommended to switch back 
and forth between these two options.
+
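+A minimal sketch of setting the new option from the Table API; the value `40` and 
+the `TableEnvironment` setup are illustrative:
+
+```java
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class DisplayWidthExample {
+    public static void main(String[] args) {
+        TableEnvironment tEnv =
+                TableEnvironment.create(
+                        EnvironmentSettings.newInstance().inStreamingMode().build());
+        // The unified option; during the migration phase, changes to the
+        // deprecated sql-client.display.max-column-width are forwarded here.
+        tEnv.getConfig().getConfiguration().setString("table.display.max-column-width", "40");
+    }
+}
+```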
+#### Introduce Flink Jdbc Driver For Sql Gateway
+##### [FLINK-31496](https://issues.apache.org/jira/browse/FLINK-31496)
+Apache Flink now provides a JDBC driver to access the SQL Gateway. You can use 
+the driver from any client that supports the standard JDBC API to connect to a 
+Flink cluster.
+
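+A minimal sketch of using the driver from plain JDBC, assuming a SQL Gateway 
+listening on localhost:8083, the Flink JDBC driver jar on the classpath, and the 
+driver registering itself via the standard JDBC ServiceLoader mechanism:
+
+```java
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.Statement;
+
+public class FlinkJdbcExample {
+    public static void main(String[] args) throws Exception {
+        try (Connection conn = DriverManager.getConnection("jdbc:flink://localhost:8083");
+                Statement stmt = conn.createStatement();
+                ResultSet rs = stmt.executeQuery("SELECT 1")) {
+            while (rs.next()) {
+                System.out.println(rs.getInt(1));
+            }
+        }
+    }
+}
+```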
+#### Extend watermark-related features for SQL
+##### [FLINK-31535](https://issues.apache.org/jira/browse/FLINK-31535)
+Flink now enables users to configure the watermark emit strategy, watermark 
+alignment, and watermark idle-timeout in Flink SQL jobs via dynamic table 
+options and the 'OPTIONS' hint.
+
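+A minimal sketch of the 'OPTIONS' hint usage; the `orders` table is illustrative, 
+the `scan.watermark.*` keys follow the Flink 1.18 documentation, and whether a 
+given connector honors them may vary:
+
+```java
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class WatermarkHintExample {
+    public static void main(String[] args) {
+        TableEnvironment tEnv =
+                TableEnvironment.create(
+                        EnvironmentSettings.newInstance().inStreamingMode().build());
+        // Bounded datagen source with a watermark declared in DDL.
+        tEnv.executeSql(
+                "CREATE TABLE orders (ts TIMESTAMP(3), amount INT, WATERMARK FOR ts AS ts) "
+                        + "WITH ('connector' = 'datagen', 'number-of-rows' = '10')");
+        // Override watermark behavior per-query via the OPTIONS hint.
+        tEnv.executeSql(
+                "SELECT * FROM orders /*+ OPTIONS("
+                        + "'scan.watermark.idle-timeout'='1min', "
+                        + "'scan.watermark.emit.strategy'='on-event') */").print();
+    }
+}
+```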
+#### Support configuring CatalogStore in Table API
+##### [FLINK-32431](https://issues.apache.org/jira/browse/FLINK-32431)
+Support lazy initialization of catalog and persistence of catalog 
configuration.
+
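+A minimal sketch, assuming the `FileCatalogStore` implementation and the 
+`withCatalogStore` builder hook introduced by FLIP-295; the path is illustrative:
+
+```java
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+import org.apache.flink.table.catalog.CatalogStore;
+import org.apache.flink.table.catalog.FileCatalogStore;
+
+public class CatalogStoreExample {
+    public static void main(String[] args) {
+        // Catalog definitions registered on this environment are persisted as
+        // files and re-loaded lazily on first use.
+        CatalogStore store = new FileCatalogStore("file:///tmp/flink-catalog-store");
+        TableEnvironment tEnv =
+                TableEnvironment.create(
+                        EnvironmentSettings.newInstance()
+                                .inStreamingMode()
+                                .withCatalogStore(store)
+                                .build());
+    }
+}
+```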
+#### Deprecate ManagedTable related APIs
+##### [FLINK-32656](https://issues.apache.org/jira/browse/FLINK-32656)
+ManagedTable related APIs are deprecated and will be removed in a future major 
release.
+
+### Connectors & Libraries
+
+#### SplitReader doesn't extend AutoCloseable but implements close
+##### [FLINK-31015](https://issues.apache.org/jira/browse/FLINK-31015)
+SplitReader interface now extends AutoCloseable instead of providing its own 
method signature.
+
+#### JSON format supports projection push down
+##### [FLINK-32610](https://issues.apache.org/jira/browse/FLINK-32610)
+The JSON format introduced JsonParser as a new default way to deserialize JSON 

Re: [PR] [FLINK-32107] [Tests] Kubernetes test failed because of unable to establish ssl connection to github on AZP [flink]

2023-10-23 Thread via GitHub


victor9309 commented on PR #23528:
URL: https://github.com/apache/flink/pull/23528#issuecomment-1776631419

   
   Thanks @XComp for the review, and thank you very much for your suggestions. I 
added a generic download-and-retry function (`retry_times_with_download`) and 
changed `crictl` and/or the `cri-dockerd-version`.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32107] [Tests] Kubernetes test failed because of unable to establish ssl connection to github on AZP [flink]

2023-10-23 Thread via GitHub


victor9309 commented on PR #23528:
URL: https://github.com/apache/flink/pull/23528#issuecomment-1776628420

   Thanks @XComp for the review. I added a generic download-and-retry function 
(`retry_times_with_download`) and changed `crictl` and/or the 
`cri-dockerd-version`. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33007) Integrate autoscaler config validation into the general validator flow

2023-10-23 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-33007.
--
Resolution: Fixed

merged to main e905a1b84421710d9de5a886ecab10834cc24364

> Integrate autoscaler config validation into the general validator flow
> --
>
> Key: FLINK-33007
> URL: https://issues.apache.org/jira/browse/FLINK-33007
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Praneeth Ramesh
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.7.0
>
>
> Currently autoscaler configs are not validated at all but cause runtime 
> failures of the autoscaler mechanism. 
> We should create a custom autoscaler config validator plugin and hook it up 
> into the core validation flow
>  
> As part of this we should start validating the percentage based config ranges



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33007] Integrate autoscaler config validation into the general validator flow [flink-kubernetes-operator]

2023-10-23 Thread via GitHub


gyfora merged PR #682:
URL: https://github.com/apache/flink-kubernetes-operator/pull/682


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32611] Redirect to Apache Paimon's link instead of legacy flink table store [flink-web]

2023-10-23 Thread via GitHub


Myasuka commented on PR #665:
URL: https://github.com/apache/flink-web/pull/665#issuecomment-1776598050

   @carp84 Thanks for the review. I have already removed the links to the legacy 
Paimon (flink-table-store) docs and only left a link to the incubating Paimon 
docs. From the discussion, I think folks have similar ideas about dropping the 
legacy links, and there seems to be no clear objection to linking to the latest 
Paimon docs.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33187] using hashcode for parallelism map comparison [flink-kubernetes-operator]

2023-10-23 Thread via GitHub


gyfora commented on code in PR #685:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/685#discussion_r1369655435


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/config/AutoScalerOptions.java:
##
@@ -201,8 +201,8 @@ private static ConfigOptions.OptionBuilder 
autoScalerConfig(String key) {
 .withDescription(
 "A (semicolon-separated) list of vertex ids in 
hexstring for which to disable scaling. Caution: For non-sink vertices this 
will still scale their downstream operators until 
https://issues.apache.org/jira/browse/FLINK-31215 is implemented.");
 
-public static final ConfigOption<Duration> SCALING_REPORT_INTERVAL =
-autoScalerConfig("scaling.report.interval")
+public static final ConfigOption<Duration> SCALING_EVENT_INTERVAL =
+autoScalerConfig("scaling.event.interval")

Review Comment:
   Docs seem to be inconsistent with this and should be regenerated after renaming 
to `event.interval`.
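   
   A small sketch of what setting the renamed key would look like, assuming the 
autoscaler's `job.autoscaler.` option prefix (the `30 min` value is illustrative):
   
   ```java
   import org.apache.flink.configuration.Configuration;
   
   public class EventIntervalExample {
       public static void main(String[] args) {
           Configuration conf = new Configuration();
           // Full key = "job.autoscaler." prefix + "scaling.event.interval".
           conf.setString("job.autoscaler.scaling.event.interval", "30 min");
       }
   }
   ```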



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java:
##
@@ -197,4 +198,19 @@ private static Event buildEvent(
 .endMetadata()
 .build();
 }
+
+private static boolean intervalCheck(Event existing, @Nullable Duration 
interval) {
+return interval != null
+&& Instant.now()
+.isBefore(
+Instant.parse(existing.getLastTimestamp())
+.plusMillis(interval.toMillis()));
+}
+
+private static boolean labelCheck(
+Event existing, Predicate<Map<String, String>> dedupePredicate) {
+return dedupePredicate == null
+|| (existing.getMetadata() != null
+&& 
dedupePredicate.test(existing.getMetadata().getLabels()));
+}

Review Comment:
   I may be misunderstanding something, but it seems like labels are basically 
ignored when `interval == null`. In that case `intervalCheck` is always false. 
Is this intentional? 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33187] using hashcode for parallelism map comparison [flink-kubernetes-operator]

2023-10-23 Thread via GitHub


1996fanrui commented on PR #685:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/685#issuecomment-1776540163

   Hi, the CI fails; please run `mvn clean install -DskipTests 
-Pgenerate-docs` again, thanks.
   
   
https://github.com/apache/flink-kubernetes-operator/actions/runs/6621898931/job/17987047236?pr=685#step:5:19405


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32611] Redirect to Apache Paimon's link instead of legacy flink table store [flink-web]

2023-10-23 Thread via GitHub


carp84 commented on PR #665:
URL: https://github.com/apache/flink-web/pull/665#issuecomment-1776537954

   @Myasuka I respect the consideration of recording the history, whereas the 
history is already recorded in the ways below (which explicitly show that 
Paimon originates from Flink Table Store), besides this PR:
   
   1. The incubation 
[proposal](https://cwiki.apache.org/confluence/display/INCUBATOR/PaimonProposal)
 and 
[discussion](https://lists.apache.org/thread/hr3d7tpw02w6ybrnnlf3hcbhfxotwpvn) 
of Paimon
   2. The 
[announcement](https://lists.apache.org/thread/pz5f9cvpyk4q9vltd7z088q5368v412t)
 that Flink Table Store joins Apache incubator as Paimon
   
   And we will send another email to our Flink user/dev mailing lists if one 
day Paimon graduates as a Top-Level-Project (you have my word).
   
   It's not bad to be nostalgic, but let's move forward to the future (smile)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33344][rpc] Replace Time with Duration in RpcInputSplitProvider [flink]

2023-10-23 Thread via GitHub


flinkbot commented on PR #23575:
URL: https://github.com/apache/flink/pull/23575#issuecomment-1776511783

   
   ## CI report:
   
   * 7587bd0af9db26b4024a5a60650e9d16236b511e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33344) Replace Time with Duration in RpcInputSplitProvider

2023-10-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33344:
---
Labels: pull-request-available  (was: )

> Replace Time with Duration in RpcInputSplitProvider
> ---
>
> Key: FLINK-33344
> URL: https://issues.apache.org/jira/browse/FLINK-33344
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / RPC
>Reporter: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33344][rpc] Replace Time with Duration in RpcInputSplitProvider [flink]

2023-10-23 Thread via GitHub


Jiabao-Sun opened a new pull request, #23575:
URL: https://github.com/apache/flink/pull/23575

   
   
   ## What is the purpose of the change
   [FLINK-33344][rpc] Replace Time with Duration in RpcInputSplitProvider
   
   ## Brief change log
   [FLINK-33344][rpc] Replace Time with Duration in RpcInputSplitProvider
   
   ## Verifying this change
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28303] Support LatestOffsetsInitializer to avoid latest-offset strategy lose data [flink-connector-kafka]

2023-10-23 Thread via GitHub


Tan-JiaLiang commented on code in PR #52:
URL: 
https://github.com/apache/flink-connector-kafka/pull/52#discussion_r1369609335


##
flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/split/KafkaPartitionSplit.java:
##
@@ -35,17 +35,18 @@
 public class KafkaPartitionSplit implements SourceSplit {
 public static final long NO_STOPPING_OFFSET = Long.MIN_VALUE;
 // Indicating the split should consume from the latest.
-public static final long LATEST_OFFSET = -1;
+// @deprecated Only be used for compatibility with the history state, see 
FLINK-28303
+@Deprecated public static final long LATEST_OFFSET = -1;
 // Indicating the split should consume from the earliest.
 public static final long EARLIEST_OFFSET = -2;
 // Indicating the split should consume from the last committed offset.
 public static final long COMMITTED_OFFSET = -3;
 
 // Valid special starting offsets
 public static final Set<Long> VALID_STARTING_OFFSET_MARKERS =
-new HashSet<>(Arrays.asList(EARLIEST_OFFSET, LATEST_OFFSET, 
COMMITTED_OFFSET));
+new HashSet<>(Arrays.asList(EARLIEST_OFFSET, COMMITTED_OFFSET));
 public static final Set<Long> VALID_STOPPING_OFFSET_MARKERS =
-new HashSet<>(Arrays.asList(LATEST_OFFSET, COMMITTED_OFFSET, 
NO_STOPPING_OFFSET));
+new HashSet<>(Arrays.asList(COMMITTED_OFFSET, NO_STOPPING_OFFSET));

Review Comment:
   I notice it's not merged yet, so I fixed it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33344) Replace Time with Duration in RpcInputSplitProvider

2023-10-23 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-33344:
--

 Summary: Replace Time with Duration in RpcInputSplitProvider
 Key: FLINK-33344
 URL: https://issues.apache.org/jira/browse/FLINK-33344
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / RPC
Reporter: Jiabao Sun
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-27432][runtime] Replace Time with Duration in TaskSlotTable [flink]

2023-10-23 Thread via GitHub


flinkbot commented on PR #23574:
URL: https://github.com/apache/flink/pull/23574#issuecomment-1776481510

   
   ## CI report:
   
   * 9e5d095af6d4056d7f79c4315dcfbea6ef994fe9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-27432][runtime] Replace Time with Duration in TaskSlotTable [flink]

2023-10-23 Thread via GitHub


Jiabao-Sun commented on PR #23574:
URL: https://github.com/apache/flink/pull/23574#issuecomment-1776475166

   Hi @XComp, @zentol, could you help review this when you have time?
   Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27432) Replace Time with Duration in TaskSlotTable

2023-10-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27432:
---
Labels: pull-request-available  (was: )

> Replace Time with Duration in TaskSlotTable
> ---
>
> Key: FLINK-27432
> URL: https://issues.apache.org/jira/browse/FLINK-27432
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-27432][runtime] Replace Time with Duration in TaskSlotTable [flink]

2023-10-23 Thread via GitHub


Jiabao-Sun opened a new pull request, #23574:
URL: https://github.com/apache/flink/pull/23574

   
   
   ## What is the purpose of the change
   [FLINK-27432][runtime] Replace Time with Duration in TaskSlotTable
   
   ## Brief change log
   [FLINK-27432][runtime] Replace Time with Duration in TaskSlotTable
   
   ## Verifying this change
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-24939) Support 'SHOW CREATE CATALOG' syntax

2023-10-23 Thread Feng Jin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778910#comment-17778910
 ] 

Feng Jin commented on FLINK-24939:
--

[~liyubin117]  +1 for supporting this. 

> Support 'SHOW CREATE CATALOG' syntax
> 
>
> Key: FLINK-24939
> URL: https://issues.apache.org/jira/browse/FLINK-24939
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: Yubin Li
>Priority: Major
>
> SHOW CREATE CATALOG <catalog_name>;
>  
> `Catalog` is playing a more important role in Flink; it would be great to get 
> existing catalog detail information



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33187] using hashcode for parallelism map comparison [flink-kubernetes-operator]

2023-10-23 Thread via GitHub


clarax commented on PR #685:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/685#issuecomment-1776429332

   > Another thing we have been discussing with @1996fanrui
   > 
   > The interval config should probably be renamed 
`scaling.report.interval->scaling.event.interval` this way we can use it 
generally in the future for autoscaler triggered events.
   > 
   > We should also make sure that the simple `handleEvent` method also 
respects the interval if specified. And we should probably use the interval 
also for ineffective scaling events. I know that some of these changes are not 
directly related to this PR but it may be better to clean it up so we leave it 
in a good state afterwards.
   
   Resolved all requested changes. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-24939) Support 'SHOW CREATE CATALOG' syntax

2023-10-23 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778907#comment-17778907
 ] 

Yubin Li commented on FLINK-24939:
--

Hi, [~jark] [~hackergin] As `CatalogStore` was introduced in 1.18, it is time to 
implement the widely expected feature `SHOW CREATE CATALOG`. WDYT? 

> Support 'SHOW CREATE CATALOG' syntax
> 
>
> Key: FLINK-24939
> URL: https://issues.apache.org/jira/browse/FLINK-24939
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: Yubin Li
>Priority: Major
>
> SHOW CREATE CATALOG <catalog_name>;
>  
> `Catalog` is playing a more important role in Flink; it would be great to get 
> existing catalog detail information



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-23 Thread via GitHub


1996fanrui commented on PR #22985:
URL: https://github.com/apache/flink/pull/22985#issuecomment-1776391285

   Would you mind updating the commit message to `[FLINK-21883][scheduler] 
Implement cooldown period for adaptive scheduler`?
   
   The current commit message is missing the `module name` and `adaptive scheduler`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32671) Document Externalized Declarative Resource Management

2023-10-23 Thread ConradJam (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778894#comment-17778894
 ] 

ConradJam commented on FLINK-32671:
---

[~dmvk] sure let me take it and finish it

> Document Externalized Declarative Resource Management
> -
>
> Key: FLINK-32671
> URL: https://issues.apache.org/jira/browse/FLINK-32671
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Konstantin Knauf
>Assignee: David Morávek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31275) Flink supports reporting and storage of source/sink tables relationship

2023-10-23 Thread Fang Yong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778890#comment-17778890
 ] 

Fang Yong commented on FLINK-31275:
---

[~ZhenqiuHuang] Sorry for the late reply. Please feel free to comment on this 
issue if you have any ideas or would like to take it, thanks.

> Flink supports reporting and storage of source/sink tables relationship
> ---
>
> Key: FLINK-31275
> URL: https://issues.apache.org/jira/browse/FLINK-31275
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>
> FLIP-314 has been accepted 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-314%3A+Support+Customized+Job+Lineage+Listener



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] add announcement blog post for Flink 1.18 [flink-web]

2023-10-23 Thread via GitHub


lsyldliu commented on code in PR #680:
URL: https://github.com/apache/flink-web/pull/680#discussion_r1369460852


##
docs/content/posts/2023-10-10-release-1.18.0.md:
##
@@ -0,0 +1,542 @@
+---
+authors:
+- JingGe:
+  name: "Jing Ge"
+  twitter: jingengineer
+- KonstantinKnauf:
+  name: "Konstantin Knauf"
+  twitter: snntrable
+- SergeyNuyanzin:
+  name: "Sergey Nuyanzin"
+  twitter: uckamello
+- QingshengRen:
+  name: "Qingsheng Ren"
+  twitter: renqstuite
+date: "2023-10-10T08:00:00Z"
+subtitle: ""
+title: Announcing the Release of Apache Flink 1.18
+aliases:
+- /news/2023/10/10/release-1.18.0.html
+---
+
+The Apache Flink PMC is pleased to announce the release of Apache Flink 
1.18.0. As usual, we are looking at a packed 
+release with a wide variety of improvements and new features. Overall, 176 
people contributed to this release completing 
+18 FLIPs and 700+ issues. Thank you!
+
+Let's dive into the highlights.
+
+# Towards a Streaming Lakehouse
+
+## Flink SQL Improvements
+
+### Introduce Flink JDBC Driver For Sql Gateway 
+
+Flink 1.18 comes with a JDBC Driver for the Flink SQL Gateway. So, you can now 
use any SQL Client that supports JDBC to 
+interact with your tables via Flink SQL. Here is an example using 
[SQLLine](https://julianhyde.github.io/sqlline/manual.html). 
+
+```shell
+sqlline> !connect jdbc:flink://localhost:8083
+```
+
+```shell
+sqlline version 1.12.0
+sqlline> !connect jdbc:flink://localhost:8083
+Enter username for jdbc:flink://localhost:8083:
+Enter password for jdbc:flink://localhost:8083:
+0: jdbc:flink://localhost:8083> CREATE TABLE T(
+. . . . . . . . . . . . . . .)>  a INT,
+. . . . . . . . . . . . . . .)>  b VARCHAR(10)
+. . . . . . . . . . . . . . .)>  ) WITH (
+. . . . . . . . . . . . . . .)>  'connector' = 'filesystem',
+. . . . . . . . . . . . . . .)>  'path' = 'file:///tmp/T.csv',
+. . . . . . . . . . . . . . .)>  'format' = 'csv'
+. . . . . . . . . . . . . . .)>  );
+No rows affected (0.122 seconds)
+0: jdbc:flink://localhost:8083> INSERT INTO T VALUES (1, 'Hi'), (2, 'Hello');
++--+
+|  job id  |
++--+
+| fbade1ab4450fc57ebd5269fdf60dcfd |
++--+
+1 row selected (1.282 seconds)
+0: jdbc:flink://localhost:8083> SELECT * FROM T;
++---+---+
+| a |   b   |
++---+---+
+| 1 | Hi|
+| 2 | Hello |
++---+---+
+2 rows selected (1.955 seconds)
+0: jdbc:flink://localhost:8083>
+```
+
+**More Information**
+* 
[Documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/)
 
+* [FLIP-293: Introduce Flink Jdbc Driver For Sql 
Gateway](https://cwiki.apache.org/confluence/display/FLINK/FLIP-293%3A+Introduce+Flink+Jdbc+Driver+For+Sql+Gateway)
+
+
+### Stored Procedures
+
+Stored Procedures provide a convenient way to encapsulate complex logic to 
perform data manipulation or administrative 
+tasks in Apache Flink itself. Therefore, Flink introduces support for 
+calling stored procedures. 
+Flink now allows catalog developers to develop their own built-in stored 
procedures and then enables users to call these
+predefined stored procedures.
+
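+A minimal sketch of what a call looks like; the catalog, database, and procedure 
+names are hypothetical, since concrete procedures are provided by the catalog in use:
+
+```java
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class CallProcedureExample {
+    public static void main(String[] args) {
+        TableEnvironment tEnv =
+                TableEnvironment.create(
+                        EnvironmentSettings.newInstance().inBatchMode().build());
+        // `my_catalog`, `sys`, and `compact_table` are illustrative; the CALL
+        // succeeds only if the catalog actually exposes such a procedure.
+        tEnv.executeSql("CALL my_catalog.sys.compact_table('db1.orders')").print();
+    }
+}
+```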
+**More Information**
+* 
[Documentation](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/procedures/)
+* [FLIP-311: Support Call Stored 
Procedure](https://cwiki.apache.org/confluence/display/FLINK/FLIP-311%3A+Support+Call+Stored+Procedure)
+
+### Extended DDL Support
+
+From this release onwards, Flink supports
+
+- `REPLACE TABLE AS SELECT`
+- `CREATE OR REPLACE TABLE AS SELECT`
+
+and both these commands, as well as the previously supported `CREATE TABLE AS`, 
+now support atomicity, provided the underlying 
+connector supports it.
+
+Moreover, Apache Flink now supports TRUNCATE TABLE in batch execution mode. As 
+before, the underlying connector needs 
+to implement and provide this capability.
+
+And, finally, we have also added support for adding, dropping and listing 
+partitions (see the sketch after the list below) via
+
+- `ALTER TABLE ADD PARTITION`
+- `ALTER TABLE DROP PARTITION`
+- `SHOW PARTITIONS`
+
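+A minimal sketch of the new statements; `datagen`/`blackhole` are stand-in 
+connectors, and whether each statement succeeds (and does so atomically) depends 
+on the target connector's capabilities:
+
+```java
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class ExtendedDdlExample {
+    public static void main(String[] args) {
+        TableEnvironment tEnv =
+                TableEnvironment.create(
+                        EnvironmentSettings.newInstance().inBatchMode().build());
+        tEnv.executeSql(
+                "CREATE TABLE src (a INT, b STRING) "
+                        + "WITH ('connector' = 'datagen', 'number-of-rows' = '10')");
+        // Replace the target table and repopulate it in one statement.
+        tEnv.executeSql(
+                "CREATE OR REPLACE TABLE dst WITH ('connector' = 'blackhole') "
+                        + "AS SELECT * FROM src");
+        // Batch-mode TRUNCATE; requires the connector to implement the capability.
+        tEnv.executeSql("TRUNCATE TABLE dst");
+    }
+}
+```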
+**More Information**
+- [Documentation on 
TRUNCATE](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/truncate/)
+- [Documentation on CREATE OR 
REPLACE](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/create/#create-or-replace-table)
+- [Documentation on ALTER 
TABLE](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/alter/#alter-table)
+- [FLIP-302: Support TRUNCATE TABLE statement in batch 
mode](https://cwiki.apache.org/confluence/display/FLINK/FLIP-302%3A+Support+TRUNCATE+TABLE+statement+in+batch+mode)
+- [FLIP-303: Support REPLACE TABLE AS SELECT 
statement](https://cwiki.apache.org/confluence/display/FLINK/FLIP-303%3A+Support+REPLACE+TABLE+AS+SELECT+statement)
+- [FLIP-305: Support atomic for CREATE TABLE AS SELECT(CTAS) 
statement](https://cwiki.apache.org/confluence/display/FLINK/

[PR] Bump com.rabbitmq:amqp-client from 5.13.1 to 5.18.0 in /flink-connector-rabbitmq [flink-connector-rabbitmq]

2023-10-23 Thread via GitHub


dependabot[bot] opened a new pull request, #18:
URL: https://github.com/apache/flink-connector-rabbitmq/pull/18

   Bumps 
[com.rabbitmq:amqp-client](https://github.com/rabbitmq/rabbitmq-java-client) 
from 5.13.1 to 5.18.0.
   
   Release notes
   Sourced from [com.rabbitmq:amqp-client's releases](https://github.com/rabbitmq/rabbitmq-java-client/releases).

   v5.18.0
   Changes between 5.17.0 and 5.18.0
   This is a minor release with usability improvements and dependency 
upgrades. It is compatible with 5.17.x. All users of the 5.x.x series are 
encouraged to upgrade.
   Inbound message size is now enforced, with the default limit being 64 MiB.
   Thanks to [@JHahnHRO](https://github.com/JHahnHRO) and Sérgio Faria 
([@sergio91pt](https://github.com/sergio91pt)) for their contribution.
   - Add ability to specify maximum message size (GitHub issue: [#1062](https://redirect.github.com/rabbitmq/rabbitmq-java-client/issues/1062))
   - Do not confirmSelect more than once per channel (GitHub PR: [#1057](https://redirect.github.com/rabbitmq/rabbitmq-java-client/issues/1057))
   - Make RpcClient (Auto)Closeable (GitHub issue: [#1032](https://redirect.github.com/rabbitmq/rabbitmq-java-client/issues/1032))
   - Bump dependencies (GitHub issue: [#999](https://redirect.github.com/rabbitmq/rabbitmq-java-client/issues/999))

   Dependency (Maven):

       com.rabbitmq
       amqp-client
       5.18.0

   Dependency (Gradle):

       compile 'com.rabbitmq:amqp-client:5.18.0'

   v5.17.1
   Changes between 5.17.0 and 5.17.1
   This is a minor release with a usability improvement. It is compatible 
with 5.17.0.
   Inbound message size is now enforced, with the default limit being 64 
MiB.

   ... (truncated)

   Commits
   - [dcc284e](https://github.com/rabbitmq/rabbitmq-java-client/commit/dcc284ee1b199057a1094055b7eac597539c9942) [maven-release-plugin] prepare release v5.18.0
   - [75d1d1e](https://github.com/rabbitmq/rabbitmq-java-client/commit/75d1d1eb2d365f5a8f0fbc1ff5408f3dd706f4ec) Set release version to 5.18.0
   - [dc7952e](https://github.com/rabbitmq/rabbitmq-java-client/commit/dc7952eaa41feba4c616ffbeb47d99974d16f2dc) Merge pull request [#1064](https://redirect.github.com/rabbitmq/rabbitmq-java-client/issues/1064) from rabbitmq/dependabot/maven/5.x.x-stable/org.mock...
   - [e2fa38f](https://github.com/rabbitmq/rabbitmq-java-client/commit/e2fa38f0d8c985a539bdee3a3d40d401a7141a55) Bump mockito-core from 5.3.1 to 5.4.0
   - [04f1801](https://github.com/rabbitmq/rabbitmq-java-client/commit/04f1801ae6eaac10af7bf802c8fb7065284624e6) Tweak error message
   - [714aae6](https://github.com/rabbitmq/rabbitmq-java-client/commit/714aae602dcae6cb4b53cadf009323ebac313cc8) Add max inbound message size to ConnectionFactory
   - [83cf551](https://github.com/rabbitmq/rabbitmq-java-client/commit/83cf551fb0142f7a5d042bd54e0cf3c1e47ed419) Fix flaky test
   - [0dc9ea2](https://github.com/rabbitmq/rabbitmq-java-client/commit/0dc9ea2e464158685cd206e35cb52105c156a64c) Do not confirmSelect more than once per channel
   - [129dc6a](https://github.com/rabbitmq/rabbitmq-java-client/commit/129dc6abb0cbc36b36cdb6f3d5915f470203277f) Merge pull request [#1060](https://redirect.github.com/rabbitmq/rabbitmq-java-client/issues/1060) from rabbitmq/dependabot/maven/5.x.x-stable/io.micro...
   - [671efdc](https://github.com/rabbitmq/rabbitmq-java-client/commit/671efdcb1adbed4242ce0c954874eeef0d3de0ad) Bump micrometer-core from 1.11.0 to 1.11.1
   - Additional commits viewable in the [compare view](https://github.com/rabbitmq/rabbitmq-java-client/compare/v5.13.1...v5.18.0)
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=com.rabbitmq:amqp-client&package-manager=maven&previous-version=5.13.1&new-version=5.18.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it m

[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2023-10-23 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-33121:
---
Description: 
{{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
*Global* Failures (with null Task name) may *only* be RootExceptions (jobs are 
considered in FAILED state when this happens and no further exceptions are 
captured) and *Local/Task* may be part of concurrent exceptions List *--* if 
this precondition is violated, an assertion is thrown as part of 
{{{}asserLocalExceptionInfo{}}}.

However, in the existing logic in the AdaptiveScheduler, we always add both the 
Global and the Local failures at the *end* of the [failure collection 
list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
 and when converting them to history entries, we *remove from the Head* the 
[oldest failure 
exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
  As a result, when there is a concurrent Task failure (first) with a Global 
failure (second terminating the job), the global failure ends up in the 
concurrent exception list, violating the precondition.

Note: DefaultScheduler does not suffer from this issue as it treats failures 
directly as HistoryEntries (no conversion step)



Solution is to only add Global failures in the *head* of the List as part of 
handleGlobalFailure method to ensure they are ending up as RootExceptionEntries.
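
A hedged, simplified sketch of the described ordering fix (the real change lives in 
StateWithExecutionGraph; String stands in for the failure entry type):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

public class FailureOrdering {
    private final Deque<String> failures = new ArrayDeque<>();

    // Global failures go to the head, so converting the collection to history
    // entries always sees them as the RootExceptionEntry.
    void handleGlobalFailure(String cause) {
        failures.addFirst(cause);
    }

    // Task-local failures keep appending to the tail and therefore end up in
    // the concurrent-exceptions list.
    void handleTaskFailure(String cause) {
        failures.addLast(cause);
    }
}
{code}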

  was:
{{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
*Global* Failures (with null Task name) may *only* be RootExceptions (jobs are 
considered in FAILED state when this happens and no further exceptions are 
captured) and *Local/Task* may be part of concurrent exceptions List *--* if 
this precondition is violated, an assertion is thrown as part of 
{{{}asserLocalExceptionInfo{}}}.

However, in the existing logic in the AdaptiveScheduler, we always both the 
Global and the Local failures at the *end* of the [failure collection 
list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
 and when converting them to history entries, we *remove from the Head* the 
[oldest failure 
exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
  As a result, when there is a concurrent Task failure (first) with a Global 
failure (second terminating the job), the global failure ends up in the 
concurrent exception list, violating the precondition.

Solution is to only add Global failures in the *head* of the List as part of 
handleGlobalFailure method to ensure they are ending up as RootExceptionEntries.


> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> {{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
> *Global* Failures (with null Task name) may *only* be RootExceptions (jobs 
> are considered in FAILED state when this happens and no further exceptions 
> are captured) and *Local/Task* may be part of concurrent exceptions List *--* 
> if this precondition is violated, an assertion is thrown as part of 
> {{{}asserLocalExceptionInfo{}}}.
> However, in the existing logic in the AdaptiveScheduler, we always add both 
> the Global and the Local failures at the *end* of the [failure collection 
> list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
>  and when converting them to history entries, we *remove from the Head* the 
> [oldest failure 
> exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
>   As a result, when there is a concurrent Task failure (first) with a Global 
> failure (second terminating the job), the global failure ends up in the 
> concurrent exception list, violating the precondition.
> No

[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2023-10-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33121:
---
Labels: pull-request-available  (was: )

> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>
> {{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
> *Global* Failures (with null Task name) may *only* be RootExceptions (jobs 
> are considered in FAILED state when this happens and no further exceptions 
> are captured) and *Local/Task* may be part of concurrent exceptions List *--* 
> if this precondition is violated, an assertion is thrown as part of 
> {{{}asserLocalExceptionInfo{}}}.
> However, in the existing logic in the AdaptiveScheduler, we always both the 
> Global and the Local failures at the *end* of the [failure collection 
> list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
>  and when converting them to history entries, we *remove from the Head* the 
> [oldest failure 
> exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
>   As a result, when there is a concurrent Task failure (first) with a Global 
> failure (second terminating the job), the global failure ends up in the 
> concurrent exception list, violating the precondition.
> Solution is to only add Global failures in the *head* of the List as part of 
> handleGlobalFailure method to ensure they are ending up as 
> RootExceptionEntries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33121] Failed precondition in JobExceptionsHandler due to concurrent global failures [flink]

2023-10-23 Thread via GitHub


pgaref commented on PR #23440:
URL: https://github.com/apache/flink/pull/23440#issuecomment-1776285236

   > I'm against making this change without a clear explanation as to when this 
case occurs. AFAICT we don't intend for it to occur, so let's find that bug 
rather than allowing it and potentially causing other downstream side-effects.
   
   Found the root cause of this, @zentol / @dmvk  can you please take another 
look? 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33343) Close stale Flink PRs

2023-10-23 Thread Venkata krishnan Sowrirajan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata krishnan Sowrirajan updated FLINK-33343:

Description: 
What is considered a stale PR? If any of the below condition is met, then the 
PR is considered as a stale PR
{code:java}
1. PRs that are not followed-up within 'X' number of days after a review
2. PRs that don't have a passing build and/or don't follow contribution 
guidelines after 'X' number of days.
3. PRs that have merge conflicts after 'X' number of days.
{code}
We are yet to decide on what is 'X' yet? This can be done as part of the PR and 
retroactively updating the same in the JIRA.

To see the complete set of conversations on this topic, see 
[here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]

  was:
What is considered a stale PR? If any of the below condition is met, then the 
PR is considered as a stale PR

 
{code:java}
1. PRs that are not followed-up within 'X' number of days after a review
2. PRs that don't have a passing build and/or don't follow contribution 
guidelines after 'X' number of days.
3. PRs that have merge conflicts after 'X' number of days.
{code}
 

We are yet to decide on what is 'X' yet? This can be done as part of the PR and 
retroactively updating the same in the JIRA.

To see the complete set of conversations on this topic, see 
[here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]


> Close stale Flink PRs
> -
>
> Key: FLINK-33343
> URL: https://issues.apache.org/jira/browse/FLINK-33343
> Project: Flink
>  Issue Type: Bug
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>
> What is considered a stale PR? If any of the below condition is met, then the 
> PR is considered as a stale PR
> {code:java}
> 1. PRs that are not followed-up within 'X' number of days after a review
> 2. PRs that don't have a passing build and/or don't follow contribution 
> guidelines after 'X' number of days.
> 3. PRs that have merge conflicts after 'X' number of days.
> {code}
> We are yet to decide on what is 'X' yet? This can be done as part of the PR 
> and retroactively updating the same in the JIRA.
> To see the complete set of conversations on this topic, see 
> [here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2023-10-23 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-33121:
---
Description: 
{{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
*Global* Failures (with null Task name) may *only* be RootExceptions (jobs are 
considered in FAILED state when this happens and no further exceptions are 
captured) and *Local/Task* may be part of concurrent exceptions List *--* if 
this precondition is violated, an assertion is thrown as part of 
{{{}asserLocalExceptionInfo{}}}.

However, in the existing logic in the AdaptiveScheduler, we always both the 
Global and the Local failures at the *end* of the [failure collection 
list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
 and when converting them to history entries, we *remove from the Head* the 
[oldest failure 
exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
  As a result, when there is a concurrent Task failure (first) with a Global 
failure (second terminating the job), the global failure ends up in the 
concurrent exception list, violating the precondition.

Solution is to only add Global failures in the *head* of the List as part of 
handleGlobalFailure method to ensure they are ending up as RootExceptionEntries.

  was:
{{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
*Global* Failures (with null Task name) may *only* be RootExceptions and 
*Local/Task* may be part of concurrent exceptions List *--* if this 
precondition is violated, an assertion is thrown as part of 
{{{}asserLocalExceptionInfo{}}}.


However, in the existing logic in the AdaptiveScheduler, we always both the 
Global and the Local failures at the *end* of the [failure collection 
list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
 and when converting them to history entries, we *remove from the Head* the 
[oldest failure 
exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
  As a result, when there is a concurrent Task failure (first) with a Global 
failure (second terminating the job), the global failure ends up in the 
concurrent exception list, violating the precondition.


Solution is to only add Global failures in the *head* of the List as part of 
handleGlobalFailure method to ensure they are ending up as RootExceptionEntries.


> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Priority: Major
>
> {{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
> *Global* Failures (with null Task name) may *only* be RootExceptions (jobs 
> are considered in FAILED state when this happens and no further exceptions 
> are captured) and *Local/Task* may be part of concurrent exceptions List *--* 
> if this precondition is violated, an assertion is thrown as part of 
> {{{}asserLocalExceptionInfo{}}}.
> However, in the existing logic in the AdaptiveScheduler, we always both the 
> Global and the Local failures at the *end* of the [failure collection 
> list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
>  and when converting them to history entries, we *remove from the Head* the 
> [oldest failure 
> exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
>   As a result, when there is a concurrent Task failure (first) with a Global 
> failure (second terminating the job), the global failure ends up in the 
> concurrent exception list, violating the precondition.
> Solution is to only add Global failures in the *head* of the List as part of 
> handleGlobalFailure method to ensure they are ending up as 
> RootExceptionEntries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33343) Close stale Flink PRs

2023-10-23 Thread Venkata krishnan Sowrirajan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata krishnan Sowrirajan updated FLINK-33343:

Description: 
What is considered a stale PR? If any of the below condition is met, then the 
PR is considered as a stale PR

 
{code:java}
1. PRs that are not followed-up within 'X' number of days after a review
2. PRs that don't have a passing build and/or don't follow contribution 
guidelines after 'X' number of days.
3. PRs that have merge conflicts after 'X' number of days.
{code}
 

We are yet to decide on what is 'X' yet? This can be done as part of the PR and 
retroactively updating the same in the JIRA.

To see the complete set of conversations on this topic, see 
[here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]

  was:
What is considered a stale PR? If any of the below condition is met, then the 
PR is considered as a stale PR

 
{code:java}
1. PRs that are not followed-up within 'X' number of days after a review
2. PRs that don't have a passing build and/or don't follow contribution 
guidelines after 'X' number of days.
3. PRs that have merge conflicts after 'X' number of days.
 
{code}
 

We are yet to decide on what is 'X' yet? This can be done as part of the PR and 
retroactively updating the same in the JIRA.

To see the complete set of conversations on this topic, see 
[here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]


> Close stale Flink PRs
> -
>
> Key: FLINK-33343
> URL: https://issues.apache.org/jira/browse/FLINK-33343
> Project: Flink
>  Issue Type: Bug
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>
> What is considered a stale PR? If any of the below condition is met, then the 
> PR is considered as a stale PR
>  
> {code:java}
> 1. PRs that are not followed-up within 'X' number of days after a review
> 2. PRs that don't have a passing build and/or don't follow contribution 
> guidelines after 'X' number of days.
> 3. PRs that have merge conflicts after 'X' number of days.
> {code}
>  
> We are yet to decide on what is 'X' yet? This can be done as part of the PR 
> and retroactively updating the same in the JIRA.
> To see the complete set of conversations on this topic, see 
> [here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33343) Close stale Flink PRs

2023-10-23 Thread Venkata krishnan Sowrirajan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata krishnan Sowrirajan updated FLINK-33343:

Description: 
What is considered a stale PR? If any of the below condition is met, then the 
PR is considered as a stale PR

 
{code:java}
1. PRs that are not followed-up within 'X' number of days after a review
2. PRs that don't have a passing build and/or don't follow contribution 
guidelines after 'X' number of days.
3. PRs that have merge conflicts after 'X' number of days.
 
{code}
 

We are yet to decide on what is 'X' yet? This can be done as part of the PR and 
retroactively updating the same in the JIRA.

To see the complete set of conversations on this topic, see 
[here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]

  was:
What is considered a stale PR? If any of the below condition is met, then the 
PR is considered as a stale PR

1. PRs that are not followed-up within 'X' number of days after a review
2. PRs that don't have a passing build and/or don't follow contribution 
guidelines after 'X' number of days.
3. PRs that have merge conflicts after 'X' number of days.

 

We have yet to decide what 'X' should be. This can be done as part of the PR, 
with the value retroactively updated in the JIRA.

To see the complete set of conversations on this topic, see 
[here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]


> Close stale Flink PRs
> -
>
> Key: FLINK-33343
> URL: https://issues.apache.org/jira/browse/FLINK-33343
> Project: Flink
>  Issue Type: Bug
>Reporter: Venkata krishnan Sowrirajan
>Priority: Major
>
> What is considered a stale PR? If any of the conditions below is met, the PR 
> is considered stale.
>  
> {code:java}
> 1. PRs that are not followed-up within 'X' number of days after a review
> 2. PRs that don't have a passing build and/or don't follow contribution 
> guidelines after 'X' number of days.
> 3. PRs that have merge conflicts after 'X' number of days.
>  
> {code}
>  
> We have yet to decide what 'X' should be. This can be done as part of the PR, 
> with the value retroactively updated in the JIRA.
> To see the complete set of conversations on this topic, see 
> [here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33343) Close stale Flink PRs

2023-10-23 Thread Venkata krishnan Sowrirajan (Jira)
Venkata krishnan Sowrirajan created FLINK-33343:
---

 Summary: Close stale Flink PRs
 Key: FLINK-33343
 URL: https://issues.apache.org/jira/browse/FLINK-33343
 Project: Flink
  Issue Type: Bug
Reporter: Venkata krishnan Sowrirajan


What is considered a stale PR? If any of the conditions below is met, the PR is 
considered stale.

1. PRs that are not followed-up within 'X' number of days after a review
2. PRs that don't have a passing build and/or don't follow contribution 
guidelines after 'X' number of days.
3. PRs that have merge conflicts after 'X' number of days.

 

We have yet to decide what 'X' should be. This can be done as part of the PR, 
with the value retroactively updated in the JIRA.

To see the complete set of conversations on this topic, see 
[here|https://lists.apache.org/thread/pml95msx21sdc539404xs9tk209sdd55]
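As a minimal sketch, the three conditions above could be combined into a single 
staleness check. Since 'X' is still undecided, and no tooling API is specified in 
the ticket, the PullRequest shape below is purely an assumption for illustration.

{code:java}
import java.time.Duration;
import java.time.Instant;

final class StalePrCheck {

    // Hypothetical snapshot of a PR's state; all field names are assumptions.
    static final class PullRequest {
        Instant lastReviewAt;      // null if the PR was never reviewed
        Instant lastActivityAt;    // last push or author follow-up
        boolean buildPassing;
        boolean followsGuidelines;
        boolean hasMergeConflicts;
    }

    // A PR is stale if any of the three ticket conditions holds for longer than x.
    static boolean isStale(PullRequest pr, Duration x, Instant now) {
        Instant cutoff = now.minus(x);
        boolean noFollowUpAfterReview =
                pr.lastReviewAt != null
                        && pr.lastActivityAt.isBefore(pr.lastReviewAt)
                        && pr.lastReviewAt.isBefore(cutoff);
        boolean failingOrNonCompliant =
                (!pr.buildPassing || !pr.followsGuidelines)
                        && pr.lastActivityAt.isBefore(cutoff);
        boolean conflicted =
                pr.hasMergeConflicts && pr.lastActivityAt.isBefore(cutoff);
        return noFollowUpAfterReview || failingOrNonCompliant || conflicted;
    }
}
{code}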



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2023-10-23 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-33121:
---
Description: 
{{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
*Global* Failures (with null Task name) may *only* be RootExceptions and 
*Local/Task* may be part of concurrent exceptions List *--* if this 
precondition is violated, an assertion is thrown as part of 
{{assertLocalExceptionInfo}}.


However, in the existing logic in the AdaptiveScheduler, we always add both the 
Global and the Local failures at the *end* of the [failure collection 
list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
 and when converting them to history entries, we *remove from the Head* the 
[oldest failure 
exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
  As a result, when there is a concurrent Task failure (first) with a Global 
failure (second terminating the job), the global failure ends up in the 
concurrent exception list, violating the precondition.


The solution is to only add Global failures at the *head* of the List as part of 
the handleGlobalFailure method, to ensure they end up as RootExceptionEntries.

  was:
{{JobExceptionsHandler#createRootExceptionInfo}} *only* allows concurrent 
exceptions that are local failures *--* otherwise it throws an assertion as part 
of {{assertLocalExceptionInfo}}.

However, there are rare cases where multiple concurrent global failures are 
triggered and added to the failureCollection, before transitioning the job 
state to Failed e.g., through {{StateWithExecutionGraph#handleGlobalFailure}} 
of the AdaptiveScheduler.
In this case the last one added will be the root and the next one will trigger 
the assertion.


> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Priority: Major
>
> {{JobExceptionsHandler#createRootExceptionInfo}} makes the assumption that 
> *Global* Failures (with null Task name) may *only* be RootExceptions and 
> *Local/Task* may be part of concurrent exceptions List *--* if this 
> precondition is violated, an assertion is thrown as part of 
> {{assertLocalExceptionInfo}}.
> However, in the existing logic in the AdaptiveScheduler, we always add both the 
> Global and the Local failures at the *end* of the [failure collection 
> list|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L338]
>  and when converting them to history entries, we *remove from the Head* the 
> [oldest failure 
> exception.|https://github.com/confluentinc/flink/blob/b8482260622c14db00f9dc88bbf9e82233613235/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java#L386]
>   As a result, when there is a concurrent Task failure (first) with a Global 
> failure (second terminating the job), the global failure ends up in the 
> concurrent exception list, violating the precondition.
> The solution is to only add Global failures at the *head* of the List as part 
> of the handleGlobalFailure method, to ensure they end up as 
> RootExceptionEntries.
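A minimal sketch of that ordering rule (illustrative names only, not the actual 
StateWithExecutionGraph code): global failures go to the head of the collection 
so they can only ever surface as root exception entries, while task failures stay 
at the tail where they may appear as concurrent exceptions.

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

final class FailureOrderingSketch {
    static final class Failure {}

    private final Deque<Failure> failureCollection = new ArrayDeque<>();

    void handleGlobalFailure(Failure failure) {
        // The head of the collection becomes the RootExceptionEntry, so a
        // global failure can never end up in the concurrent-exception list.
        failureCollection.addFirst(failure);
    }

    void handleTaskFailure(Failure failure) {
        // Task failures may legitimately appear as concurrent exceptions.
        failureCollection.addLast(failure);
    }
}
{code}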



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26088) Add Elasticsearch 8.0 support

2023-10-23 Thread Matheus Felisberto (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778872#comment-17778872
 ] 

Matheus Felisberto commented on FLINK-26088:


Thanks, [~martijnvisser]! I've found an issue with the Testcontainer I was 
running, so I just updated it with a new startup check strategy. I believe it 
will work fine now.

> Add Elasticsearch 8.0 support
> -
>
> Key: FLINK-26088
> URL: https://issues.apache.org/jira/browse/FLINK-26088
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Yuhao Bi
>Assignee: Matheus Felisberto
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> Since Elasticsearch 8.0 is officially released, I think it's time to consider 
> adding es8 connector support.
> The High Level REST Client we used for connection [is marked deprecated in es 
> 7.15.0|https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html].
>  Maybe we can migrate to use the new [Java API 
> Client|https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/8.0/index.html]
>  at this time.
> Elasticsearch 8.0 release notes: 
> [https://www.elastic.co/guide/en/elasticsearch/reference/8.0/release-notes-8.0.0.html]
> release highlights: 
> [https://www.elastic.co/guide/en/elasticsearch/reference/8.0/release-highlights.html]
> REST API compatibility: 
> https://www.elastic.co/guide/en/elasticsearch/reference/8.0/rest-api-compatibility.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33121) Failed precondition in JobExceptionsHandler due to concurrent global failures

2023-10-23 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated FLINK-33121:
---
Component/s: Runtime / Coordination

> Failed precondition in JobExceptionsHandler due to concurrent global failures
> -
>
> Key: FLINK-33121
> URL: https://issues.apache.org/jira/browse/FLINK-33121
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Panagiotis Garefalakis
>Priority: Major
>
> {{JobExceptionsHandler#createRootExceptionInfo}} *only* allows concurrent 
> exceptions that are local failures *--* otherwise it throws an assertion as part 
> of {{assertLocalExceptionInfo}}.
> However, there are rare cases where multiple concurrent global failures are 
> triggered and added to the failureCollection, before transitioning the job 
> state to Failed e.g., through {{StateWithExecutionGraph#handleGlobalFailure}} 
> of the AdaptiveScheduler.
> In this case the last one added will be the root and the next one will trigger 
> the assertion.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33007] Integrate autoscaler config validation into the general validator flow [flink-kubernetes-operator]

2023-10-23 Thread via GitHub


srpraneeth commented on PR #682:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/682#issuecomment-1776165715

   > Hey @srpraneeth ! Sorry for iterating too much on this :)
   > 
   > I think it would actually make sense to (instead of implementing the 
FlinkResourceValidator interface) simply embed this logic into the 
DefaultValidator.
   > 
   > We could then just keep the `validateAutoScalerFlinkConfiguration` as a 
static utility and call it from the `DefaultValidator` .
   > 
   > That way we benefit from the config handling already present in the 
DefaultValidator that properly handles defaults and config overrides. We would 
also avoid these costly conversions between configs/maps etc.
   > 
   > Furthermore, the config validation utility can then be moved to the 
autoscaler module where it logically belongs, and other autoscaler 
implementations can also use it in the future (not only Kubernetes). I think 
this would simplify the code a lot and make the whole thing better.
   
   
   @gyfora Thanks for checking the PR so thoroughly. Agreed, this makes complete 
sense. 
   I will move the autoscaler validation to the DefaultValidator and update the 
PR. 
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33058][formats] Add encoding option to Avro format [flink]

2023-10-23 Thread via GitHub


afedulov commented on code in PR #23395:
URL: https://github.com/apache/flink/pull/23395#discussion_r1369334106


##
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/AvroDeserializationSchema.java:
##
@@ -53,22 +55,25 @@ public class AvroDeserializationSchema implements 
DeserializationSchema {
  * schema.
  *
  * @param schema schema of produced records
+ * @param encoding Avro serialization approach to use for decoding
  * @return deserialized record in form of {@link GenericRecord}
  */
-public static AvroDeserializationSchema forGeneric(Schema 
schema) {
-return new AvroDeserializationSchema<>(GenericRecord.class, schema);
+public static AvroDeserializationSchema forGeneric(

Review Comment:
   This API is public and all changes need to be backwards compatible. 
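   One hedged way to satisfy that constraint, sketched below (not necessarily 
what the PR ends up doing): keep the existing factory signature untouched and let 
it delegate to the new overload with the historical default. The `AvroEncoding` 
enum name here is an assumption for illustration.

```java
// Sketch only: the old public signature is preserved and delegates to the
// new overload, defaulting to the historical binary encoding.
public static AvroDeserializationSchema<GenericRecord> forGeneric(Schema schema) {
    return forGeneric(schema, AvroEncoding.BINARY); // AvroEncoding is assumed
}

public static AvroDeserializationSchema<GenericRecord> forGeneric(
        Schema schema, AvroEncoding encoding) {
    return new AvroDeserializationSchema<>(GenericRecord.class, schema, encoding);
}
```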



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33058) Support for JSON-encoded Avro

2023-10-23 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov reassigned FLINK-33058:
-

Assignee: Dale Lane

> Support for JSON-encoded Avro
> -
>
> Key: FLINK-33058
> URL: https://issues.apache.org/jira/browse/FLINK-33058
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Dale Lane
>Assignee: Dale Lane
>Priority: Minor
>  Labels: avro, flink, flink-formats, pull-request-available
>
> Avro supports two serialization encoding methods: binary and JSON
> cf. [https://avro.apache.org/docs/1.11.1/specification/#encodings] 
> flink-avro currently has a hard-coded assumption that Avro data is 
> binary-encoded (and cannot process Avro data that has been JSON-encoded).
> I propose adding a new optional format option to flink-avro: *avro.encoding*
> It will support two options: 'binary' and 'json'. 
> If unset, it will default to 'binary' to maintain compatibility/consistency 
> with current behaviour. 
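For illustration, a Table API sketch of how the proposed option could be set once 
available. This assumes the option ships under the proposed name `avro.encoding`; 
the connector and its other options are placeholders.

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.FormatDescriptor;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.TableDescriptor;

final class AvroJsonEncodingExample {
    static TableDescriptor jsonAvroTable() {
        return TableDescriptor.forConnector("kafka") // placeholder connector/options
                .schema(Schema.newBuilder().column("id", DataTypes.BIGINT()).build())
                .option("topic", "orders")
                .option("properties.bootstrap.servers", "localhost:9092")
                .format(
                        FormatDescriptor.forFormat("avro")
                                // proposed option; leaving it unset means 'binary'
                                .option("avro.encoding", "json")
                                .build())
                .build();
    }
}
{code}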



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-32671) Document Externalized Declarative Resource Management

2023-10-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-32671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Morávek resolved FLINK-32671.
---
Resolution: Fixed

> Document Externalized Declarative Resource Management
> -
>
> Key: FLINK-32671
> URL: https://issues.apache.org/jira/browse/FLINK-32671
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Konstantin Knauf
>Assignee: David Morávek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32671) Document Externalized Declarative Resource Management

2023-10-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-32671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778840#comment-17778840
 ] 

David Morávek commented on FLINK-32671:
---

[~ConradJam] I've prepared more extensive documentation of the feature, but 
it lacks a Chinese translation. Would you be able to help there?

> Document Externalized Declarative Resource Management
> -
>
> Key: FLINK-32671
> URL: https://issues.apache.org/jira/browse/FLINK-32671
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Konstantin Knauf
>Assignee: David Morávek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32671) Document Externalized Declarative Resource Management

2023-10-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-32671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778838#comment-17778838
 ] 

David Morávek commented on FLINK-32671:
---

master: 8af765b4c9cd3519193b89dae40a8f8c2439c661

release-1.18: 1d17dc71cf98b6540a506c3c9670bbd0b47052a5

> Document Externalized Declarative Resource Management
> -
>
> Key: FLINK-32671
> URL: https://issues.apache.org/jira/browse/FLINK-32671
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Konstantin Knauf
>Assignee: David Morávek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32671) Document Externalized Declarative Resource Management

2023-10-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-32671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Morávek updated FLINK-32671:
--
Fix Version/s: 1.18.0

> Document Externalized Declarative Resource Management
> -
>
> Key: FLINK-32671
> URL: https://issues.apache.org/jira/browse/FLINK-32671
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Konstantin Knauf
>Assignee: David Morávek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32671] Document Externalized Declararative Resource Management… [flink]

2023-10-23 Thread via GitHub


dmvk merged PR #23570:
URL: https://github.com/apache/flink/pull/23570


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32671] Document Externalized Declararative Resource Management… [flink]

2023-10-23 Thread via GitHub


dmvk merged PR #23573:
URL: https://github.com/apache/flink/pull/23573


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33309] Add `-Djava.security.manager=allow` [flink]

2023-10-23 Thread via GitHub


snuyanzin commented on PR #23547:
URL: https://github.com/apache/flink/pull/23547#issuecomment-1776061785

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Draft: FLINK-28229 (CI) [flink]

2023-10-23 Thread via GitHub


afedulov commented on PR #23558:
URL: https://github.com/apache/flink/pull/23558#issuecomment-1776059765

   Only 
[this](https://github.com/apache/flink/pull/23558/commits/23eb7bf8e3d3054e53410ecd0a787d97e5bd50c3)
 commit is relevant, the rest is from [the fromElements 
migration](https://github.com/apache/flink/pull/23553).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32896][Runtime/Coordination] Incorrect `Map.computeIfAbsent(..., ...::new)` usage which misinterprets key as initial capacity [flink]

2023-10-23 Thread via GitHub


tzy-0x7cf commented on PR #23518:
URL: https://github.com/apache/flink/pull/23518#issuecomment-1776030176

   @flinkbot run azure
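   For context, the pitfall named in the PR title is easy to reproduce in 
isolation; a minimal, self-contained example (not tied to any specific Flink 
call site):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ComputeIfAbsentPitfall {
    public static void main(String[] args) {
        Map<Integer, List<String>> listsByKey = new HashMap<>();

        // BUG: ArrayList::new resolves to ArrayList(int initialCapacity), so the
        // map key (here 1024) is silently reused as the list's initial capacity.
        listsByKey.computeIfAbsent(1024, ArrayList::new);

        // Fix: ignore the key and call the no-arg constructor explicitly.
        listsByKey.computeIfAbsent(1024, k -> new ArrayList<>());
    }
}
```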


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32896][Runtime/Coordination] Incorrect `Map.computeIfAbsent(..., ...::new)` usage which misinterprets key as initial capacity [flink]

2023-10-23 Thread via GitHub


tzy-0x7cf commented on PR #23518:
URL: https://github.com/apache/flink/pull/23518#issuecomment-1776027042

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32896][Runtime/Coordination] Incorrect `Map.computeIfAbsent(..., ...::new)` usage which misinterprets key as initial capacity [flink]

2023-10-23 Thread via GitHub


tzy-0x7cf commented on PR #23518:
URL: https://github.com/apache/flink/pull/23518#issuecomment-1776024606

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32896][Runtime/Coordination] Incorrect `Map.computeIfAbsent(..., ...::new)` usage which misinterprets key as initial capacity [flink]

2023-10-23 Thread via GitHub


tzy-0x7cf commented on PR #23518:
URL: https://github.com/apache/flink/pull/23518#issuecomment-1776007749

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-25857] Add committer metrics to track the status of committables [flink]

2023-10-23 Thread via GitHub


tzulitai commented on code in PR #23555:
URL: https://github.com/apache/flink/pull/23555#discussion_r1369177945


##
flink-core/src/main/java/org/apache/flink/api/connector/sink2/TwoPhaseCommittingSink.java:
##
@@ -77,4 +97,42 @@ interface PrecommittingSinkWriter extends 
SinkWriter {
  */
 Collection prepareCommit() throws IOException, 
InterruptedException;
 }
+
+/** The interface exposes some runtime info for creating a {@link 
Committer}. */
+@PublicEvolving
+interface CommitterInitContext {

Review Comment:
   I'm wondering if it makes sense to have this and the existing sink writer's 
`InitContext` extend from a common interface, as it seems all methods except 
the metric group retrieval are shared. Having that would make sure that 
these shared methods don't diverge across the `InitContext`s in the future, 
which could be confusing for implementors given how tightly coupled the committer 
and sink writer are.



##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CommitRequestImpl.java:
##
@@ -32,16 +33,27 @@ public class CommitRequestImpl implements 
Committer.CommitRequest
 private CommT committable;
 private int numRetries;
 private CommitRequestState state;
+private SinkCommitterMetricGroup metricGroup;
 
-protected CommitRequestImpl(CommT committable) {
+protected CommitRequestImpl(CommT committable, SinkCommitterMetricGroup 
metricGroup) {
 this.committable = committable;
+this.metricGroup = metricGroup;
 state = CommitRequestState.RECEIVED;
+
+// Currently only the SubtaskCommittableManager uses this constructor 
to create a new
+// CommitRequestImpl, so we can increment the metrics here
+metricGroup.getNumCommittablesTotalCounter().inc();

Review Comment:
   this feels a bit hacky, as your comment already hints. My main issue with 
this is that it happens in the constructor: if we only look at this class 
locally, it's hard to tell whether we're incrementing the metric correctly. For 
example, a deserializer for `CommitRequestImpl`s could call this constructor and 
unintentionally increment the metric.
   
   I see that you've already wired in the `SinkCommitterMetricGroup` to the 
`SubtaskCommittableManager`. Can we just increment the total # of committables 
there in the `add` method then?
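   A sketch of that suggestion, reusing the names from the diff above (not 
necessarily the final code):

```java
// Counting in add(...) instead of the CommitRequestImpl constructor keeps the
// metric increment tied to a single, well-defined code path.
void add(CommT committable) {
    checkState(requests.size() < numExpectedCommittables, "Already received all committables.");
    requests.add(new CommitRequestImpl<>(committable, metricGroup));
    metricGroup.getNumCommittablesTotalCounter().inc();
}
```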



##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/SubtaskCommittableManager.java:
##
@@ -72,7 +86,7 @@ void add(CommittableWithLineage committable) {
 
 void add(CommT committable) {
 checkState(requests.size() < numExpectedCommittables, "Already 
received all committables.");
-requests.add(new CommitRequestImpl<>(committable));
+requests.add(new CommitRequestImpl<>(committable, metricGroup));

Review Comment:
   Increment total # of committables here instead of within `CommitRequestImpl` 
constructor.



##
flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SinkV2MetricsITCase.java:
##
@@ -116,6 +125,58 @@ public void testMetrics() throws Exception {
 jobClient.getJobExecutionResult().get();
 }
 
+@Test
+public void testCommitterMetrics() throws Exception {

Review Comment:
   👍 



##
flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SinkV2MetricsITCase.java:
##
@@ -116,6 +125,58 @@ public void testMetrics() throws Exception {
 jobClient.getJobExecutionResult().get();
 }
 
+@Test
+public void testCommitterMetrics() throws Exception {

Review Comment:
   👍 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32380] Support Java Records with PojoTypeInfo/Serializer [flink]

2023-10-23 Thread via GitHub


gyfora commented on PR #23490:
URL: https://github.com/apache/flink/pull/23490#issuecomment-1775977617

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32671] Document Externalized Declararative Resource Management… [flink]

2023-10-23 Thread via GitHub


flinkbot commented on PR #23573:
URL: https://github.com/apache/flink/pull/23573#issuecomment-1775918429

   
   ## CI report:
   
   * 2501511ca4eb9db37fdba0eb8c0a40886b9ce0f0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-32671] Document Externalized Declararative Resource Management… [flink]

2023-10-23 Thread via GitHub


dmvk opened a new pull request, #23573:
URL: https://github.com/apache/flink/pull/23573

   BP of https://github.com/apache/flink/pull/23570


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28303) Kafka SQL Connector loses data when restoring from a savepoint with a topic with empty partitions

2023-10-23 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-28303:

Affects Version/s: 1.17.1
   1.16.2
   kafka-3.0.0
   1.15.4

> Kafka SQL Connector loses data when restoring from a savepoint with a topic 
> with empty partitions
> -
>
> Key: FLINK-28303
> URL: https://issues.apache.org/jira/browse/FLINK-28303
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.4, 1.15.4, kafka-3.0.0, 1.16.2, 1.17.1
>Reporter: Robert Metzger
>Assignee: tanjialiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kafka-3.0.1, kafka-3.1.0
>
>
> Steps to reproduce:
> - Set up a Kafka topic with 10 partitions
> - produce records 0-9 into the topic
> - take a savepoint and stop the job
> - produce records 10-19 into the topic
> - restore the job from the savepoint.
> The job will usually be missing 2-4 records from 10-19.
> My assumption is that if a partition never had data (which is likely with 10 
> partitions and 10 records), the savepoint will only contain offsets for 
> partitions with data. 
> While the job was offline (and we've written record 10-19 into the topic), 
> all partitions got filled. Now, when Kafka comes online again, it will use 
> the "latest" offset for those partitions, skipping some data.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-28303) Kafka SQL Connector loses data when restoring from a savepoint with a topic with empty partitions

2023-10-23 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-28303.
---
Resolution: Fixed

> Kafka SQL Connector loses data when restoring from a savepoint with a topic 
> with empty partitions
> -
>
> Key: FLINK-28303
> URL: https://issues.apache.org/jira/browse/FLINK-28303
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.4
>Reporter: Robert Metzger
>Assignee: tanjialiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kafka-3.0.1, kafka-3.1.0
>
>
> Steps to reproduce:
> - Set up a Kafka topic with 10 partitions
> - produce records 0-9 into the topic
> - take a savepoint and stop the job
> - produce records 10-19 into the topic
> - restore the job from the savepoint.
> The job will usually be missing 2-4 records from 10-19.
> My assumption is that if a partition never had data (which is likely with 10 
> partitions and 10 records), the savepoint will only contain offsets for 
> partitions with data. 
> While the job was offline (and we've written record 10-19 into the topic), 
> all partitions got filled. Now, when Kafka comes online again, it will use 
> the "latest" offset for those partitions, skipping some data.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28303) Kafka SQL Connector loses data when restoring from a savepoint with a topic with empty partitions

2023-10-23 Thread Tzu-Li (Gordon) Tai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778783#comment-17778783
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-28303:
-

Merged.

apache/flink-connector-kafka:main - 54e3b70deb349538edba1ec2b051ed9d9f79b563
apache/flink-connector-kafka:v3.0 538e9c10463dbdf0942c8858678e98bf3522d566

> Kafka SQL Connector loses data when restoring from a savepoint with a topic 
> with empty partitions
> -
>
> Key: FLINK-28303
> URL: https://issues.apache.org/jira/browse/FLINK-28303
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.4
>Reporter: Robert Metzger
>Assignee: tanjialiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kafka-3.0.1, kafka-3.1.0
>
>
> Steps to reproduce:
> - Set up a Kafka topic with 10 partitions
> - produce records 0-9 into the topic
> - take a savepoint and stop the job
> - produce records 10-19 into the topic
> - restore the job from the savepoint.
> The job will usually be missing 2-4 records from 10-19.
> My assumption is that if a partition never had data (which is likely with 10 
> partitions and 10 records), the savepoint will only contain offsets for 
> partitions with data. 
> While the job was offline (and we've written record 10-19 into the topic), 
> all partitions got filled. Now, when Kafka comes online again, it will use 
> the "latest" offset for those partitions, skipping some data.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33090][checkpointing] CheckpointsCleaner clean individual chec… [flink]

2023-10-23 Thread via GitHub


yigress commented on PR #23425:
URL: https://github.com/apache/flink/pull/23425#issuecomment-1775778599

   @pnowojski the command flinkbot run azure didn't kick off new runs. I can't 
start a re-run either.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-23 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1369091466


##
flink-table/flink-table-planner/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java:
##
@@ -163,6 +163,9 @@
  *
  * Lines 5436 ~ 5442, Flink enables TIMESTAMP and TIMESTAMP_LTZ for first 
orderBy column in
  * matchRecognize at {@link SqlValidatorImpl#validateMatchRecognize}.
+ *
+ * Lines 1954 ~ 1977, Flink improves error message for functions without 
appropriate arguments in
+ * handleUnresolvedFunction at {@link 
SqlValidatorImpl#handleUnresolvedFunction}.

Review Comment:
   Done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-23 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1369088472


##
flink-table/flink-table-planner/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java:
##
@@ -163,6 +163,9 @@
  *
  * Lines 5436 ~ 5442, Flink enables TIMESTAMP and TIMESTAMP_LTZ for first 
orderBy column in
  * matchRecognize at {@link SqlValidatorImpl#validateMatchRecognize}.
+ *
+ * Lines 1954 ~ 1977, Flink improves error message for functions without 
appropriate arguments in
+ * handleUnresolvedFunction at {@link 
SqlValidatorImpl#handleUnresolvedFunction}.

Review Comment:
   Sounds good. Will do that.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-23 Thread via GitHub


snuyanzin commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1369081257


##
flink-table/flink-table-planner/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java:
##
@@ -163,6 +163,9 @@
  *
  * Lines 5436 ~ 5442, Flink enables TIMESTAMP and TIMESTAMP_LTZ for first 
orderBy column in
  * matchRecognize at {@link SqlValidatorImpl#validateMatchRecognize}.
+ *
+ * Lines 1954 ~ 1977, Flink improves error message for functions without 
appropriate arguments in
+ * handleUnresolvedFunction at {@link 
SqlValidatorImpl#handleUnresolvedFunction}.

Review Comment:
   A couple of comments:
   1. It's better to keep these records ordered: lower line numbers first.
   2. Since you've changed something before the other modifications, I'm pretty sure 
the line numbers of the others have changed => they should be updated as well.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-23 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1369078717


##
flink-table/flink-table-planner/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java:
##
@@ -1948,9 +1951,23 @@ public CalciteException handleUnresolvedFunction(
 if (unresolvedFunction instanceof SqlFunction) {
 final SqlOperandTypeChecker typeChecking =
 new AssignableOperandTypeChecker(argTypes, argNames);
-signature =
+final String invocation =
 typeChecking.getAllowedSignatures(
 unresolvedFunction, unresolvedFunction.getName());
+if (unresolvedFunction.getOperandTypeChecker() != null) {
+final String allowedSignatures =
+unresolvedFunction
+.getOperandTypeChecker()
+.getAllowedSignatures(
+unresolvedFunction, 
unresolvedFunction.getName());
+throw newValidationError(
+call,
+EXTRA_RESOURCE.validatorNoFunctionMatch(invocation, 
allowedSignatures));
+} else {
+signature =
+typeChecking.getAllowedSignatures(
+unresolvedFunction, 
unresolvedFunction.getName());
+}

Review Comment:
   Also added comments similar to the other modifications made. Once CALCITE-6069 
is implemented, we can remove this modification.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-23 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1369078717


##
flink-table/flink-table-planner/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java:
##
@@ -1948,9 +1951,23 @@ public CalciteException handleUnresolvedFunction(
 if (unresolvedFunction instanceof SqlFunction) {
 final SqlOperandTypeChecker typeChecking =
 new AssignableOperandTypeChecker(argTypes, argNames);
-signature =
+final String invocation =
 typeChecking.getAllowedSignatures(
 unresolvedFunction, unresolvedFunction.getName());
+if (unresolvedFunction.getOperandTypeChecker() != null) {
+final String allowedSignatures =
+unresolvedFunction
+.getOperandTypeChecker()
+.getAllowedSignatures(
+unresolvedFunction, 
unresolvedFunction.getName());
+throw newValidationError(
+call,
+EXTRA_RESOURCE.validatorNoFunctionMatch(invocation, 
allowedSignatures));
+} else {
+signature =
+typeChecking.getAllowedSignatures(
+unresolvedFunction, 
unresolvedFunction.getName());
+}

Review Comment:
   Also added comments like the other modifications made. Once it's implemented in 
Calcite, we can remove this modification.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-23 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1369077943


##
flink-table/flink-table-planner/src/main/java/org/apache/calcite/sql/validate/ExtraCalciteResource.java:
##
@@ -0,0 +1,11 @@
+package org.apache.calcite.sql.validate;
+
+import org.apache.calcite.runtime.Resources;
+
+public interface ExtraCalciteResource {

Review Comment:
   Added



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-23 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1369077661


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/TypeInferenceUtil.java:
##
@@ -209,6 +209,45 @@ public static TableException createUnexpectedException(
 cause);
 }
 
+/**
+ * @param argumentCount expected argument count

Review Comment:
   Added



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-20955) Refactor HBase Source in accordance with FLIP-27

2023-10-23 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778779#comment-17778779
 ] 

Ferenc Csaky commented on FLINK-20955:
--

Hey, thanks for looking into this! I did not know about this in-progress ticket; 
I thought about doing this once the externalized connector is released. So sure, 
I can commit to reviewing, and even help out if you think you need a hand, so 
feel free to ping me about it.

> Refactor HBase Source in accordance with FLIP-27
> 
>
> Key: FLINK-20955
> URL: https://issues.apache.org/jira/browse/FLINK-20955
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Reporter: Moritz Manner
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>
> The HBase connector source implementation should be updated in accordance 
> with [FLIP-27: Refactor Source 
> Interface|https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface].
> One source should map to one table in HBase. Users can specify which 
> column[families] to watch; each change in one of the columns triggers a 
> record with change type, table, column family, column, value, and timestamp.
> h3. Idea
> The new Flink HBase Source makes use of the internal [replication mechanism 
> of HBase|https://hbase.apache.org/book.html#_cluster_replication]. The Source 
> is registering at the HBase cluster and will receive all WAL edits written in 
> HBase. From those WAL edits the Source can create the DataStream. 
> h3. Split
> We're still not 100% sure which information a Split should contain. We have 
> the following possibilities: 
>  # There is only one Split per Source and the Split contains all the 
> necessary information to connect with HBase. The SourceReader which processes 
> the Split will receive all WAL edits for all tables and filter the relevant 
> edits. 
>  # There are multiple Splits per Source, each Split representing one HBase 
> Region to read from. This assumes that it is possible to only receive WAL 
> edits from a specific HBase Region and not receive all WAL edits. This would 
> be preferable as it allows parallel processing of multiple regions, but we 
> still need to figure out how this is possible.
> In both cases the Split will contain information about the HBase instance and 
> table. 
> h3. Split Enumerator
> Depending on which Split we'll decide on, the split enumerator will connect 
> to HBase and get all relevant regions or just create one Split.
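For illustration only, a sketch of what a per-region split (possibility 2 above) 
might carry; every name here is an assumption, not a committed design. A 
per-region split is what would enable the parallel processing mentioned above, 
provided WAL edits can indeed be filtered per region.

{code:java}
import org.apache.flink.api.connector.source.SourceSplit;

public final class HBaseSourceSplit implements SourceSplit {
    private final String clusterKey;        // e.g. ZooKeeper quorum of the HBase instance
    private final String table;             // the table this source maps to
    private final String encodedRegionName; // the region whose WAL edits to read

    public HBaseSourceSplit(String clusterKey, String table, String encodedRegionName) {
        this.clusterKey = clusterKey;
        this.table = table;
        this.encodedRegionName = encodedRegionName;
    }

    @Override
    public String splitId() {
        return table + "#" + encodedRegionName;
    }
}
{code}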



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33260] Allow user to provide a list of recoverable exceptions [flink-connector-aws]

2023-10-23 Thread via GitHub


iemre commented on code in PR #110:
URL: 
https://github.com/apache/flink-connector-aws/pull/110#discussion_r1367748921


##
flink-connector-aws/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/publisher/fanout/FanOutRecordPublisher.java:
##
@@ -289,21 +293,19 @@ private 
software.amazon.awssdk.services.kinesis.model.StartingPosition toSdkV2St
 Object marker = startingPosition.getStartingMarker();
 
 switch (startingPosition.getShardIteratorType()) {
-case AT_TIMESTAMP:
-{
-Preconditions.checkNotNull(
-marker, "StartingPosition AT_TIMESTAMP date marker 
is null.");
-builder.timestamp(((Date) marker).toInstant());
-break;
-}
+case AT_TIMESTAMP: {
+Preconditions.checkNotNull(
+marker, "StartingPosition AT_TIMESTAMP date marker is 
null.");
+builder.timestamp(((Date) marker).toInstant());
+break;

Review Comment:
   ignore accidental reformatting



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33260] Allow user to provide a list of recoverable exceptions [flink-connector-aws]

2023-10-23 Thread via GitHub


iemre commented on code in PR #110:
URL: 
https://github.com/apache/flink-connector-aws/pull/110#discussion_r1367748910


##
flink-connector-aws/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/publisher/fanout/FanOutRecordPublisher.java:
##
@@ -289,21 +293,19 @@ private 
software.amazon.awssdk.services.kinesis.model.StartingPosition toSdkV2St
 Object marker = startingPosition.getStartingMarker();
 
 switch (startingPosition.getShardIteratorType()) {
-case AT_TIMESTAMP:
-{
-Preconditions.checkNotNull(
-marker, "StartingPosition AT_TIMESTAMP date marker 
is null.");
-builder.timestamp(((Date) marker).toInstant());
-break;
-}
+case AT_TIMESTAMP: {
+Preconditions.checkNotNull(
+marker, "StartingPosition AT_TIMESTAMP date marker is 
null.");
+builder.timestamp(((Date) marker).toInstant());
+break;
+}
 case AT_SEQUENCE_NUMBER:
-case AFTER_SEQUENCE_NUMBER:
-{
-Preconditions.checkNotNull(
-marker, "StartingPosition *_SEQUENCE_NUMBER 
position is null.");
-builder.sequenceNumber(marker.toString());
-break;
-}
+case AFTER_SEQUENCE_NUMBER: {
+Preconditions.checkNotNull(
+marker, "StartingPosition *_SEQUENCE_NUMBER position 
is null.");
+builder.sequenceNumber(marker.toString());
+break;
+}

Review Comment:
   ignore accidental reformatting



##
flink-connector-aws/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/publisher/fanout/FanOutRecordPublisher.java:
##
@@ -83,11 +85,11 @@ public class FanOutRecordPublisher implements 
RecordPublisher {
  * over AWS SDK V2.x
  *
  * @param startingPosition the position in the shard to start consuming 
from
- * @param consumerArn the consumer ARN of the stream consumer
- * @param subscribedShard the shard to consumer from
- * @param kinesisProxy the proxy used to talk to Kinesis services
- * @param configuration the record publisher configuration
- * @param runningSupplier a callback to query if the consumer is still 
running
+ * @param consumerArn  the consumer ARN of the stream consumer
+ * @param subscribedShard  the shard to consumer from
+ * @param kinesisProxy the proxy used to talk to Kinesis services
+ * @param configurationthe record publisher configuration
+ * @param runningSupplier  a callback to query if the consumer is still 
running

Review Comment:
   ignore accidental reformatting



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33260) Custom Error Handling for Kinesis Consumer

2023-10-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33260:
---
Labels: pull-request-available  (was: )

> Custom Error Handling for Kinesis Consumer
> --
>
> Key: FLINK-33260
> URL: https://issues.apache.org/jira/browse/FLINK-33260
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Danny Cranmer
>Assignee: Emre Kartoglu
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.2.0
>
>
> Background
> The Kinesis Consumer exposes various configuration that allows the user to 
> define retry and backoff strategies when dealing with errors. However, the 
> configuration does not allow the user to configure which errors are 
> retryable, or different strategies for different errors. The error handling 
> logic is hard coded within the connector. Over time we discover errors that 
> should be retryable that are not, for example KDS throwing 500 on 
> SubscribeToShare or transient DNS issues. When these arise the user can 
> either fork-fix the connector or log an issue and wait for the next version.
> h3. Scope
> Add the ability for the user to define retry/backoff strategy per error. This 
> could be achieved using flexible configuration keys, or allowing the user to 
> register their own retry strategies on the connector
>  
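As a purely hypothetical illustration of the "flexible configuration keys" option 
from the scope above (neither the key nor its format exists in the connector 
today, and both are assumptions):

{code:java}
import java.util.Properties;

final class RecoverableExceptionsConfigExample {
    static Properties consumerConfig() {
        Properties config = new Properties();
        // Hypothetical key: a user-supplied list of exception classes that the
        // consumer should treat as retryable instead of failing the job.
        config.setProperty(
                "flink.shard.consumer.recoverable-exceptions",
                "java.net.UnknownHostException");
        return config;
    }
}
{code}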



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-28303] Support LatestOffsetsInitializer to avoid latest-offset strategy lose data [flink-connector-kafka]

2023-10-23 Thread via GitHub


tzulitai commented on code in PR #52:
URL: 
https://github.com/apache/flink-connector-kafka/pull/52#discussion_r1369049853


##
flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/split/KafkaPartitionSplit.java:
##
@@ -35,17 +35,18 @@
 public class KafkaPartitionSplit implements SourceSplit {
 public static final long NO_STOPPING_OFFSET = Long.MIN_VALUE;
 // Indicating the split should consume from the latest.
-public static final long LATEST_OFFSET = -1;
+// @deprecated Only be used for compatibility with the history state, see 
FLINK-28303
+@Deprecated public static final long LATEST_OFFSET = -1;
 // Indicating the split should consume from the earliest.
 public static final long EARLIEST_OFFSET = -2;
 // Indicating the split should consume from the last committed offset.
 public static final long COMMITTED_OFFSET = -3;
 
 // Valid special starting offsets
 public static final Set VALID_STARTING_OFFSET_MARKERS =
-new HashSet<>(Arrays.asList(EARLIEST_OFFSET, LATEST_OFFSET, 
COMMITTED_OFFSET));
+new HashSet<>(Arrays.asList(EARLIEST_OFFSET, COMMITTED_OFFSET));
 public static final Set VALID_STOPPING_OFFSET_MARKERS =
-new HashSet<>(Arrays.asList(LATEST_OFFSET, COMMITTED_OFFSET, 
NO_STOPPING_OFFSET));
+new HashSet<>(Arrays.asList(COMMITTED_OFFSET, NO_STOPPING_OFFSET));

Review Comment:
   I'll apply a hotfix.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-23 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1369044944


##
flink-table/flink-table-planner/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java:
##
@@ -1948,9 +1951,23 @@ public CalciteException handleUnresolvedFunction(
 if (unresolvedFunction instanceof SqlFunction) {
 final SqlOperandTypeChecker typeChecking =
 new AssignableOperandTypeChecker(argTypes, argNames);
-signature =
+final String invocation =
 typeChecking.getAllowedSignatures(
 unresolvedFunction, unresolvedFunction.getName());
+if (unresolvedFunction.getOperandTypeChecker() != null) {
+final String allowedSignatures =
+unresolvedFunction
+.getOperandTypeChecker()
+.getAllowedSignatures(
+unresolvedFunction, 
unresolvedFunction.getName());
+throw newValidationError(
+call,
+EXTRA_RESOURCE.validatorNoFunctionMatch(invocation, 
allowedSignatures));
+} else {
+signature =
+typeChecking.getAllowedSignatures(
+unresolvedFunction, 
unresolvedFunction.getName());
+}

Review Comment:
   Created Calcite Ticket - https://issues.apache.org/jira/browse/CALCITE-6069



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28303] Support LatestOffsetsInitializer to avoid latest-offset strategy lose data [flink-connector-kafka]

2023-10-23 Thread via GitHub


tzulitai commented on code in PR #52:
URL: 
https://github.com/apache/flink-connector-kafka/pull/52#discussion_r1369043875


##
flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/split/KafkaPartitionSplit.java:
##
@@ -35,17 +35,18 @@
 public class KafkaPartitionSplit implements SourceSplit {
 public static final long NO_STOPPING_OFFSET = Long.MIN_VALUE;
 // Indicating the split should consume from the latest.
-public static final long LATEST_OFFSET = -1;
+// @deprecated Only be used for compatibility with the history state, see 
FLINK-28303
+@Deprecated public static final long LATEST_OFFSET = -1;
 // Indicating the split should consume from the earliest.
 public static final long EARLIEST_OFFSET = -2;
 // Indicating the split should consume from the last committed offset.
 public static final long COMMITTED_OFFSET = -3;
 
 // Valid special starting offsets
 public static final Set VALID_STARTING_OFFSET_MARKERS =
-new HashSet<>(Arrays.asList(EARLIEST_OFFSET, LATEST_OFFSET, 
COMMITTED_OFFSET));
+new HashSet<>(Arrays.asList(EARLIEST_OFFSET, COMMITTED_OFFSET));
 public static final Set VALID_STOPPING_OFFSET_MARKERS =
-new HashSet<>(Arrays.asList(LATEST_OFFSET, COMMITTED_OFFSET, 
NO_STOPPING_OFFSET));
+new HashSet<>(Arrays.asList(COMMITTED_OFFSET, NO_STOPPING_OFFSET));

Review Comment:
   These changes need to be reverted: checkpoints taken prior to this change may 
still store the `-1` latest marker, so when restoring from them the offset 
validation will fail.
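   Concretely, a sketch mirroring the removed lines: the deprecated marker has to 
stay in both validation sets, e.g.

```java
// LATEST_OFFSET (-1) is deprecated for new splits but must remain valid,
// since splits restored from pre-change checkpoints may still carry it.
public static final Set<Long> VALID_STARTING_OFFSET_MARKERS =
        new HashSet<>(Arrays.asList(EARLIEST_OFFSET, LATEST_OFFSET, COMMITTED_OFFSET));
public static final Set<Long> VALID_STOPPING_OFFSET_MARKERS =
        new HashSet<>(Arrays.asList(LATEST_OFFSET, COMMITTED_OFFSET, NO_STOPPING_OFFSET));
```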



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33090][checkpointing] CheckpointsCleaner clean individual chec… [flink]

2023-10-23 Thread via GitHub


yigress commented on PR #23425:
URL: https://github.com/apache/flink/pull/23425#issuecomment-1775637374

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32671] Document Externalized Declararative Resource Management… [flink]

2023-10-23 Thread via GitHub


zentol commented on PR #23570:
URL: https://github.com/apache/flink/pull/23570#issuecomment-1775545638

   Please remember to update the chinese docs and backport the commits to 
release-1.18


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32661][sql-gateway] Fix unstable OperationRelatedITCase.testOperationRelatedApis [flink]

2023-10-23 Thread via GitHub


Jiabao-Sun commented on PR #23564:
URL: https://github.com/apache/flink/pull/23564#issuecomment-1775527798

   Hi @XComp, could you help with it as well?
   Thanks :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32850][flink-runtime][JUnit5 Migration] The io.disk package of flink-runtime module [flink]

2023-10-23 Thread via GitHub


flinkbot commented on PR #23572:
URL: https://github.com/apache/flink/pull/23572#issuecomment-1775520889

   
   ## CI report:
   
   * 4d786334495e2e626407ac60891c96308f7eec88 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32850][flink-runtime][JUnit5 Migration] The io.disk package of flink-runtime module [flink]

2023-10-23 Thread via GitHub


Jiabao-Sun commented on PR #23572:
URL: https://github.com/apache/flink/pull/23572#issuecomment-1775518608

   Hi @XComp, @1996fanrui, @RocMarshal.
   Could you help review this PR when you have time?
   Many thanks for that.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-32850][flink-runtime][JUnit5 Migration] The io.disk package of flink-runtime module [flink]

2023-10-23 Thread via GitHub


Jiabao-Sun opened a new pull request, #23572:
URL: https://github.com/apache/flink/pull/23572

   
   
   ## What is the purpose of the change
   
   [FLINK-32850][flink-runtime][JUnit5 Migration] The io.disk package of 
flink-runtime module
   
   ## Brief change log
   
   [FLINK-32850][flink-runtime][JUnit5 Migration] The io.disk package of 
flink-runtime module
   
   
   ## Verifying this change
   
   This change is already covered by existing tests
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33278) RemotePekkoRpcActorTest.failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable fails on AZP

2023-10-23 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778717#comment-17778717
 ] 

Jiabao Sun commented on FLINK-33278:


The hiccup interval in the logs is nearly 10 seconds, between 01:02:10.686 and 01:02:19.074, so I suspect an RPC timeout. As for the specific reason, it needs further investigation.
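
If the timeout theory holds, one mitigation the error text itself suggests is raising {{pekko.ask.timeout}}; a minimal sketch (the key name is taken from the quoted timeout message below, the value is purely illustrative):

{code:java}
import org.apache.flink.configuration.Configuration;

// Raise the RPC ask timeout that the quoted TimeoutException suggests tuning.
Configuration config = new Configuration();
config.setString("pekko.ask.timeout", "60 s");
{code}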

The breakpoint location:
 !screenshot-4.png! 

Stack trace:
{code:java}
1825 [ForkJoinPool-1-worker-1] INFO  
org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Trying to start 
actor system, external address localhost:0, bind address 0.0.0.0:0.
3007 [flink-pekko.actor.default-dispatcher-5] INFO  
org.apache.pekko.event.slf4j.Slf4jLogger [] - Slf4jLogger started
3041 [flink-pekko.actor.default-dispatcher-5] INFO  
org.apache.pekko.remote.RemoteActorRefProvider [] - Pekko Cluster not in use - 
enabling unsafe features anyway because 
`pekko.remote.use-unsafe-remote-features-outside-cluster` has been enabled.
3042 [flink-pekko.actor.default-dispatcher-5] INFO  
org.apache.pekko.remote.Remoting [] - Starting remoting
3218 [flink-pekko.actor.default-dispatcher-5] INFO  
org.apache.pekko.remote.Remoting [] - Remoting started; listening on addresses 
:[pekko.tcp://flink@localhost:57012]
3534 [ForkJoinPool-1-worker-1] INFO  
org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Actor system 
started at pekko.tcp://flink@localhost:57012
3571 [ForkJoinPool-1-worker-1] INFO  
org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Trying to start 
actor system, external address localhost:0, bind address 0.0.0.0:0.
3610 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.event.slf4j.Slf4jLogger [] - Slf4jLogger started
3616 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.RemoteActorRefProvider [] - Pekko Cluster not in use - 
enabling unsafe features anyway because 
`pekko.remote.use-unsafe-remote-features-outside-cluster` has been enabled.
3617 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.Remoting [] - Starting remoting
3627 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.Remoting [] - Remoting started; listening on addresses 
:[pekko.tcp://flink@localhost:57013]
3665 [ForkJoinPool-1-worker-1] INFO  
org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Actor system 
started at pekko.tcp://flink@localhost:57013
3690 [ForkJoinPool-1-worker-1] INFO  org.apache.flink.util.TestLoggerExtension 
[] - 

Test 
org.apache.flink.runtime.rpc.pekko.RemotePekkoRpcActorTest.failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable[failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable()]
 is running.

3696 [ForkJoinPool-1-worker-1] INFO  
org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Trying to start 
actor system, external address localhost:0, bind address 0.0.0.0:0.
3726 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.event.slf4j.Slf4jLogger [] - Slf4jLogger started
3730 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.RemoteActorRefProvider [] - Pekko Cluster not in use - 
enabling unsafe features anyway because 
`pekko.remote.use-unsafe-remote-features-outside-cluster` has been enabled.
3730 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.Remoting [] - Starting remoting
3746 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.Remoting [] - Remoting started; listening on addresses 
:[pekko.tcp://flink@localhost:57014]
3789 [ForkJoinPool-1-worker-1] INFO  
org.apache.flink.runtime.rpc.pekko.PekkoRpcServiceUtils [] - Actor system 
started at pekko.tcp://flink@localhost:57014
3831 [ForkJoinPool-1-worker-1] INFO  
org.apache.flink.runtime.rpc.pekko.PekkoRpcService [] - Starting RPC endpoint 
for 
org.apache.flink.runtime.rpc.pekko.PekkoRpcActorTest$SerializedValueRespondingEndpoint
 at pekko://flink/user/rpc/079e1e41-10aa-4e64-95fb-1ce2a21c11b5 .
8669 [ForkJoinPool-1-worker-1] INFO  
org.apache.flink.runtime.rpc.pekko.PekkoRpcService [] - Stopping Pekko RPC 
service.
8772 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.actor.CoordinatedShutdown [] - Running CoordinatedShutdown 
with reason [ActorSystemTerminateReason]
8801 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.RemoteActorRefProvider$RemotingTerminator [] - Shutting 
down remote daemon.
8801 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.RemoteActorRefProvider$RemotingTerminator [] - Remote 
daemon shut down; proceeding with flushing remote transports.
8864 [flink-pekko.actor.default-dispatcher-6] INFO  
org.apache.pekko.remote.RemoteActorRefProvider$RemotingTerminator [] - Remoting 
shut down.
8892 [flink-pekk

[jira] [Updated] (FLINK-33278) RemotePekkoRpcActorTest.failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable fails on AZP

2023-10-23 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-33278:
---
Attachment: screenshot-4.png

> RemotePekkoRpcActorTest.failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable
>  fails on AZP
> --
>
> Key: FLINK-33278
> URL: https://issues.apache.org/jira/browse/FLINK-33278
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>
> This build 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53740&view=logs&j=0e7be18f-84f2-53f0-a32d-4a5e4a174679&t=7c1d86e3-35bd-5fd5-3b7c-30c126a78702&l=6563]
> fails as
> {noformat}
> Oct 15 01:02:20 Multiple Failures (1 failure)
> Oct 15 01:02:20 -- failure 1 --
> Oct 15 01:02:20 [Any cause is instance of class 'class 
> org.apache.flink.runtime.rpc.exceptions.RecipientUnreachableException'] 
> Oct 15 01:02:20 Expecting any element of:
> Oct 15 01:02:20   [java.util.concurrent.CompletionException: 
> java.util.concurrent.TimeoutException: Invocation of 
> [RemoteRpcInvocation(SerializedValueRespondingGateway.getSerializedValue())] 
> at recipient 
> [pekko.tcp://flink@localhost:38231/user/rpc/8c211f34-41e5-4efe-93bd-8eca6c590a7f]
>  timed out. This is usually caused by: 1) Pekko failed sending the message 
> silently, due to problems like oversized payload or serialization failures. 
> In that case, you should find detailed error information in the logs. 2) The 
> recipient needs more time for responding, due to problems like slow machines 
> or network jitters. In that case, you can try to increase pekko.ask.timeout.
> Oct 15 01:02:20   at 
> java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
> Oct 15 01:02:20   at 
> java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
> Oct 15 01:02:20   at 
> org.apache.flink.runtime.rpc.pekko.RemotePekkoRpcActorTest.lambda$failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable$1(RemotePekkoRpcActorTest.java:168)
> Oct 15 01:02:20   ...(63 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed),
> ...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33171][table planner] Consistent implicit type coercion support for equal and non-equal comparisons for codegen [flink]

2023-10-23 Thread via GitHub


LadyForest commented on code in PR #23478:
URL: https://github.com/apache/flink/pull/23478#discussion_r1368880765


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala:
##
@@ -345,44 +345,86 @@ object ScalarOperatorGens {
 }
   }
 
-  def generateEquals(
+  private def wrapExpressionIfNonEq(
+  isNonEq: Boolean,
+  equalsExpr: GeneratedExpression,
+  resultType: LogicalType): GeneratedExpression = {
+if (isNonEq) {
+  GeneratedExpression(
+s"(!${equalsExpr.resultTerm})",
+equalsExpr.nullTerm,
+equalsExpr.code,
+resultType)
+} else {
+  equalsExpr
+}
+  }
+
+  private def generateEqualAndNonEqual(
   ctx: CodeGeneratorContext,
   left: GeneratedExpression,
   right: GeneratedExpression,
+  operator: String,
   resultType: LogicalType): GeneratedExpression = {
+
 checkImplicitConversionValidity(left, right)
+
+val nonEq = operator match {
+  case "==" => false
+  case "!=" => true
+  case _ => throw new CodeGenException(s"Unsupported boolean comparison 
'$operator'.")
+}
 val canEqual = isInteroperable(left.resultType, right.resultType)
+
 if (isCharacterString(left.resultType) && 
isCharacterString(right.resultType)) {
-  generateOperatorIfNotNull(ctx, resultType, left, right)(
-(leftTerm, rightTerm) => s"$leftTerm.equals($rightTerm)")
+  wrapExpressionIfNonEq(

Review Comment:
   Hi @fengjiajie, I have reconsidered and realized that equal and non-equal comparisons of string types cannot go directly through the `wrapExpressionIfNonEq` function. For example, in TPC-DS Q19, the generated join operator's non-equal condition is as follows:
   
   ```java
   // correct: generated by
   // generateOperatorIfNotNull(ctx, resultType, left, right)((leftTerm, rightTerm) => s"$leftTerm.equals($rightTerm)")
   
   isNull$854 = isNull$851 || false || false;
   result$855 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;
   if (!isNull$854) {
       result$855 = org.apache.flink.table.data.binary.BinaryStringDataUtil.substringSQL(field$853, ((int) 1), ((int) 5));
       isNull$854 = (result$855 == null);
   }
   
   isNull$858 = isNull$856 || false || false;
   result$859 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;
   if (!isNull$858) {
       result$859 = org.apache.flink.table.data.binary.BinaryStringDataUtil.substringSQL(field$857, ((int) 1), ((int) 5));
       isNull$858 = (result$859 == null);
   }
   
   isNull$860 = isNull$854 || isNull$858;
   result$861 = false;
   if (!isNull$860) {
       result$861 = !result$855.equals(result$859);
   }
   
   return result$861;
   ```
   
   ```java
   // wrong: generated by wrapExpressionIfNonEq(...)
   
   isNull$854 = isNull$851 || false || false;
   result$855 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;
   if (!isNull$854) {
       result$855 = org.apache.flink.table.data.binary.BinaryStringDataUtil.substringSQL(field$853, ((int) 1), ((int) 5));
       isNull$854 = (result$855 == null);
   }
   
   isNull$858 = isNull$856 || false || false;
   result$859 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;
   if (!isNull$858) {
       result$859 = org.apache.flink.table.data.binary.BinaryStringDataUtil.substringSQL(field$857, ((int) 1), ((int) 5));
       isNull$858 = (result$859 == null);
   }
   
   isNull$860 = isNull$854 || isNull$858;
   result$861 = false;
   if (!isNull$860) {
       result$861 = result$855.equals(result$859);
   }
   
   return (!result$861); // this is wrong: negating outside also flips the false default for NULL inputs
   ```
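   
   To make the difference concrete, here is a minimal hand-written sketch of the two patterns (method and variable names are illustrative, not taken from the codegen): under SQL filter semantics a NULL operand should make the non-equal condition evaluate to false, which only the first pattern preserves.
   
   ```java
   public class NonEqNullSemantics {
       static boolean negateInsideGuard(String left, String right) {
           boolean isNull = (left == null) || (right == null);
           boolean result = false;
           if (!isNull) {
               result = !left.equals(right); // negate the comparison itself
           }
           return result; // stays false for NULL inputs, as intended
       }
   
       static boolean negateWholeExpression(String left, String right) {
           boolean isNull = (left == null) || (right == null);
           boolean result = false;
           if (!isNull) {
               result = left.equals(right);
           }
           return !result; // also flips the false default: NULL rows become true
       }
   
       public static void main(String[] args) {
           System.out.println(negateInsideGuard(null, "a"));     // false (expected)
           System.out.println(negateWholeExpression(null, "a")); // true (the bug)
       }
   }
   ```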



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32563] Allow to execute sanity checks only with Flink version that connectors were built against [flink-connector-shared-utils]

2023-10-23 Thread via GitHub


echauchot commented on code in PR #23:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/23#discussion_r1368834849


##
.github/workflows/ci.yml:
##
@@ -41,8 +41,8 @@ on:
 required: false
 type: number
 default: 50
-  run_dependency_convergence:

Review Comment:
   Yes, I totally agree. As I said in [this comment](https://github.com/apache/flink-connector-shared-utils/pull/23#issuecomment-1775394910), I merged the two test parameters because I misunderstood a comment you made on Slack. If we separate them again, there is no breaking change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32563] Allow to execute sanity checks only with Flink version that connectors were built against [flink-connector-shared-utils]

2023-10-23 Thread via GitHub


echauchot commented on code in PR #23:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/23#discussion_r1368847336


##
.github/workflows/_testing.yml:
##
@@ -25,19 +25,25 @@ jobs:
   specific-version:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16.1
+  flink_version: 1.17.1
   connector_branch: ci_utils
-  snapshot-version:
+  snapshot-without-sanity-checks:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16-SNAPSHOT
-  connector_branch: ci_utils
-  disable-convergence:
+  flink_version: 1.19-SNAPSHOT
+  run_sanity_checks: false
+  non-main-version_without-sanity-checks:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16.1
-  connector_branch: ci_utils

Review Comment:
   Just for testing: as the workflow is called "testing", I thought I could use it to test my setup :smile: 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32563] Allow to execute sanity checks only with Flink version that connectors were built against [flink-connector-shared-utils]

2023-10-23 Thread via GitHub


echauchot commented on code in PR #23:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/23#discussion_r1368847336


##
.github/workflows/_testing.yml:
##
@@ -25,19 +25,25 @@ jobs:
   specific-version:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16.1
+  flink_version: 1.17.1
   connector_branch: ci_utils
-  snapshot-version:
+  snapshot-without-sanity-checks:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16-SNAPSHOT
-  connector_branch: ci_utils
-  disable-convergence:
+  flink_version: 1.19-SNAPSHOT
+  run_sanity_checks: false
+  non-main-version_without-sanity-checks:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16.1
-  connector_branch: ci_utils

Review Comment:
   Just for testing: as the workflow is called "testing", I thought I could use it to test my setup :smile: 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32563] Allow to execute sanity checks only with Flink version that connectors were built against [flink-connector-shared-utils]

2023-10-23 Thread via GitHub


echauchot commented on code in PR #23:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/23#discussion_r1368845647


##
.github/workflows/ci.yml:
##
@@ -41,8 +41,8 @@ on:
 required: false
 type: number
 default: 50
-  run_dependency_convergence:
-description: "Whether to run the dependency convergence check"
+  run_sanity_checks:

Review Comment:
   OK. This will be solved by separating the test parameters (see the other comments).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33171][table planner] Consistent implicit type coercion support for equal and non-equal comparisons for codegen [flink]

2023-10-23 Thread via GitHub


fengjiajie commented on PR #23478:
URL: https://github.com/apache/flink/pull/23478#issuecomment-1775428180

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33278) RemotePekkoRpcActorTest.failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable fails on AZP

2023-10-23 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778708#comment-17778708
 ] 

Matthias Pohl commented on FLINK-33278:
---

Thanks for looking into this, [~jiabao.sun]. I am not able to follow what you're doing: stopping the code execution at the lines that you suggest in your screenshots doesn't make the test fail for me. Generally speaking, if you stop the execution at the "right" place in the code, it becomes quite likely that you generate a timeout.

That's also most likely what we observed in the logs, where the machine didn't continue processing for some time (based on the logged timestamps).

> RemotePekkoRpcActorTest.failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable
>  fails on AZP
> --
>
> Key: FLINK-33278
> URL: https://issues.apache.org/jira/browse/FLINK-33278
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> This build 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53740&view=logs&j=0e7be18f-84f2-53f0-a32d-4a5e4a174679&t=7c1d86e3-35bd-5fd5-3b7c-30c126a78702&l=6563]
> fails as
> {noformat}
> Oct 15 01:02:20 Multiple Failures (1 failure)
> Oct 15 01:02:20 -- failure 1 --
> Oct 15 01:02:20 [Any cause is instance of class 'class 
> org.apache.flink.runtime.rpc.exceptions.RecipientUnreachableException'] 
> Oct 15 01:02:20 Expecting any element of:
> Oct 15 01:02:20   [java.util.concurrent.CompletionException: 
> java.util.concurrent.TimeoutException: Invocation of 
> [RemoteRpcInvocation(SerializedValueRespondingGateway.getSerializedValue())] 
> at recipient 
> [pekko.tcp://flink@localhost:38231/user/rpc/8c211f34-41e5-4efe-93bd-8eca6c590a7f]
>  timed out. This is usually caused by: 1) Pekko failed sending the message 
> silently, due to problems like oversized payload or serialization failures. 
> In that case, you should find detailed error information in the logs. 2) The 
> recipient needs more time for responding, due to problems like slow machines 
> or network jitters. In that case, you can try to increase pekko.ask.timeout.
> Oct 15 01:02:20   at 
> java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
> Oct 15 01:02:20   at 
> java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
> Oct 15 01:02:20   at 
> org.apache.flink.runtime.rpc.pekko.RemotePekkoRpcActorTest.lambda$failsRpcResultImmediatelyIfRemoteRpcServiceIsNotAvailable$1(RemotePekkoRpcActorTest.java:168)
> Oct 15 01:02:20   ...(63 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed),
> ...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32563] Allow to execute sanity checks only with Flink version that connectors were built against [flink-connector-shared-utils]

2023-10-23 Thread via GitHub


echauchot commented on code in PR #23:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/23#discussion_r1368834849


##
.github/workflows/ci.yml:
##
@@ -41,8 +41,8 @@ on:
 required: false
 type: number
 default: 50
-  run_dependency_convergence:

Review Comment:
   Yes, I totally agree. As I said in [this comment](https://github.com/apache/flink-connector-shared-utils/pull/23#issuecomment-1775394910), I merged the two test parameters because I misunderstood a comment you made on Slack. If we separate them again, there is no breaking change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32563] Allow to execute sanity checks only with Flink version that connectors were built against [flink-connector-shared-utils]

2023-10-23 Thread via GitHub


echauchot commented on code in PR #23:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/23#discussion_r1368834849


##
.github/workflows/ci.yml:
##
@@ -41,8 +41,8 @@ on:
 required: false
 type: number
 default: 50
-  run_dependency_convergence:

Review Comment:
   Yes, I totally agree. As I said in [this comment](https://github.com/apache/flink-connector-shared-utils/pull/23#issuecomment-1775394910), I merged the two test parameters because I misunderstood a comment you made on Slack. If we separate them again, there is no breaking change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32563] Allow to execute sanity checks only with Flink version that connectors were built against [flink-connector-shared-utils]

2023-10-23 Thread via GitHub


echauchot commented on PR #23:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/23#issuecomment-1775394910

   > Generally speaking I'm not too fond of a general "turn some things off" 
switch. I get the idea and benefits it would bring, but IMO we should be very 
explicit as to what checks have been disabled. Because of that I'd rather see a 
dedicated skip-archunit-tests parameter.
   
   This is what I wanted to do at first, but as you said in a Slack discussion about the archunit tests:
   > We could actually roll this into the run_dependency_convergence option because it has fundamentally the same cause.
   
   I merged the two. But it seems I misunderstood what you meant. I actually also prefer to keep them separate. So, no problem, I'll separate them again.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33301) Add Java and Maven version checks in the bash script of Flink release process

2023-10-23 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778692#comment-17778692
 ] 

Matthias Pohl commented on FLINK-33301:
---

Aren't those the ones that were created and deployed with 
{{tools/releasing/deploy_staging_jars.sh}}?

> Add Java and Maven version checks in the bash script of Flink release process
> -
>
> Key: FLINK-33301
> URL: https://issues.apache.org/jira/browse/FLINK-33301
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Minor
>  Labels: pull-request-available
>
> During the release, Flink requires specific versions of Java and Maven [1]. It 
> makes sense to check those versions at the very beginning of the relevant bash 
> scripts so they fail fast, which improves efficiency.
>  
> [1][https://lists.apache.org/thread/fbdl2w6wjmwk55y94ml91bpnhmh4rnm0|https://lists.apache.org/thread/fbdl2w6wjmwk55y94ml91bpnhmh4rnm0]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33342][ci] Enables Java17 as a target version for the Java 17 CI jobs [flink]

2023-10-23 Thread via GitHub


XComp commented on code in PR #23571:
URL: https://github.com/apache/flink/pull/23571#discussion_r1368763712


##
tools/azure-pipelines/build-apache-repo.yml:
##
@@ -146,7 +146,7 @@ stages:
 name: Default
   e2e_pool_definition:
 vmImage: 'ubuntu-20.04'
-  environment: PROFILE="-Dflink.hadoop.version=2.10.2 -Dscala-2.12 
-Djdk11 -Djdk17"
+  environment: PROFILE="-Dflink.hadoop.version=2.10.2 -Dscala-2.12 
-Djdk11 -Djdk17 -Pjava17-target"

Review Comment:
   These variables are used to filter version-specific code in the shell 
scripts.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-29492] Return Kafka producer to the pool when the Kafka sink is not the end of the chain [flink]

2023-10-23 Thread via GitHub


MartijnVisser commented on PR #21226:
URL: https://github.com/apache/flink/pull/21226#issuecomment-1775316431

   > It does not look like there is agreement yet on what subsequent work is required for this.
   
   No, that comment reflects that the reported bug shouldn't block the Flink 1.15.3 release that was being worked on at that moment.
   
   In order to move forward with this issue, this should be validated on Flink 
1.17/1.18 with the latest version of the Flink Kafka connector. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-29492] Return Kafka producer to the pool when the Kafka sink is not the end of the chain [flink]

2023-10-23 Thread via GitHub


MartijnVisser commented on PR #21226:
URL: https://github.com/apache/flink/pull/21226#issuecomment-1775313525

   > given that e.g. GCP Dataproc is currently supporting 1.15.3 as the latest 
version I think it's worth considering porting that to 1.15 at least.
   
   Version 1.15 is no longer supported in the Flink community, so we can't do 
that


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-29492] Return Kafka producer to the pool when the Kafka sink is not the end of the chain [flink]

2023-10-23 Thread via GitHub


davidradl commented on PR #21226:
URL: https://github.com/apache/flink/pull/21226#issuecomment-1775301992

   @Wosin I noticed in [the 
issue](https://issues.apache.org/jira/projects/FLINK/issues/FLINK-29492?filter=allopenissues)
 it says : 
   ```
   [Fabian 
Paul](https://issues.apache.org/jira/secure/ViewProfile.jspa?name=fpaul) , 
sorry for my late response.
   
   I think this bug should not be contained in the 1.15.3 release. I changed 
some methods in the PublicEvolving class.
   ```
   
   It does not look like there is agreement yet on what subsequent work is required for this. Could you articulate what is still required in the Jira, so we can start a discussion and reach consensus on what to do next, please? 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32563] Allow to execute sanity checks only with Flink version that connectors were built against [flink-connector-shared-utils]

2023-10-23 Thread via GitHub


zentol commented on code in PR #23:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/23#discussion_r1368732933


##
.github/workflows/_testing.yml:
##
@@ -25,19 +25,25 @@ jobs:
   specific-version:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16.1
+  flink_version: 1.17.1
   connector_branch: ci_utils
-  snapshot-version:
+  snapshot-without-sanity-checks:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16-SNAPSHOT
-  connector_branch: ci_utils
-  disable-convergence:
+  flink_version: 1.19-SNAPSHOT
+  run_sanity_checks: false
+  non-main-version_without-sanity-checks:
 uses: ./.github/workflows/ci.yml
 with:
-  flink_version: 1.16.1
-  connector_branch: ci_utils

Review Comment:
   Why is this being removed?



##
.github/workflows/ci.yml:
##
@@ -41,8 +41,8 @@ on:
 required: false
 type: number
 default: 50
-  run_dependency_convergence:

Review Comment:
   This breaks compatibility which just creates headaches. It's not too 
difficult to continue supporting this parameter.



##
.github/workflows/ci.yml:
##
@@ -41,8 +41,8 @@ on:
 required: false
 type: number
 default: 50
-  run_dependency_convergence:
-description: "Whether to run the dependency convergence check"
+  run_sanity_checks:

Review Comment:
   This name is too generic and a term we haven't used anywhere else in our CI 
setup.
   
   `skip_qa_plugins_and_archunit_tests` would be more descriptive, but you can also immediately tell that it is wildly inaccurate because most QA plugins are still being run.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-29492] Return Kafka producer to the pool when the Kafka sink is not the end of the chain [flink]

2023-10-23 Thread via GitHub


davidradl commented on PR #21226:
URL: https://github.com/apache/flink/pull/21226#issuecomment-1775287761

   @Wosin I notice there are 2 closed commits associated with this issue. I assume there is more work to be done on this (I am not sure what is needed from the wording in this PR) and that this resulted in a title change in November 2022. There have been no updates to this PR, and it has not been worked on for 11 months.
   
   Is there someone assigned to fix the remaining issue? 
   Is there a reason why this cannot be fixed in the standalone Kafka connector repository? Or is the standalone connector incompatible with 1.15.3? I assume backports would result in a new patch-version change and that you are proposing a 1.15.4 for this, which would be picked up by GCP? Am I correct in these assumptions? 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33342][ci] Enables Java17 as a target version for the Java 17 CI jobs [flink]

2023-10-23 Thread via GitHub


gyfora commented on code in PR #23571:
URL: https://github.com/apache/flink/pull/23571#discussion_r1368733119


##
tools/azure-pipelines/build-apache-repo.yml:
##
@@ -146,7 +146,7 @@ stages:
 name: Default
   e2e_pool_definition:
 vmImage: 'ubuntu-20.04'
-  environment: PROFILE="-Dflink.hadoop.version=2.10.2 -Dscala-2.12 
-Djdk11 -Djdk17"
+  environment: PROFILE="-Dflink.hadoop.version=2.10.2 -Dscala-2.12 
-Djdk11 -Djdk17 -Pjava17-target"

Review Comment:
   Why do we need both `-Djdk11` and `-Djdk17` here?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


