Re: [PR] [FLINK-35342][table] Fix MaterializedTableStatementITCase test can check for wrong status [flink]

2024-05-26 Thread via GitHub


lsyldliu merged PR #24799:
URL: https://github.com/apache/flink/pull/24799


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34840) [3.1][pipeline-connectors] Add Implementation of DataSink in Iceberg.

2024-05-26 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849623#comment-17849623
 ] 

Peter Vary commented on FLINK-34840:


If you need help with Iceberg reviews, feel free to ping me.

> [3.1][pipeline-connectors] Add Implementation of DataSink in Iceberg.
> -
>
> Key: FLINK-34840
> URL: https://issues.apache.org/jira/browse/FLINK-34840
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Flink CDC Issue Import
>Priority: Major
>  Labels: github-import
>
> ### Search before asking
> - [X] I searched in the 
> [issues|https://github.com/ververica/flink-cdc-connectors/issues] and found 
> nothing similar.
> ### Motivation
> Add pipeline sink Implementation for https://github.com/apache/iceberg.
> ### Solution
> _No response_
> ### Alternatives
> _No response_
> ### Anything else?
> _No response_
> ### Are you willing to submit a PR?
> - [ ] I'm willing to submit a PR!
>  Imported from GitHub 
> Url: https://github.com/apache/flink-cdc/issues/2863
> Created by: [lvyanquan|https://github.com/lvyanquan]
> Labels: enhancement, 
> Created at: Wed Dec 13 14:37:54 CST 2023
> State: open



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-1.18][FLINK-34052][examples] Install the shaded streaming example jars in the maven repo [flink]

2024-05-26 Thread via GitHub


X-czh closed pull request #24216: [BP-1.18][FLINK-34052][examples] Install the 
shaded streaming example jars in the maven repo
URL: https://github.com/apache/flink/pull/24216





[jira] [Resolved] (FLINK-35295) Improve jdbc connection pool initialization failure message

2024-05-26 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-35295.

Resolution: Fixed

master: 18b0627e343ddd218adcbccef8fb5f5f65813479

> Improve jdbc connection pool initialization failure message
> ---
>
> Key: FLINK-35295
> URL: https://issues.apache.org/jira/browse/FLINK-35295
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> As described in ticket title.





[jira] [Assigned] (FLINK-35295) Improve jdbc connection pool initialization failure message

2024-05-26 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-35295:
--

Assignee: Xiao Huang

> Improve jdbc connection pool initialization failure message
> ---
>
> Key: FLINK-35295
> URL: https://issues.apache.org/jira/browse/FLINK-35295
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>  Labels: pull-request-available
>
> As described in ticket title.





[jira] [Updated] (FLINK-35295) Improve jdbc connection pool initialization failure message

2024-05-26 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-35295:
---
Fix Version/s: cdc-3.2.0

> Improve jdbc connection pool initialization failure message
> ---
>
> Key: FLINK-35295
> URL: https://issues.apache.org/jira/browse/FLINK-35295
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> As described in ticket title.





[jira] [Updated] (FLINK-35295) Improve jdbc connection pool initialization failure message

2024-05-26 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-35295:
---
Affects Version/s: cdc-3.1.0

> Improve jdbc connection pool initialization failure message
> ---
>
> Key: FLINK-35295
> URL: https://issues.apache.org/jira/browse/FLINK-35295
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>  Labels: pull-request-available
>
> As described in ticket title.





Re: [PR] [FLINK-35295][mysql] Improve jdbc connection pool initialization failure message [flink-cdc]

2024-05-26 Thread via GitHub


leonardBang merged PR #3293:
URL: https://github.com/apache/flink-cdc/pull/3293





[jira] [Closed] (FLINK-35294) Use source config to check if the filter should be applied in timestamp starting mode

2024-05-26 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu closed FLINK-35294.
--
Resolution: Implemented

Implemented in master via: bddcaae7846671c0447b2aa6c7d8e96638988f99

> Use source config to check if the filter should be applied in timestamp 
> starting mode
> -
>
> Key: FLINK-35294
> URL: https://issues.apache.org/jira/browse/FLINK-35294
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> Since MySQL does not support the ability to quickly locate a binlog offset 
> through a timestamp, the current logic for starting from a timestamp is to 
> begin from the earliest binlog offset and then filter out the data before the 
> user-specified position.
> If the user restarts the job during the filtering process, this filter will 
> become ineffective.
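The restart problem described above can be made concrete with a small standalone sketch. This is a hypothetical, simplified model (the class and method names are not from the Flink CDC codebase): whether events before the requested timestamp are dropped is re-derived from the immutable source config on every run, so a job restarted mid-filtering keeps dropping old events instead of relying on transient runtime state.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of starting a MySQL binlog read "from timestamp":
// the reader begins at the earliest offset and drops events older than the
// user-specified timestamp. The decision to filter comes from the source
// config (startupTimestampMs), so it survives a job restart.
public class TimestampStartupFilterSketch {

    // A startup timestamp of <= 0 means "timestamp startup mode not configured".
    static List<Long> applyStartupFilter(List<Long> eventTimestamps, long startupTimestampMs) {
        List<Long> emitted = new ArrayList<>();
        for (long ts : eventTimestamps) {
            if (startupTimestampMs > 0 && ts < startupTimestampMs) {
                continue; // event precedes the requested position: drop it
            }
            emitted.add(ts);
        }
        return emitted;
    }

    public static void main(String[] args) {
        List<Long> binlog = Arrays.asList(100L, 200L, 300L, 400L);
        // First run: start from timestamp 250, so only 300 and 400 are emitted.
        System.out.println(applyStartupFilter(binlog, 250L));
        // After a restart, re-deriving the filter from the same config yields
        // the same behavior; state lost at restart does not disable the filter.
        System.out.println(applyStartupFilter(binlog.subList(1, 4), 250L));
    }
}
```

The fix referenced in the merged PR is in this spirit: the "should filter" decision is taken from the source config rather than from state that disappears when the job restarts.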





[jira] [Assigned] (FLINK-35294) Use source config to check if the filter should be applied in timestamp starting mode

2024-05-26 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-35294:
--

Assignee: Xiao Huang

> Use source config to check if the filter should be applied in timestamp 
> starting mode
> -
>
> Key: FLINK-35294
> URL: https://issues.apache.org/jira/browse/FLINK-35294
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> Since MySQL does not support the ability to quickly locate a binlog offset 
> through a timestamp, the current logic for starting from a timestamp is to 
> begin from the earliest binlog offset and then filter out the data before the 
> user-specified position.
> If the user restarts the job during the filtering process, this filter will 
> become ineffective.





Re: [PR] [FLINK-35294][mysql] Use source config to check if the filter should be applied in timestamp starting mode [flink-cdc]

2024-05-26 Thread via GitHub


leonardBang merged PR #3291:
URL: https://github.com/apache/flink-cdc/pull/3291





[jira] [Closed] (FLINK-35426) Change the distribution of DynamicFilteringDataCollector to Broadcast

2024-05-26 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-35426.
---
Resolution: Done

master: 4b342da6d149113dde821b370a136beef3430fff

> Change the distribution of DynamicFilteringDataCollector to Broadcast
> -
>
> Key: FLINK-35426
> URL: https://issues.apache.org/jira/browse/FLINK-35426
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: xingbe
>Assignee: xingbe
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Currently, the DynamicFilteringDataCollector is utilized in the dynamic 
> partition pruning feature of batch jobs to collect the partition information 
> dynamically filtered by the source. Its current data distribution method is 
> rebalance, and it also acts as an upstream vertex to the probe side Source.
> Presently, when the Scheduler dynamically infers the parallelism for a vertex 
> that is both a Source and a downstream vertex, it considers factors from both 
> sides, which can lead to an overestimation of parallelism because the 
> DynamicFilteringDataCollector is an upstream of the Source. We aim to change 
> the distribution method of the DynamicFilteringDataCollector to broadcast to 
> prevent this dynamic overestimation of Source parallelism.
> Furthermore, given that the DynamicFilteringDataCollector transmits data 
> through the OperatorCoordinator rather than through normal data distribution, 
> this change will not affect the DPP (Dynamic Partition Pruning) functionality.
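The difference between the two distribution patterns can be sketched independently of Flink. The model below is illustrative only (the class and method names are invented, not Flink APIs): with rebalance each record reaches exactly one downstream subtask round-robin, while with broadcast every subtask receives a full copy of the records, so the record count seen per subtask no longer scales the way rebalance output does.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the two distribution methods discussed above.
public class DistributionSketch {

    // rebalance: records are spread round-robin, one subtask per record.
    static List<List<Integer>> rebalance(List<Integer> records, int parallelism) {
        List<List<Integer>> subtasks = emptySubtasks(parallelism);
        for (int i = 0; i < records.size(); i++) {
            subtasks.get(i % parallelism).add(records.get(i));
        }
        return subtasks;
    }

    // broadcast: every subtask receives a full copy of every record.
    static List<List<Integer>> broadcast(List<Integer> records, int parallelism) {
        List<List<Integer>> subtasks = emptySubtasks(parallelism);
        for (List<Integer> subtask : subtasks) {
            subtask.addAll(records);
        }
        return subtasks;
    }

    private static List<List<Integer>> emptySubtasks(int parallelism) {
        List<List<Integer>> subtasks = new ArrayList<>();
        for (int i = 0; i < parallelism; i++) {
            subtasks.add(new ArrayList<>());
        }
        return subtasks;
    }

    public static void main(String[] args) {
        List<Integer> filteredPartitions = List.of(1, 2, 3);
        System.out.println(rebalance(filteredPartitions, 2));
        System.out.println(broadcast(filteredPartitions, 2));
    }
}
```

Note that in the actual ticket the collector's data travels through the OperatorCoordinator, which is why switching the declared distribution to broadcast changes only the scheduler's parallelism inference, not the DPP data path.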





Re: [PR] [FLINK-35426][table-planner] Change the distribution of DynamicFilteringDataCollector to Broadcast [flink]

2024-05-26 Thread via GitHub


zhuzhurk merged PR #24830:
URL: https://github.com/apache/flink/pull/24830





[jira] [Comment Edited] (FLINK-35456) Many tests fails on AZP as NPE related to FileMerging

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849612#comment-17849612
 ] 

Weijie Guo edited comment on FLINK-35456 at 5/27/24 4:04 AM:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11943

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=f2c100be-250b-5e85-7bbe-176f68fcddc5&t=05efd11e-5400-54a4-0d27-a4663be008a9&l=12577

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=26b84117-e436-5720-913e-3e280ce55cae&t=77cc7e77-39a0-5007-6d65-4137ac13a471&l=12538


was (Author: weijie guo):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11943

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=f2c100be-250b-5e85-7bbe-176f68fcddc5&t=05efd11e-5400-54a4-0d27-a4663be008a9&l=12577

> Many tests fails on AZP as NPE related to FileMerging
> -
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
>
> {code:java}
> May 25 02:12:16 Caused by: java.lang.NullPointerException
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.isManagedByFileMergingManager(FileMergingSnapshotManagerBase.java:733)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$null$4(FileMergingSnapshotManagerBase.java:687)
> May 25 02:12:16   at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$restoreStateHandles$5(FileMergingSnapshotManagerBase.java:683)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:419)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.ArrayList$Itr.forEachRemaining(ArrayList.java:901)
> May 25 02:12:16   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> May 25 02:12:16   at 
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.restoreStateHandles(FileMergingSnapshotManagerBase.java:680)
> May 25 02:12:16   at 
> 

[jira] [Comment Edited] (FLINK-35456) Many tests fails on AZP as NPE related to FileMerging

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849612#comment-17849612
 ] 

Weijie Guo edited comment on FLINK-35456 at 5/27/24 4:03 AM:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11943

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=f2c100be-250b-5e85-7bbe-176f68fcddc5&t=05efd11e-5400-54a4-0d27-a4663be008a9&l=12577


was (Author: weijie guo):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11943

> Many tests fails on AZP as NPE related to FileMerging
> -
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
>
> {code:java}
> May 25 02:12:16 Caused by: java.lang.NullPointerException
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.isManagedByFileMergingManager(FileMergingSnapshotManagerBase.java:733)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$null$4(FileMergingSnapshotManagerBase.java:687)
> May 25 02:12:16   at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$restoreStateHandles$5(FileMergingSnapshotManagerBase.java:683)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:419)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.ArrayList$Itr.forEachRemaining(ArrayList.java:901)
> May 25 02:12:16   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> May 25 02:12:16   at 
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.restoreStateHandles(FileMergingSnapshotManagerBase.java:680)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.SubtaskFileMergingManagerRestoreOperation.restore(SubtaskFileMergingManagerRestoreOperation.java:102)
> May 25 02:12:16   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.registerRestoredStateToFileMergingManager(StreamTaskStateInitializerImpl.java:353)
> {code}
> 

[jira] [Commented] (FLINK-35456) Many tests fails on AZP as NPE related to FileMerging

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849612#comment-17849612
 ] 

Weijie Guo commented on FLINK-35456:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59839&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11943

> Many tests fails on AZP as NPE related to FileMerging
> -
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
>
> {code:java}
> May 25 02:12:16 Caused by: java.lang.NullPointerException
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.isManagedByFileMergingManager(FileMergingSnapshotManagerBase.java:733)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$null$4(FileMergingSnapshotManagerBase.java:687)
> May 25 02:12:16   at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$restoreStateHandles$5(FileMergingSnapshotManagerBase.java:683)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:419)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.ArrayList$Itr.forEachRemaining(ArrayList.java:901)
> May 25 02:12:16   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> May 25 02:12:16   at 
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.restoreStateHandles(FileMergingSnapshotManagerBase.java:680)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.SubtaskFileMergingManagerRestoreOperation.restore(SubtaskFileMergingManagerRestoreOperation.java:102)
> May 25 02:12:16   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.registerRestoredStateToFileMergingManager(StreamTaskStateInitializerImpl.java:353)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=12158





Re: [PR] [FLINK-34901][connectors/jdbc]Update clause must EXCLUDED unique index. [flink-connector-jdbc]

2024-05-26 Thread via GitHub


1996fanrui commented on code in PR #108:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/108#discussion_r1615444568


##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/AbstractPostgresCompatibleDialect.java:
##
@@ -50,12 +50,14 @@ public String getLimitClause(long limit) {
 @Override
 public Optional<String> getUpsertStatement(
 String tableName, String[] fieldNames, String[] uniqueKeyFields) {
+Set<String> uniqueKeyFieldsSet = 
Arrays.stream(uniqueKeyFields).collect(Collectors.toSet());
 String uniqueColumns =
-Arrays.stream(uniqueKeyFields)
+uniqueKeyFieldsSet.stream()
 .map(this::quoteIdentifier)
 .collect(Collectors.joining(", "));
 String updateClause =
 Arrays.stream(fieldNames)
+.filter(f -> !uniqueKeyFieldsSet.contains(f))

Review Comment:
   If it's a bug, it would be better to add a test covering it.






Re: [PR] [FLINK-34901][connectors/jdbc]Update clause must EXCLUDED unique index. [flink-connector-jdbc]

2024-05-26 Thread via GitHub


1996fanrui commented on PR #108:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/108#issuecomment-2132593834

   Hi @Mrart, the CI failed. Could you rebase onto the main branch and re-check 
the CI?





Re: [PR] [FLINK-34901][connectors/jdbc]Update clause must EXCLUDED unique index. [flink-connector-jdbc]

2024-05-26 Thread via GitHub


1996fanrui commented on code in PR #108:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/108#discussion_r1615442044


##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/AbstractPostgresCompatibleDialect.java:
##
@@ -50,12 +50,14 @@ public String getLimitClause(long limit) {
 @Override
 public Optional<String> getUpsertStatement(
 String tableName, String[] fieldNames, String[] uniqueKeyFields) {
+Set<String> uniqueKeyFieldsSet = 
Arrays.stream(uniqueKeyFields).collect(Collectors.toSet());
 String uniqueColumns =
-Arrays.stream(uniqueKeyFields)
+uniqueKeyFieldsSet.stream()
 .map(this::quoteIdentifier)
 .collect(Collectors.joining(", "));
 String updateClause =
 Arrays.stream(fieldNames)
+.filter(f -> !uniqueKeyFieldsSet.contains(f))

Review Comment:
   I'm curious: is this a bug fix or a performance improvement?
   
   Also cc @RocMarshal, would you mind reviewing this PR as well (as you are an 
active contributor to the jdbc connector)?
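The effect of the change under review can be illustrated with a standalone sketch of a PostgreSQL-style upsert builder. This is not the connector's actual class; `PostgresUpsertSketch`, `buildUpsert`, and the quoting helper are simplified stand-ins. The point of the diff: unique-index columns are deduplicated for the `ON CONFLICT` target and excluded from the `DO UPDATE SET` list, since re-assigning the conflict-target columns is redundant.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;

// Standalone sketch (not the connector's actual code) of a PostgreSQL-style
// upsert statement builder demonstrating the change in the diff above.
public class PostgresUpsertSketch {

    static String quote(String identifier) {
        return "\"" + identifier + "\"";
    }

    static String buildUpsert(String tableName, String[] fieldNames, String[] uniqueKeyFields) {
        // LinkedHashSet both deduplicates and keeps a stable column order.
        Set<String> uniqueKeyFieldsSet =
                Arrays.stream(uniqueKeyFields).collect(Collectors.toCollection(LinkedHashSet::new));
        String uniqueColumns =
                uniqueKeyFieldsSet.stream().map(PostgresUpsertSketch::quote).collect(Collectors.joining(", "));
        String allColumns =
                Arrays.stream(fieldNames).map(PostgresUpsertSketch::quote).collect(Collectors.joining(", "));
        String placeholders =
                Arrays.stream(fieldNames).map(f -> "?").collect(Collectors.joining(", "));
        // The fix from the diff: skip unique-index columns in the update clause.
        String updateClause =
                Arrays.stream(fieldNames)
                        .filter(f -> !uniqueKeyFieldsSet.contains(f))
                        .map(f -> quote(f) + "=EXCLUDED." + quote(f))
                        .collect(Collectors.joining(", "));
        return "INSERT INTO " + quote(tableName) + " (" + allColumns + ")"
                + " VALUES (" + placeholders + ")"
                + " ON CONFLICT (" + uniqueColumns + ") DO UPDATE SET " + updateClause;
    }

    public static void main(String[] args) {
        System.out.println(buildUpsert(
                "t", new String[] {"id", "name", "score"}, new String[] {"id"}));
    }
}
```

With the filter in place, `"id"` appears only in the `ON CONFLICT` target, never as `"id"=EXCLUDED."id"` in the `SET` list.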






[jira] [Created] (FLINK-35461) Improve Runtime Configuration for Flink 2.0

2024-05-26 Thread Xuannan Su (Jira)
Xuannan Su created FLINK-35461:
--

 Summary: Improve Runtime Configuration for Flink 2.0
 Key: FLINK-35461
 URL: https://issues.apache.org/jira/browse/FLINK-35461
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration
Reporter: Xuannan Su


As Flink moves toward 2.0, we have revisited all runtime configurations and 
identified several improvements to enhance user-friendliness and 
maintainability. In this FLIP, we aim to refine the runtime configuration.

 

This ticket implements all the changes discussed in 
[FLIP-450|https://cwiki.apache.org/confluence/display/FLINK/FLIP-450%3A+Improve+Runtime+Configuration+for+Flink+2.0].





Re: [PR] [FLINK-35457][checkpoint] Hotfix! close physical file under the protection of lock [flink]

2024-05-26 Thread via GitHub


flinkbot commented on PR #24846:
URL: https://github.com/apache/flink/pull/24846#issuecomment-2132573107

   
   ## CI report:
   
   * 411168066e0918d38167fc9e33861fc1945baa27 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-35457) EventTimeWindowCheckpointingITCase fails on AZP as NPE

2024-05-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35457:
---
Labels: pull-request-available  (was: )

> EventTimeWindowCheckpointingITCase fails on AZP as NPE
> --
>
> Key: FLINK-35457
> URL: https://issues.apache.org/jira/browse/FLINK-35457
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.PhysicalFile.deleteIfNecessary(PhysicalFile.java:155)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.PhysicalFile.decRefCount(PhysicalFile.java:141)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.LogicalFile.discardWithCheckpointId(LogicalFile.java:118)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.discardSingleLogicalFile(FileMergingSnapshotManagerBase.java:574)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.discardLogicalFiles(FileMergingSnapshotManagerBase.java:588)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.notifyCheckpointAborted(FileMergingSnapshotManagerBase.java:490)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.WithinCheckpointFileMergingSnapshotManager.notifyCheckpointAborted(WithinCheckpointFileMergingSnapshotManager.java:61)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyFileMergingSnapshotManagerCheckpoint(SubtaskCheckpointCoordinatorImpl.java:505)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpoint(SubtaskCheckpointCoordinatorImpl.java:490)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointAborted(SubtaskCheckpointCoordinatorImpl.java:414)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointAbortAsync$21(StreamTask.java:1513)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$23(StreamTask.java:1536)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMail(MailboxProcessor.java:398)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:367)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:352)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:229)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:998)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:923)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:970)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:949)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:763)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=8538



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35457][checkpoint] Hotfix! close physical file under the protection of lock [flink]

2024-05-26 Thread via GitHub


fredia opened a new pull request, #24846:
URL: https://github.com/apache/flink/pull/24846

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   The `PhysicalFile#outputStream` may be closed by both `innerClose` and 
`deleteIfNecessary`, so we need to ensure that it is closed only once.
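   A minimal sketch of one way to make the close idempotent (a hypothetical 
`GuardedFile` class guarded by an `AtomicBoolean`, not Flink's actual 
`PhysicalFile`): whichever caller reaches the guard first closes the stream, 
and the other call becomes a no-op.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for PhysicalFile: both close paths funnel through
// closeStreamOnce(), so the stream is closed exactly once even if the two
// callers race or run back to back.
class GuardedFile {
    private final OutputStream outputStream;
    private final AtomicBoolean streamClosed = new AtomicBoolean(false);

    GuardedFile(OutputStream outputStream) {
        this.outputStream = outputStream;
    }

    // Returns true only for the call that actually closed the stream.
    boolean closeStreamOnce() {
        if (streamClosed.compareAndSet(false, true)) {
            try {
                outputStream.close();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return true;
        }
        return false;
    }
}
```

   An equivalent effect can be had with a `synchronized` block, which is 
closer to what the PR title ("under the protection of lock") suggests.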
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[jira] [Created] (FLINK-35460) Check file size when position read for ForSt

2024-05-26 Thread Hangxiang Yu (Jira)
Hangxiang Yu created FLINK-35460:


 Summary: Check file size when position read for ForSt
 Key: FLINK-35460
 URL: https://issues.apache.org/jira/browse/FLINK-35460
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Reporter: Hangxiang Yu
Assignee: Hangxiang Yu








Re: [PR] Enable async profiler [flink-benchmarks]

2024-05-26 Thread via GitHub


SamBarker commented on code in PR #90:
URL: https://github.com/apache/flink-benchmarks/pull/90#discussion_r1615426480


##
benchmark.sh:
##
@@ -0,0 +1,54 @@
+#!/usr/bin/env bash

Review Comment:
   One final call out: this is not currently checked in with the executable bit 
set, as I'm not sure what the project's preference would be for that.  






Re: [PR] Enable async profiler [flink-benchmarks]

2024-05-26 Thread via GitHub


SamBarker commented on PR #90:
URL: https://github.com/apache/flink-benchmarks/pull/90#issuecomment-2132564249

   I think the `benchmark-java17` profile could potentially be simplified to 
specify a property with just the additional JVM_ARGS (possibly changing `-j` to 
be a single arg, in line with `-p`). 
   
   However I wanted to gauge interest in this change before I went any further 
with potential refactoring. 
   
   I also think it would be very helpful to capture the flamegraphs as part of 
the information included in `flink-speed`, but I have no idea about the 
implications of that or the required setup for the build server, so I haven't 
pursued that option as yet.





Re: [PR] [FLINK-35075][table] Migrate TwoStageOptimizedAggregateRule to java [flink]

2024-05-26 Thread via GitHub


liuyongvs commented on PR #24650:
URL: https://github.com/apache/flink/pull/24650#issuecomment-2132559247

   rebase to fix conflicts





Re: [PR] Draft: enable async profiler [flink-benchmarks]

2024-05-26 Thread via GitHub


SamBarker commented on code in PR #90:
URL: https://github.com/apache/flink-benchmarks/pull/90#discussion_r1615419310


##
pom.xml:
##
@@ -290,25 +291,31 @@ under the License.


${skipTests}

test
-   
${executableJava}
+   
${basedir}/benchmark.sh

Review Comment:
   updated for consistency.



##
pom.xml:
##
@@ -536,6 +543,19 @@ under the License.



+
+   
+   
+   false
+   
+   asyncProfilerLib
+   
+   
+   enable-async-profiler
+   
+   
async:libPath=${asyncProfilerLib};output=flamegraph;dir=/tmp/profile-results

Review Comment:
   `libPath` is user- and platform-specific, so I don't think we can sensibly 
default it.



##
benchmark.sh:
##
@@ -0,0 +1,54 @@
+#!/usr/bin/env bash
+
+JAVA_ARGS=()
+JMH_ARGS=()
+BINARY="java"
+BENCHMARK_PATTERN=
+
+while getopts ":j:c:b:e:p:a:m:h" opt; do
+  case $opt in
+j) JAVA_ARGS+=("${OPTARG}")
+;;
+c) CLASSPATH_ARG="${OPTARG}"
+;;
+b) BINARY="${OPTARG}"
+;;
+p) PROFILER_ARG="${OPTARG:+-prof ${OPTARG}}"

Review Comment:
   This is the core motivation for the change. If `-p` is specified with a 
non-empty value, this will set `PROFILER_ARG` to `-prof ${OPTARG}`. Without 
this it would end up adding `-prof " "` and thus JMH fails to initialise. 
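   The mechanics of `${VAR:+...}` can be seen in isolation (a standalone 
sketch, not the actual `benchmark.sh`): the expansion produces the replacement 
only when the variable is set and non-empty, so no stray `-prof` is emitted for 
an empty `-p`.

```shell
#!/usr/bin/env bash
# ${optarg:+replacement} expands to the replacement only when optarg is
# set and non-empty; otherwise it expands to nothing at all.
build_profiler_arg() {
  local optarg="$1"
  local profiler_arg="${optarg:+-prof ${optarg}}"
  printf '%s' "${profiler_arg}"
}

build_profiler_arg "gc"   # prints "-prof gc"
build_profiler_arg ""     # prints nothing, so JMH never sees a bare -prof
```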



##
pom.xml:
##
@@ -290,25 +291,31 @@ under the License.


${skipTests}

test
-   
${executableJava}
+   
${basedir}/benchmark.sh

-   
-Xmx6g
-   
-classpath
+   
-c

-   
org.openjdk.jmh.Main
-   

-   
-foe
-   
true
+   
-b
+   
${executableJava}
+   
-j
+   
-Xmx6g

+   
-a

Review Comment:
   gets a bit verbose. I guess the alternative would be to pass all the JMH 
args as a single string.






[jira] [Created] (FLINK-35459) Use Incremental Source Framework in Flink CDC TiKV Source Connector

2024-05-26 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-35459:
---

 Summary: Use Incremental Source Framework in Flink CDC TiKV Source 
Connector
 Key: FLINK-35459
 URL: https://issues.apache.org/jira/browse/FLINK-35459
 Project: Flink
  Issue Type: New Feature
  Components: Flink CDC
Affects Versions: cdc-3.2.0
Reporter: ouyangwulin
 Fix For: cdc-3.2.0


Use Incremental Source Framework in Flink CDC TiKV Source Connector





[jira] [Commented] (FLINK-35342) MaterializedTableStatementITCase test can check for wrong status

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849607#comment-17849607
 ] 

Weijie Guo commented on FLINK-35342:


1.20 test_cron_adaptive_scheduler table

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59828=logs=f2c100be-250b-5e85-7bbe-176f68fcddc5=05efd11e-5400-54a4-0d27-a4663be008a9=12471

> MaterializedTableStatementITCase test can check for wrong status
> 
>
> Key: FLINK-35342
> URL: https://issues.apache.org/jira/browse/FLINK-35342
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Assignee: Feng Jin
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> * 1.20 AdaptiveScheduler / Test (module: table) 
> https://github.com/apache/flink/actions/runs/9056197319/job/24879135605#step:10:12490
>  
> It looks like 
> {{MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume}}
>  can be flaky, where the expected status is not yet RUNNING:
> {code}
> Error: 03:24:03 03:24:03.902 [ERROR] Tests run: 6, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 26.78 s <<< FAILURE! -- in 
> org.apache.flink.table.gateway.service.MaterializedTableStatementITCase
> Error: 03:24:03 03:24:03.902 [ERROR] 
> org.apache.flink.table.gateway.service.MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume(Path,
>  RestClusterClient) -- Time elapsed: 3.850 s <<< FAILURE!
> May 13 03:24:03 org.opentest4j.AssertionFailedError: 
> May 13 03:24:03 
> May 13 03:24:03 expected: "RUNNING"
> May 13 03:24:03  but was: "CREATED"
> May 13 03:24:03   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> May 13 03:24:03   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> May 13 03:24:03   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> May 13 03:24:03   at 
> org.apache.flink.table.gateway.service.MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume(MaterializedTableStatementITCase.java:650)
> May 13 03:24:03   at java.lang.reflect.Method.invoke(Method.java:498)
> May 13 03:24:03   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> May 13 03:24:03   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> May 13 03:24:03   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> May 13 03:24:03   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> May 13 03:24:03   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> May 13 03:24:03 
> May 13 03:24:04 03:24:04.270 [INFO] 
> May 13 03:24:04 03:24:04.270 [INFO] Results:
> May 13 03:24:04 03:24:04.270 [INFO] 
> Error: 03:24:04 03:24:04.270 [ERROR] Failures: 
> Error: 03:24:04 03:24:04.271 [ERROR]   
> MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume:650
>  
> May 13 03:24:04 expected: "RUNNING"
> May 13 03:24:04  but was: "CREATED"
> May 13 03:24:04 03:24:04.271 [INFO] 
> Error: 03:24:04 03:24:04.271 [ERROR] Tests run: 82, Failures: 1, Errors: 0, 
> Skipped: 0
> {code}
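A common remedy for this kind of flakiness is to poll for the expected status 
up to a deadline instead of asserting it immediately. A generic sketch of that 
pattern (hypothetical `StatusWaiter`, not the actual fix merged in PR #24799):

```java
import java.time.Duration;
import java.util.function.Supplier;

// Polls the status supplier until it returns the expected value or the
// deadline passes, then returns the last observed status for the assertion.
final class StatusWaiter {
    static String waitForStatus(
            Supplier<String> statusSupplier, String expected, Duration timeout)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        String last = statusSupplier.get();
        while (!expected.equals(last) && System.nanoTime() < deadline) {
            Thread.sleep(50); // back off briefly between polls
            last = statusSupplier.get();
        }
        return last;
    }
}
```

The test would then assert on the value returned by `waitForStatus`, failing 
only if the job never reaches RUNNING within the timeout.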





[jira] [Commented] (FLINK-34239) Introduce a deep copy method of SerializerConfig for merging with Table configs in org.apache.flink.table.catalog.DataTypeFactoryImpl

2024-05-26 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849606#comment-17849606
 ] 

Zhanghao Chen commented on FLINK-34239:
---

[~mallikarjuna] Hi, could you help rebase the code on the latest master branch? 
We can merge it once CI passes after the rebase.

> Introduce a deep copy method of SerializerConfig for merging with Table 
> configs in org.apache.flink.table.catalog.DataTypeFactoryImpl 
> --
>
> Key: FLINK-34239
> URL: https://issues.apache.org/jira/browse/FLINK-34239
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Assignee: Kumar Mallikarjuna
>Priority: Major
>  Labels: pull-request-available
>
> *Problem*
> Currently, 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig
>  will create a deep-copy of the SerializerConfig and merge Table config into 
> it. However, the deep copy is done by manully calling the getter and setter 
> methods of SerializerConfig, and is prone to human errors, e.g. missing 
> copying a newly added field in SerializerConfig.
> *Proposal*
> Introduce a deep copy method for SerializerConfig and replace the current 
> impl in 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig.
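A hedged sketch of the rationale with a hypothetical `DemoSerializerConfig` 
(not Flink's actual `SerializerConfig`): when the class owns its own copy 
constructor, the field copying lives next to the field declarations, so a 
newly added field is far harder to miss than in caller-side getter/setter 
copying.

```java
// Hypothetical config class illustrating the proposal: a copy constructor
// owned by the class itself keeps the copy logic adjacent to the fields,
// unlike manual getter/setter copying performed by an outside caller.
final class DemoSerializerConfig {
    private boolean forceKryo;
    private boolean genericTypesDisabled;

    DemoSerializerConfig() {}

    // Deep-copy constructor: adding a field here is a local, obvious change.
    DemoSerializerConfig(DemoSerializerConfig other) {
        this.forceKryo = other.forceKryo;
        this.genericTypesDisabled = other.genericTypesDisabled;
    }

    void setForceKryo(boolean v) { this.forceKryo = v; }
    boolean isForceKryo() { return forceKryo; }
    void setGenericTypesDisabled(boolean v) { this.genericTypesDisabled = v; }
    boolean isGenericTypesDisabled() { return genericTypesDisabled; }
}
```

With this shape, mutating the original after copying leaves the copy 
untouched, which is exactly the isolation the Table config merge needs.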





[jira] [Updated] (FLINK-35456) Many tests fails on AZP as NPE related to FileMerging

2024-05-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35456:
---
Priority: Blocker  (was: Major)

> Many tests fails on AZP as NPE related to FileMerging
> -
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
>
> {code:java}
> May 25 02:12:16 Caused by: java.lang.NullPointerException
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.isManagedByFileMergingManager(FileMergingSnapshotManagerBase.java:733)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$null$4(FileMergingSnapshotManagerBase.java:687)
> May 25 02:12:16   at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$restoreStateHandles$5(FileMergingSnapshotManagerBase.java:683)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:419)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.ArrayList$Itr.forEachRemaining(ArrayList.java:901)
> May 25 02:12:16   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> May 25 02:12:16   at 
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.restoreStateHandles(FileMergingSnapshotManagerBase.java:680)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.SubtaskFileMergingManagerRestoreOperation.restore(SubtaskFileMergingManagerRestoreOperation.java:102)
> May 25 02:12:16   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.registerRestoredStateToFileMergingManager(StreamTaskStateInitializerImpl.java:353)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=12158
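For readers unfamiliar with the trace shape: an exception thrown inside 
`computeIfAbsent`'s mapping function propagates to the call site, which is why 
the NPE raised in `isManagedByFileMergingManager` surfaces through 
`HashMap.computeIfAbsent` in the stack above. A minimal standalone 
illustration (demo code, not Flink's):

```java
import java.util.HashMap;
import java.util.Map;

// Standalone demo: an NPE thrown inside the mapping function of
// computeIfAbsent propagates out through the computeIfAbsent call itself.
class NpeInMappingDemo {
    static boolean triggersNpeInsideMapping() {
        Map<String, Integer> cache = new HashMap<>();
        String missing = null; // stands in for state that was never initialized
        try {
            cache.computeIfAbsent("key", k -> missing.length());
            return false;
        } catch (NullPointerException e) {
            return true; // the NPE surfaced at the computeIfAbsent call site
        }
    }
}
```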





[jira] [Commented] (FLINK-35456) Many tests fails on AZP as NPE related to FileMerging

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849605#comment-17849605
 ] 

Weijie Guo commented on FLINK-35456:


SplitAggregateITCase.testMultipleDistinctAggOnSameColumn

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59828=logs=32715a4c-21b8-59a3-4171-744e5ab107eb=ff64056b-5320-5afe-c22c-6fa339e59586=11930


> Many tests fails on AZP as NPE related to FileMerging
> -
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> (Stack trace and build link identical to the issue description quoted above.)





[jira] [Commented] (FLINK-35456) Many tests fails on AZP as NPE related to FileMerging

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849604#comment-17849604
 ] 

Weijie Guo commented on FLINK-35456:


testTop1WithGroupByCount:535->testTopNthWithGroupByCountBase

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59828=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=12064

> Many tests fails on AZP as NPE related to FileMerging
> -
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> (Stack trace and build link identical to the issue description quoted above.)





[jira] [Created] (FLINK-35458) Add serializer upgrade test for set serializer

2024-05-26 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-35458:
-

 Summary: Add serializer upgrade test for set serializer
 Key: FLINK-35458
 URL: https://issues.apache.org/jira/browse/FLINK-35458
 Project: Flink
  Issue Type: Sub-task
  Components: API / Type Serialization System
Affects Versions: 1.20.0
Reporter: Zhanghao Chen
 Fix For: 2.0.0


New dedicated serializer for Sets is introduced in 
[FLINK-35068|https://issues.apache.org/jira/browse/FLINK-35068]. Since 
serializer upgrade test requires at least one previous release to test the 
upgrade of set serializers (which does not exist yet), we'll add the upgrade 
test for set serializer after the release of v1.20.





[jira] [Updated] (FLINK-35456) Many tests fails on AZP as NPE related to FileMerging

2024-05-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35456:
---
Summary: Many tests fails on AZP as NPE related to FileMerging  (was: 
WindowAggregateITCase.testEventTimeHopWindow fails on AZP as NPE)

> Many tests fails on AZP as NPE related to FileMerging
> -
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> (Stack trace and build link identical to the issue description quoted above.)





Re: [PR] [FLINK-35068][core][type] Introduce built-in serialization support for java.util.Set [flink]

2024-05-26 Thread via GitHub


flinkbot commented on PR #24845:
URL: https://github.com/apache/flink/pull/24845#issuecomment-2132544519

   
   ## CI report:
   
   * 00aff8482b13a036e64212cc985843cf50a13a69 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-35342) MaterializedTableStatementITCase test can check for wrong status

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849602#comment-17849602
 ] 

Weijie Guo commented on FLINK-35342:


1.20 test_cron_adaptive_scheduler

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821=logs=f2c100be-250b-5e85-7bbe-176f68fcddc5=05efd11e-5400-54a4-0d27-a4663be008a9=12792

> MaterializedTableStatementITCase test can check for wrong status
> 
>
> Key: FLINK-35342
> URL: https://issues.apache.org/jira/browse/FLINK-35342
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Assignee: Feng Jin
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> * 1.20 AdaptiveScheduler / Test (module: table) 
> https://github.com/apache/flink/actions/runs/9056197319/job/24879135605#step:10:12490
>  
> It looks like 
> {{MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume}}
>  can be flaky, where the expected status is not yet RUNNING:
> {code}
> Error: 03:24:03 03:24:03.902 [ERROR] Tests run: 6, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 26.78 s <<< FAILURE! -- in 
> org.apache.flink.table.gateway.service.MaterializedTableStatementITCase
> Error: 03:24:03 03:24:03.902 [ERROR] 
> org.apache.flink.table.gateway.service.MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume(Path,
>  RestClusterClient) -- Time elapsed: 3.850 s <<< FAILURE!
> May 13 03:24:03 org.opentest4j.AssertionFailedError: 
> May 13 03:24:03 
> May 13 03:24:03 expected: "RUNNING"
> May 13 03:24:03  but was: "CREATED"
> May 13 03:24:03   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> May 13 03:24:03   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> May 13 03:24:03   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> May 13 03:24:03   at 
> org.apache.flink.table.gateway.service.MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume(MaterializedTableStatementITCase.java:650)
> May 13 03:24:03   at java.lang.reflect.Method.invoke(Method.java:498)
> May 13 03:24:03   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> May 13 03:24:03   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> May 13 03:24:03   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> May 13 03:24:03   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> May 13 03:24:03   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> May 13 03:24:03 
> May 13 03:24:04 03:24:04.270 [INFO] 
> May 13 03:24:04 03:24:04.270 [INFO] Results:
> May 13 03:24:04 03:24:04.270 [INFO] 
> Error: 03:24:04 03:24:04.270 [ERROR] Failures: 
> Error: 03:24:04 03:24:04.271 [ERROR]   
> MaterializedTableStatementITCase.testAlterMaterializedTableSuspendAndResume:650
>  
> May 13 03:24:04 expected: "RUNNING"
> May 13 03:24:04  but was: "CREATED"
> May 13 03:24:04 03:24:04.271 [INFO] 
> Error: 03:24:04 03:24:04.271 [ERROR] Tests run: 82, Failures: 1, Errors: 0, 
> Skipped: 0
> {code}
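[Editor's note] The flake above — asserting "RUNNING" while the job is still "CREATED" — is the classic pattern of checking an eventually-reached status too early. The usual hardening is to poll until the expected status or a deadline. A minimal hedged sketch of that pattern (hypothetical helper, not the actual test code from the PR):

```java
import java.time.Duration;
import java.util.function.Supplier;

public class AwaitStatus {

    // Poll a status supplier until it returns the expected value or the
    // timeout elapses. Returns true if the expected status was observed.
    static boolean awaitStatus(Supplier<String> status, String expected, Duration timeout)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            if (expected.equals(status.get())) {
                return true;
            }
            Thread.sleep(50); // back off between polls
        }
        // One final check at the deadline.
        return expected.equals(status.get());
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated job that reaches RUNNING only after ~200 ms,
        // like the materialized table job in the failing test.
        Supplier<String> status =
                () -> (System.currentTimeMillis() - start) > 200 ? "RUNNING" : "CREATED";
        System.out.println(awaitStatus(status, "RUNNING", Duration.ofSeconds(5))); // prints true
    }
}
```

A direct `assertThat(status).isEqualTo("RUNNING")` right after submission races the scheduler; wrapping the read in such a poll removes the race without hiding genuine failures (the timeout still fails the test).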





[jira] [Commented] (FLINK-35456) WindowAggregateITCase.testEventTimeHopWindow fails on AZP as NPE

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849601#comment-17849601
 ] 

Weijie Guo commented on FLINK-35456:


WindowJoinITCase.testInnerJoinOnWTFWithOffset also has the same issue:

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821=logs=ce3801ad-3bd5-5f06-d165-34d37e757d90=5e4d9387-1dcc-5885-a901-90469b7e6d2f=11935

> WindowAggregateITCase.testEventTimeHopWindow fails on AZP as NPE
> 
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> May 25 02:12:16 Caused by: java.lang.NullPointerException
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.isManagedByFileMergingManager(FileMergingSnapshotManagerBase.java:733)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$null$4(FileMergingSnapshotManagerBase.java:687)
> May 25 02:12:16   at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$restoreStateHandles$5(FileMergingSnapshotManagerBase.java:683)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> May 25 02:12:16   at 
> java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:419)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.ArrayList$Itr.forEachRemaining(ArrayList.java:901)
> May 25 02:12:16   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> May 25 02:12:16   at 
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> May 25 02:12:16   at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> May 25 02:12:16   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> May 25 02:12:16   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> May 25 02:12:16   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.restoreStateHandles(FileMergingSnapshotManagerBase.java:680)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.SubtaskFileMergingManagerRestoreOperation.restore(SubtaskFileMergingManagerRestoreOperation.java:102)
> May 25 02:12:16   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.registerRestoredStateToFileMergingManager(StreamTaskStateInitializerImpl.java:353)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=12158





[PR] [FLINK-35068][core][type] Introduce built-in serialization support for java.util.Set [flink]

2024-05-26 Thread via GitHub


X-czh opened a new pull request, #24845:
URL: https://github.com/apache/flink/pull/24845

   
   
   ## What is the purpose of the change
   
   Introduce built-in serialization support for sets, which previously fell 
back to Kryo.
   
   ## Brief change log
   
   Introduce a dedicated serializer and deserializer for sets, and add built-in 
serialization support for sets during type extraction.
   
   ## Verifying this change
   
   This change added tests that validate:
   - sets are correctly serialized and deserialized with the new dedicated 
serializer and deserializer.
   - sets are serialized using the built-in serializer instead of Kryo.
   
   Since the serializer upgrade test requires at least one previous release 
containing the set serializer (which does not exist yet), I'll postpone adding 
the upgrade test for the set serializer until after the v1.20 release.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (**yes** / no / don't know) 
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   





[jira] [Updated] (FLINK-35068) Introduce built-in serialization support for Set

2024-05-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35068:
---
Labels: pull-request-available  (was: )

> Introduce built-in serialization support for Set
> 
>
> Key: FLINK-35068
> URL: https://issues.apache.org/jira/browse/FLINK-35068
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Introduce built-in serialization support for {{{}Set{}}}, another common Java 
> collection type. We'll need to add a new built-in serializer for it 
> ({{{}MultiSetTypeInformation{}}} utilizes {{MapSerializer}} underneath, but 
> it could be more efficient for common {{{}Set{}}}).
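[Editor's note] To illustrate the efficiency argument in the ticket: encoding a set through a map serializer writes a dummy value per key, while a dedicated serializer writes only the size followed by the elements. A hedged sketch of that wire format (illustrative names only — not Flink's actual `SetSerializer` API):

```java
import java.io.*;
import java.util.*;

public class SetSerializerSketch {

    // Serialize a Set<String> as: int size, then each element as a UTF string.
    // Unlike a Map-based encoding, no per-key dummy value is written.
    static byte[] serialize(Set<String> set) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeInt(set.size());
            for (String element : set) {
                out.writeUTF(element);
            }
        }
        return bos.toByteArray();
    }

    static Set<String> deserialize(byte[] bytes) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
            int size = in.readInt();
            Set<String> set = new HashSet<>(size);
            for (int i = 0; i < size; i++) {
                set.add(in.readUTF());
            }
            return set;
        }
    }

    public static void main(String[] args) throws IOException {
        Set<String> original = new HashSet<>(Arrays.asList("a", "b", "c"));
        Set<String> roundTripped = deserialize(serialize(original));
        if (!roundTripped.equals(original)) {
            throw new AssertionError("round trip failed");
        }
        System.out.println("round trip ok"); // prints round trip ok
    }
}
```

Flink's real serializers additionally delegate element encoding to a nested `TypeSerializer` and support snapshots for state compatibility; the sketch only shows the size-plus-elements layout that makes a dedicated set serializer leaner than the map-backed one.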





[jira] [Updated] (FLINK-35457) EventTimeWindowCheckpointingITCase fails on AZP as NPE

2024-05-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35457:
---
Description: 

{code:java}
Caused by: java.lang.NullPointerException
at 
org.apache.flink.runtime.checkpoint.filemerging.PhysicalFile.deleteIfNecessary(PhysicalFile.java:155)
at 
org.apache.flink.runtime.checkpoint.filemerging.PhysicalFile.decRefCount(PhysicalFile.java:141)
at 
org.apache.flink.runtime.checkpoint.filemerging.LogicalFile.discardWithCheckpointId(LogicalFile.java:118)
at 
org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.discardSingleLogicalFile(FileMergingSnapshotManagerBase.java:574)
at 
org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.discardLogicalFiles(FileMergingSnapshotManagerBase.java:588)
at 
org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.notifyCheckpointAborted(FileMergingSnapshotManagerBase.java:490)
at 
org.apache.flink.runtime.checkpoint.filemerging.WithinCheckpointFileMergingSnapshotManager.notifyCheckpointAborted(WithinCheckpointFileMergingSnapshotManager.java:61)
at 
org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyFileMergingSnapshotManagerCheckpoint(SubtaskCheckpointCoordinatorImpl.java:505)
at 
org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpoint(SubtaskCheckpointCoordinatorImpl.java:490)
at 
org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointAborted(SubtaskCheckpointCoordinatorImpl.java:414)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointAbortAsync$21(StreamTask.java:1513)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$23(StreamTask.java:1536)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMail(MailboxProcessor.java:398)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:367)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:352)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:229)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:998)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:923)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:970)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:949)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:763)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
at java.lang.Thread.run(Thread.java:748)
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=8538


> EventTimeWindowCheckpointingITCase fails on AZP as NPE
> --
>
> Key: FLINK-35457
> URL: https://issues.apache.org/jira/browse/FLINK-35457
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.PhysicalFile.deleteIfNecessary(PhysicalFile.java:155)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.PhysicalFile.decRefCount(PhysicalFile.java:141)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.LogicalFile.discardWithCheckpointId(LogicalFile.java:118)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.discardSingleLogicalFile(FileMergingSnapshotManagerBase.java:574)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.discardLogicalFiles(FileMergingSnapshotManagerBase.java:588)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.notifyCheckpointAborted(FileMergingSnapshotManagerBase.java:490)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.WithinCheckpointFileMergingSnapshotManager.notifyCheckpointAborted(WithinCheckpointFileMergingSnapshotManager.java:61)
>   at 
> 

[jira] [Created] (FLINK-35457) EventTimeWindowCheckpointingITCase fails on AZP as NPE

2024-05-26 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35457:
--

 Summary: EventTimeWindowCheckpointingITCase fails on AZP as NPE
 Key: FLINK-35457
 URL: https://issues.apache.org/jira/browse/FLINK-35457
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo








[jira] [Updated] (FLINK-35456) WindowAggregateITCase.testEventTimeHopWindow fails on AZP as NPE

2024-05-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35456:
---
Description: 
{code:java}
May 25 02:12:16 Caused by: java.lang.NullPointerException
May 25 02:12:16 at 
org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.isManagedByFileMergingManager(FileMergingSnapshotManagerBase.java:733)
May 25 02:12:16 at 
org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$null$4(FileMergingSnapshotManagerBase.java:687)
May 25 02:12:16 at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
May 25 02:12:16 at 
org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$restoreStateHandles$5(FileMergingSnapshotManagerBase.java:683)
May 25 02:12:16 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
May 25 02:12:16 at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
May 25 02:12:16 at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
May 25 02:12:16 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
May 25 02:12:16 at 
java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:419)
May 25 02:12:16 at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
May 25 02:12:16 at 
java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
May 25 02:12:16 at 
java.util.ArrayList$Itr.forEachRemaining(ArrayList.java:901)
May 25 02:12:16 at 
java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
May 25 02:12:16 at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
May 25 02:12:16 at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
May 25 02:12:16 at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
May 25 02:12:16 at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
May 25 02:12:16 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
May 25 02:12:16 at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
May 25 02:12:16 at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
May 25 02:12:16 at 
java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
May 25 02:12:16 at 
java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
May 25 02:12:16 at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
May 25 02:12:16 at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
May 25 02:12:16 at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
May 25 02:12:16 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
May 25 02:12:16 at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
May 25 02:12:16 at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
May 25 02:12:16 at 
org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.restoreStateHandles(FileMergingSnapshotManagerBase.java:680)
May 25 02:12:16 at 
org.apache.flink.runtime.checkpoint.filemerging.SubtaskFileMergingManagerRestoreOperation.restore(SubtaskFileMergingManagerRestoreOperation.java:102)
May 25 02:12:16 at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.registerRestoredStateToFileMergingManager(StreamTaskStateInitializerImpl.java:353)
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=12158


> WindowAggregateITCase.testEventTimeHopWindow fails on AZP as NPE
> 
>
> Key: FLINK-35456
> URL: https://issues.apache.org/jira/browse/FLINK-35456
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> May 25 02:12:16 Caused by: java.lang.NullPointerException
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.isManagedByFileMergingManager(FileMergingSnapshotManagerBase.java:733)
> May 25 02:12:16   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.lambda$null$4(FileMergingSnapshotManagerBase.java:687)
> May 25 02:12:16   at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
> May 25 

[jira] [Created] (FLINK-35456) WindowAggregateITCase.testEventTimeHopWindow fails on AZP as NPE

2024-05-26 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35456:
--

 Summary: WindowAggregateITCase.testEventTimeHopWindow fails on AZP 
as NPE
 Key: FLINK-35456
 URL: https://issues.apache.org/jira/browse/FLINK-35456
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo








[jira] [Commented] (FLINK-35068) Introduce built-in serialization support for Set

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849599#comment-17849599
 ] 

Weijie Guo commented on FLINK-35068:


Thanks [~Zhanghao Chen], you are assigned.

> Introduce built-in serialization support for Set
> 
>
> Key: FLINK-35068
> URL: https://issues.apache.org/jira/browse/FLINK-35068
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
> Fix For: 1.20.0
>
>
> Introduce built-in serialization support for {{{}Set{}}}, another common Java 
> collection type. We'll need to add a new built-in serializer for it 
> ({{{}MultiSetTypeInformation{}}} utilizes {{MapSerializer}} underneath, but 
> it could be more efficient for common {{{}Set{}}}).





[jira] [Assigned] (FLINK-35068) Introduce built-in serialization support for Set

2024-05-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-35068:
--

Assignee: Zhanghao Chen

> Introduce built-in serialization support for Set
> 
>
> Key: FLINK-35068
> URL: https://issues.apache.org/jira/browse/FLINK-35068
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
> Fix For: 1.20.0
>
>
> Introduce built-in serialization support for {{{}Set{}}}, another common Java 
> collection type. We'll need to add a new built-in serializer for it 
> ({{{}MultiSetTypeInformation{}}} utilizes {{MapSerializer}} underneath, but 
> it could be more efficient for common {{{}Set{}}}).





[jira] [Closed] (FLINK-34123) Introduce built-in serialization support for Map and List

2024-05-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-34123.
--
Resolution: Done

master (1.20) via f860631c523c1d446c0d01046f0fbe6055174dc6.

> Introduce built-in serialization support for Map and List
> -
>
> Key: FLINK-34123
> URL: https://issues.apache.org/jira/browse/FLINK-34123
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Introduce built-in serialization support for Map and List, two common 
> collection types for which Flink already have custom serializers implemented.





Re: [PR] [FLINK-34123][core][type] Introduce built-in serialization support for map and lists [flink]

2024-05-26 Thread via GitHub


reswqa merged PR #24634:
URL: https://github.com/apache/flink/pull/24634





[jira] [Commented] (FLINK-35324) Avro format can not perform projection pushdown for specific fields

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849596#comment-17849596
 ] 

Weijie Guo commented on FLINK-35324:


Does 1.20 have the same problem?

> Avro format can not perform projection pushdown for specific fields
> ---
>
> Key: FLINK-35324
> URL: https://issues.apache.org/jira/browse/FLINK-35324
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.17.0
>Reporter: SuDewei
>Priority: Blocker
>
> AvroFormatFactory.java#createDecodingFormat returns a 
> ProjectableDecodingFormat, which means the Avro format deserializer should be 
> able to perform projection pushdown. However, in practice the Avro format 
> seems unable to perform projection pushdown for specific fields. 
> For example, there are such schema and sample data in Kafka:
> {code:java}
> -- schema
> CREATE TABLE kafka (
>`user_id` BIGINT,
>`name` STRING,
> `timestamp` TIMESTAMP(3) METADATA,
> `event_id` BIGINT,
> `payload` STRING not null
> ) WITH (
>  'connector' = 'kafka',
>  ...
> )
>  
>  -- sample data like
> (3, 'name 3', TIMESTAMP '2020-03-10 13:12:11.123', 102, 'payload 3') {code}
> The data can be successfully deserialized in this way:
> {code:java}
> Projection physicalProjections = Projection.of( new int[] {0,1,2} );
> DataType physicalFormatDataType = 
> physicalProjections.project(this.physicalDataType);
> (DeserializationSchema) ((ProjectableDecodingFormat) format)
>     .createRuntimeDecoder(context, this.physicalDataType, 
> physicalProjections.toNestedIndexes()); {code}
> The data would be:
> {code:java}
> +I(3,name 3,102) {code}
> However, when the projection index is replaced with values that do not start 
> from 0, the data cannot be successfully deserialized, for example:
> {code:java}
> Projection physicalProjections = Projection.of( new int[] {1,2} );
> DataType physicalFormatDataType = 
> physicalProjections.project(this.physicalDataType);
> (DeserializationSchema) ((ProjectableDecodingFormat) format)
> .createRuntimeDecoder(context, this.physicalDataType, 
> physicalProjections.toNestedIndexes()); {code}
> The exception would be like:
> {code:java}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: -49
> at 
> org.apache.avro.io.parsing.Symbol$Alternative.getSymbol(Symbol.java:460)
> at 
> org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:283)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:188)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:161)
> at 
> org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:260)
> at 
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:248)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:180)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:161)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:154)
> at 
> org.apache.flink.formats.avro.AvroDeserializationSchema.deserialize(AvroDeserializationSchema.java:142)
> at 
> org.apache.flink.formats.avro.AvroRowDataDeserializationSchema.deserialize(AvroRowDataDeserializationSchema.java:103)
> ... 19 more {code}
> It seems that Avro format does not support projection pushdown for arbitrary 
> fields. Is my understanding correct?
> If this is the case, then I think the Avro format should not implement the 
> ProjectableDecodingFormat interface, since it can only provide very limited 
> pushdown capabilities.
> This problem may block the connector implementing the projection pushdown 
> capability since the connector would determine whether projection pushdown 
> can be performed by judging whether the format has implemented the 
> ProjectableDecodingFormat interface or not.
>  
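[Editor's note] To make the index convention in the report concrete: the projection indexes refer to positions in the full physical schema, and the reported failure occurs only when those indexes do not form a prefix starting at 0. The index arithmetic itself is trivial, as this hedged plain-Java illustration shows (no Flink or Avro APIs involved — the actual failure happens later, inside Avro's ResolvingDecoder):

```java
import java.util.Arrays;

public class ProjectionIndexDemo {

    // Apply top-level projection indexes to a row, the way a
    // projection like {0,1,2} or {1,2} selects physical fields.
    static Object[] project(Object[] row, int[] indexes) {
        Object[] out = new Object[indexes.length];
        for (int i = 0; i < indexes.length; i++) {
            out[i] = row[indexes[i]];
        }
        return out;
    }

    public static void main(String[] args) {
        // Physical fields from the report: (user_id, name, event_id).
        Object[] row = {3L, "name 3", 102L};
        // Prefix projection {0,1,2}: works according to the report.
        System.out.println(Arrays.toString(project(row, new int[] {0, 1, 2}))); // prints [3, name 3, 102]
        // Non-prefix projection {1,2}: this is the case that fails in the
        // Avro decoder, even though the selection itself is well defined.
        System.out.println(Arrays.toString(project(row, new int[] {1, 2}))); // prints [name 3, 102]
    }
}
```

The sketch supports the reporter's point: the projection semantics are unambiguous, so the ArrayIndexOutOfBoundsException points at how the projected reader schema is resolved against the writer schema, not at the projection indexes themselves.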





Re: [PR] [FLINK-35129][cdc][postgres] Add checkpoint cycle option for commit offsets [flink-cdc]

2024-05-26 Thread via GitHub


loserwang1024 commented on code in PR #3349:
URL: https://github.com/apache/flink-cdc/pull/3349#discussion_r1615396296


##
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-postgres-cdc/src/test/java/org/apache/flink/cdc/connectors/postgres/source/reader/PostgresSourceReaderTest.java:
##
@@ -0,0 +1,75 @@
+package org.apache.flink.cdc.connectors.postgres.source.reader;

Review Comment:
   Please add a license header and a Javadoc comment to the class, or the compile will fail



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35423) ARRAY_EXCEPT should support set semantics

2024-05-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35423:
---
Affects Version/s: 1.20.0

> ARRAY_EXCEPT should support set semantics
> -
>
> Key: FLINK-35423
> URL: https://issues.apache.org/jira/browse/FLINK-35423
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available
>
> After a number of discussions e.g. here [1]
> It was decided to follow set semantics for {{ARRAY_EXCEPT}} and 
> {{ARRAY_INTERSECT}}.
> It is marked as a blocker since {{ARRAY_EXCEPT}} was added in 1.20 only and 
> has not been released yet, so the change should be done before 1.20.0 release 
> to avoid inconsistencies.
> [1] https://github.com/apache/flink/pull/24526
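One plausible reading of the set semantics agreed above, sketched in plain Python (this is an illustration of the intended behaviour, not Flink's runtime implementation): each element of the first array that is absent from the second is kept once, in first-occurrence order.

```python
def array_except(a, b):
    """ARRAY_EXCEPT with set semantics: elements of `a` not present in `b`,
    de-duplicated, keeping first-occurrence order. Plain-Python sketch."""
    if a is None or b is None:
        return None
    excluded = set(b)   # membership test per set semantics
    seen = set()        # de-duplicates the result
    out = []
    for e in a:
        if e not in excluded and e not in seen:
            seen.add(e)
            out.append(e)
    return out

print(array_except([1, 1, 2, 3], [2]))  # [1, 3]
```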





Re: [PR] [FLINK-31664] Implement ARRAY_INTERSECT function [flink]

2024-05-26 Thread via GitHub


liuyongvs commented on code in PR #24526:
URL: https://github.com/apache/flink/pull/24526#discussion_r1615393266


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArrayIntersectFunction.java:
##
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.runtime.util.EqualityAndHashcodeProvider;
+import org.apache.flink.table.runtime.util.ObjectContainer;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+/** Implementation of {@link BuiltInFunctionDefinitions#ARRAY_INTERSECT}. */
+@Internal
+public class ArrayIntersectFunction extends BuiltInScalarFunction {
+private final ArrayData.ElementGetter elementGetter;
+private final EqualityAndHashcodeProvider equalityAndHashcodeProvider;
+
+public ArrayIntersectFunction(SpecializedFunction.SpecializedContext 
context) {
+super(BuiltInFunctionDefinitions.ARRAY_INTERSECT, context);
+final DataType dataType =
+((CollectionDataType) 
context.getCallContext().getArgumentDataTypes().get(0))
+.getElementDataType()
+.toInternal();
+elementGetter = 
ArrayData.createElementGetter(dataType.toInternal().getLogicalType());
+this.equalityAndHashcodeProvider = new 
EqualityAndHashcodeProvider(context, dataType);
+}
+
+@Override
+public void open(FunctionContext context) throws Exception {
+equalityAndHashcodeProvider.open(context);
+}
+
+public @Nullable ArrayData eval(ArrayData arrayOne, ArrayData arrayTwo) {
+try {
+if (arrayOne == null || arrayTwo == null) {
+return null;
+}
+
+List list = new ArrayList<>();
+Set set = new HashSet<>();
+for (int pos = 0; pos < arrayTwo.size(); pos++) {
+final Object element = 
elementGetter.getElementOrNull(arrayTwo, pos);
+final ObjectContainer objectContainer = 
createObjectContainer(element);
+set.add(objectContainer);
+}
+for (int pos = 0; pos < arrayOne.size(); pos++) {
+final Object element = 
elementGetter.getElementOrNull(arrayOne, pos);
+final ObjectContainer objectContainer = 
createObjectContainer(element);
+if (set.contains(objectContainer)) {
+list.add(element);
+}
+}
+return new GenericArrayData(list.toArray());
+} catch (Throwable t) {
+throw new FlinkRuntimeException(t);

Review Comment:
   all the built-in function implementations do the same @davidradl 
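The eval logic in the Java above — build a hash set from the second array, then keep the matching elements of the first — can be mirrored in a few lines of Python. Note that, like the Java shown, this sketch keeps duplicates from the first array; under the set semantics agreed in FLINK-35423 the result would additionally be de-duplicated.

```python
def array_intersect(array_one, array_two):
    """Mirrors ArrayIntersectFunction's hash-set approach: order and
    multiplicity follow the first array. NULL handling is simplified
    (no ObjectContainer wrapping as in the Flink runtime)."""
    if array_one is None or array_two is None:
        return None
    seen = set(array_two)  # corresponds to the HashSet built from arrayTwo
    return [e for e in array_one if e in seen]

print(array_intersect([1, 2, 2, 3], [2, 3, 4]))  # [2, 2, 3]
```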






Re: [PR] [FLINK-31223][sqlgateway] Introduce getFlinkConfigurationOptions to [flink]

2024-05-26 Thread via GitHub


reswqa commented on PR #24741:
URL: https://github.com/apache/flink/pull/24741#issuecomment-2132504566

   > I suggest we leave the fix as is, with the method you added, and not do a 
large amount of 1.19 -1.18 back ports. WDYT?
   
   It looks like we can't avoid a change to the public API, and if so, I 
suggest we don't include this fix in 1.18. A bug-fix release (the next patch 
release of 1.18) should not make changes to the public API; that's the rule 
we follow (the nightly CI also rejects such changes).





[jira] [Comment Edited] (FLINK-35429) We don't need introduce getFlinkConfigurationOptions for SqlGatewayRestEndpointFactory#Context

2024-05-26 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848927#comment-17848927
 ] 

Weijie Guo edited comment on FLINK-35429 at 5/27/24 1:42 AM:
-

master(1.20) via c3221a649b9daa8374cc24ab039f6352fb2c6edf.
release-1.19 via a450980de65eaead734349ed44452f572e5e329d.


was (Author: weijie guo):
release-1.19 via a450980de65eaead734349ed44452f572e5e329d.

> We don't need introduce getFlinkConfigurationOptions for 
> SqlGatewayRestEndpointFactory#Context
> --
>
> Key: FLINK-35429
> URL: https://issues.apache.org/jira/browse/FLINK-35429
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.1
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
>
> We don't need this method, as ReadableConfig has a toMap method now.
> This will fix the compile error in 1.19
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59754=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=13638.





[jira] [Updated] (FLINK-35429) We don't need introduce getFlinkConfigurationOptions for SqlGatewayRestEndpointFactory#Context

2024-05-26 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35429:
---
Fix Version/s: 1.20.0
   1.19.1

> We don't need introduce getFlinkConfigurationOptions for 
> SqlGatewayRestEndpointFactory#Context
> --
>
> Key: FLINK-35429
> URL: https://issues.apache.org/jira/browse/FLINK-35429
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.1
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0, 1.19.1
>
>
> We don't need this method, as ReadableConfig has a toMap method now.
> This will fix the compile error in 1.19
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59754=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=13638.





Re: [PR] [hotfix][sqlgateway] Do not introduce new method in SqlGatewayEndpointFactory#Context [flink]

2024-05-26 Thread via GitHub


reswqa merged PR #24838:
URL: https://github.com/apache/flink/pull/24838





Re: [PR] [hotfix][sqlgateway] Do not introduce new method in SqlGatewayEndpointFactory#Context [flink]

2024-05-26 Thread via GitHub


reswqa commented on PR #24838:
URL: https://github.com/apache/flink/pull/24838#issuecomment-2132499806

   Thanks for the review!





Re: [PR] [FLINK-35348][table] Introduce refresh materialized table rest api [flink]

2024-05-26 Thread via GitHub


hackergin commented on PR #24844:
URL: https://github.com/apache/flink/pull/24844#issuecomment-2132497948

   @flinkbot run azure 





Re: [PR] enable async profiler [flink-benchmarks]

2024-05-26 Thread via GitHub


SamBarker commented on code in PR #90:
URL: https://github.com/apache/flink-benchmarks/pull/90#discussion_r1615355106


##
pom.xml:
##
@@ -345,6 +346,8 @@ under the License.

-classpath


org.openjdk.jmh.Main
+   
-prof
+   
async:libPath=${asyncProfilerLib};output=flamegraph;dir=/tmp/profile-results

Review Comment:
   Argh, this fails if `asyncProfilerLib` is invalid.






[PR] enable async profiler [flink-benchmarks]

2024-05-26 Thread via GitHub


SamBarker opened a new pull request, #90:
URL: https://github.com/apache/flink-benchmarks/pull/90

   In exploring 
[FLINK-35215](https://issues.apache.org/jira/browse/FLINK-35215), being able to 
generate flame graphs from each benchmark run makes comparing results a lot 
easier.





[jira] [Resolved] (FLINK-35446) FileMergingSnapshotManagerBase throws a NullPointerException

2024-05-26 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan resolved FLINK-35446.
-
Fix Version/s: 1.20.0
 Assignee: Zakelly Lan
   Resolution: Fixed

> FileMergingSnapshotManagerBase throws a NullPointerException
> 
>
> Key: FLINK-35446
> URL: https://issues.apache.org/jira/browse/FLINK-35446
> Project: Flink
>  Issue Type: Bug
>Reporter: Ryan Skraba
>Assignee: Zakelly Lan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> * 1.20 Java 11 / Test (module: tests) 
> https://github.com/apache/flink/actions/runs/9217608897/job/25360103124#step:10:8641
> {{ResumeCheckpointManuallyITCase.testExternalizedIncrementalRocksDBCheckpointsWithLocalRecoveryZookeeper}}
>  throws a NullPointerException when it tries to restore state handles: 
> {code}
> Error: 02:57:52 02:57:52.551 [ERROR] Tests run: 48, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 268.6 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase
> Error: 02:57:52 02:57:52.551 [ERROR] 
> org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedIncrementalRocksDBCheckpointsWithLocalRecoveryZookeeper[RestoreMode
>  = CLAIM] -- Time elapsed: 3.145 s <<< ERROR!
> May 24 02:57:52 org.apache.flink.runtime.JobException: Recovery is suppressed 
> by NoRestartBackoffTimeStrategy
> May 24 02:57:52   at 
> org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:219)
> May 24 02:57:52   at 
> org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailureAndReport(ExecutionFailureHandler.java:166)
> May 24 02:57:52   at 
> org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:121)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:279)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:270)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:263)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:788)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:765)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:83)
> May 24 02:57:52   at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:496)
> May 24 02:57:52   at 
> jdk.internal.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> May 24 02:57:52   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 24 02:57:52   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:318)
> May 24 02:57:52   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:316)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:229)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:88)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
> May 24 02:57:52   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33)
> May 24 02:57:52   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29)
> May 24 02:57:52   at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
> May 24 02:57:52   at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
> May 24 02:57:52   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
> May 24 02:57:52   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
> May 24 02:57:52   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
> May 24 02:57:52   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
> May 24 02:57:52   at 
> org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547)
> May 24 02:57:52   at 

[jira] [Commented] (FLINK-35446) FileMergingSnapshotManagerBase throws a NullPointerException

2024-05-26 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849563#comment-17849563
 ] 

Zakelly Lan commented on FLINK-35446:
-

Merged into master via 096b4a6bef98ccff918ef3994c16b7361bca21b8

> FileMergingSnapshotManagerBase throws a NullPointerException
> 
>
> Key: FLINK-35446
> URL: https://issues.apache.org/jira/browse/FLINK-35446
> Project: Flink
>  Issue Type: Bug
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> * 1.20 Java 11 / Test (module: tests) 
> https://github.com/apache/flink/actions/runs/9217608897/job/25360103124#step:10:8641
> {{ResumeCheckpointManuallyITCase.testExternalizedIncrementalRocksDBCheckpointsWithLocalRecoveryZookeeper}}
>  throws a NullPointerException when it tries to restore state handles: 
> {code}
> Error: 02:57:52 02:57:52.551 [ERROR] Tests run: 48, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 268.6 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase
> Error: 02:57:52 02:57:52.551 [ERROR] 
> org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedIncrementalRocksDBCheckpointsWithLocalRecoveryZookeeper[RestoreMode
>  = CLAIM] -- Time elapsed: 3.145 s <<< ERROR!
> May 24 02:57:52 org.apache.flink.runtime.JobException: Recovery is suppressed 
> by NoRestartBackoffTimeStrategy
> May 24 02:57:52   at 
> org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:219)
> May 24 02:57:52   at 
> org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailureAndReport(ExecutionFailureHandler.java:166)
> May 24 02:57:52   at 
> org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:121)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:279)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:270)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:263)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:788)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:765)
> May 24 02:57:52   at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:83)
> May 24 02:57:52   at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:496)
> May 24 02:57:52   at 
> jdk.internal.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> May 24 02:57:52   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 24 02:57:52   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:318)
> May 24 02:57:52   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:316)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:229)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:88)
> May 24 02:57:52   at 
> org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
> May 24 02:57:52   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33)
> May 24 02:57:52   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29)
> May 24 02:57:52   at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
> May 24 02:57:52   at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
> May 24 02:57:52   at 
> org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
> May 24 02:57:52   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
> May 24 02:57:52   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
> May 24 02:57:52   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
> May 24 02:57:52   at 
> org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547)
> May 24 02:57:52   at 
> 

Re: [PR] [FLINK-35446] Fix NPE when disabling checkpoint file merging but restore from merged files [flink]

2024-05-26 Thread via GitHub


Zakelly closed pull request #24840: [FLINK-35446] Fix NPE when disabling 
checkpoint file merging but restore from merged files
URL: https://github.com/apache/flink/pull/24840





[jira] [Created] (FLINK-35455) Non-idempotent unit tests

2024-05-26 Thread Kaiyao Ke (Jira)
Kaiyao Ke created FLINK-35455:
-

 Summary: Non-idempotent unit tests
 Key: FLINK-35455
 URL: https://issues.apache.org/jira/browse/FLINK-35455
 Project: Flink
  Issue Type: Bug
Reporter: Kaiyao Ke


We found that several unit tests are non-idempotent: they pass in the first 
run but fail in the second run. These failures will not be caught by the CI, 
since Surefire only executes each test once by default. A fix is necessary 
since unit tests should be self-contained, ensuring that the state of the 
system under test is consistent at the beginning of each test. In practice, 
fixing non-idempotent tests helps proactively avoid state pollution that 
results in test-order dependency (which could cause problems under test 
selection, prioritization, or parallelization).

An example of a non-idempotent test:
`SerializerConfigImplTest#testLoadingTypeInfoFactoriesFromSerializationConfig`
This test fails in repeated runs because a typeInfoFactory for type `class 
org.apache.flink.api.common.serialization.SerializerConfigImplTest` is already 
registered in the first test execution.

Reproduce (using the `NIOInspector` plugin because Surefire does not support 
re-running a specific test twice without changing the source code):
```
cd flink-core
mvn edu.illinois:NIOInspector:rerun 
-Dtest=org.apache.flink.api.common.serialization.SerializerConfigImplTest#testLoadingTypeInfoFactoriesFromSerializationConfig
```

Please kindly let us know if you find it necessary to resolve test 
non-idempotency. We would open PRs to fix tests like this if so.





Re: [PR] [FLINK-24298] Add GCP PubSub Sink API Implementation, bump Flink version to 1.19.0 [flink-connector-gcp-pubsub]

2024-05-26 Thread via GitHub


snuyanzin commented on PR #27:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/27#issuecomment-2132266341

   it seems the dependency convergence issue should be resolved





Re: [PR] [FLINK-24298] Add GCP PubSub Sink API Implementation, bump Flink version to 1.19.0 [flink-connector-gcp-pubsub]

2024-05-26 Thread via GitHub


snuyanzin commented on code in PR #27:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/27#discussion_r1615240064


##
flink-connector-gcp-pubsub/src/main/java/org/apache/flink/connector/gcp/pubsub/common/PubSubConstants.java:
##
@@ -0,0 +1,11 @@
+package org.apache.flink.connector.gcp.pubsub.common;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/** Constants for PubSub. */
+@PublicEvolving
+public class PubSubConstants {

Review Comment:
   Do we really need a separate class for these constants if they are used 
only in `PubSubSinkV2Builder` and `PubSubSinkV2BuilderTest`?






[jira] [Commented] (FLINK-35424) Elasticsearch connector 8 supports SSL context

2024-05-26 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849556#comment-17849556
 ] 

Sergey Nuyanzin commented on FLINK-35424:
-

Merged as 
[50327f84d46ac2af10d352730ab0dfd0432249bd|https://github.com/apache/flink-connector-elasticsearch/commit/50327f84d46ac2af10d352730ab0dfd0432249bd]

> Elasticsearch connector 8 supports SSL context
> --
>
> Key: FLINK-35424
> URL: https://issues.apache.org/jira/browse/FLINK-35424
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.17.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>  Labels: pull-request-available
>
> In FLINK-34369, we added SSL support for the base Elasticsearch sink class 
> that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector 
> uses the AsyncSink API and does not use the aforementioned base sink class, 
> so it needs a separate change to support this feature.
> This is especially important for Elasticsearch 8, which enables security by 
> default. Meanwhile, it would be worthwhile to add integration tests for 
> this SSL context support.





[jira] [Resolved] (FLINK-35424) Elasticsearch connector 8 supports SSL context

2024-05-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-35424.
-
Fix Version/s: elasticsearch-3.2.0
   Resolution: Fixed

> Elasticsearch connector 8 supports SSL context
> --
>
> Key: FLINK-35424
> URL: https://issues.apache.org/jira/browse/FLINK-35424
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.17.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: elasticsearch-3.2.0
>
>
> In FLINK-34369, we added SSL support for the base Elasticsearch sink class 
> that is used by both Elasticsearch 6 and 7. The Elasticsearch 8 connector 
> uses the AsyncSink API and does not use the aforementioned base sink class, 
> so it needs a separate change to support this feature.
> This is especially important for Elasticsearch 8, which enables security by 
> default. Meanwhile, it would be worthwhile to add integration tests for 
> this SSL context support.





Re: [PR] [FLINK-35424] Elasticsearch connector 8 supports SSL context [flink-connector-elasticsearch]

2024-05-26 Thread via GitHub


snuyanzin merged PR #104:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/104





Re: [PR] [FLINK-35424] Elasticsearch connector 8 supports SSL context [flink-connector-elasticsearch]

2024-05-26 Thread via GitHub


snuyanzin commented on PR #104:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/104#issuecomment-2132254635

   Thanks for the valuable improvement @liuml07 
   thanks for the review @reta  





Re: [PR] [FLINK-35435] Add timeout Configuration to Async Sink [flink]

2024-05-26 Thread via GitHub


vahmed-hamdy commented on PR #24839:
URL: https://github.com/apache/flink/pull/24839#issuecomment-2132249502

   @flinkbot run azure





Re: [PR] [FLINK-35348][table] Introduce refresh materialized table rest api [flink]

2024-05-26 Thread via GitHub


flinkbot commented on PR #24844:
URL: https://github.com/apache/flink/pull/24844#issuecomment-2132246140

   
   ## CI report:
   
   * b9b0aad8794f5fe34bdb3a5c0d3a75826468 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-35348) Implement materialized table refresh rest api

2024-05-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35348:
---
Labels: pull-request-available  (was: )

> Implement materialized table refresh rest api 
> --
>
> Key: FLINK-35348
> URL: https://issues.apache.org/jira/browse/FLINK-35348
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Gateway
>Affects Versions: 1.20.0
>Reporter: dalongliu
>Assignee: Feng Jin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[PR] [FLINK-35348][table] Introduce refresh materialized table rest api [flink]

2024-05-26 Thread via GitHub


hackergin opened a new pull request, #24844:
URL: https://github.com/apache/flink/pull/24844

   ## What is the purpose of the change
   
   *Introduce refresh materialized table rest api*
   
   ## Brief change log
   
 - *test-filesystem supports the `partition.fields` option*
 - *Introduce refresh materialized table rest api*
   
   ## Verifying this change
   
   * Add test case in MaterializedTableITCase to verify refreshing a materialized 
table.
   * Add test class SqlGatewayRestEndpointMaterializedTableITCase to verify 
refresh materialized table rest api.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (Will be added in a separate PR.)
   





Re: [PR] [FLINK-35354] Support host mapping in Flink tikv cdc [flink-cdc]

2024-05-26 Thread via GitHub


Mrart commented on PR #3336:
URL: https://github.com/apache/flink-cdc/pull/3336#issuecomment-2132210507

   > > I think we can specific a config prefix pass config to tikv client like 
cdc source connector pass config to debezium by `debezium.*`. This way will be 
flexible. @leonardBang @Mrart cc
   > 
   > tikv.* is configured on tikv config. hostmapping and pd config are similar 
to the database host configuration, and the current configuration looks more 
appropriate
   
   Thanks for helping with the review.





Re: [PR] [FLINK-35354] Support host mapping in Flink tikv cdc [flink-cdc]

2024-05-26 Thread via GitHub


GOODBOY008 commented on PR #3336:
URL: https://github.com/apache/flink-cdc/pull/3336#issuecomment-2132201295

   As discussed offline with @Mrart, the `tikvclient` config `hostMapping` differs 
from the others, so I think the solution in this PR is OK.





Re: [PR] [FLINK-35453][core] StreamReader Charset fix with UTF8 in core files [flink]

2024-05-26 Thread via GitHub


xuzifu666 commented on PR #24842:
URL: https://github.com/apache/flink/pull/24842#issuecomment-2132165935

   @flinkbot run azure





Re: [PR] [FLINK-35454] Allow connector classes to depend on internal Flink util classes [flink]

2024-05-26 Thread via GitHub


flinkbot commented on PR #24843:
URL: https://github.com/apache/flink/pull/24843#issuecomment-2132147690

   
   ## CI report:
   
   * 7dd3dc90fcb9c37fd0beccba69cf19586efe59f6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-35435] Add timeout Configuration to Async Sink [flink]

2024-05-26 Thread via GitHub


vahmed-hamdy commented on PR #24839:
URL: https://github.com/apache/flink/pull/24839#issuecomment-2132141416

   @dannycranmer thanks for the review,  Could you have another pass? 
   
   The CI is failing due to a bug in the architecture tests, FLINK-35454. 
   Could you take a look at this 
[PR](https://github.com/apache/flink/pull/24843/commits/7dd3dc90fcb9c37fd0beccba69cf19586efe59f6)
 so that we can rebase this one afterwards?





[jira] [Updated] (FLINK-35454) Connector ArchTests fails due to dependency on flink.util.Preconditions

2024-05-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35454:
---
Labels: pull-request-available  (was: )

> Connector ArchTests fails due to dependency on flink.util.Preconditions
> --
>
> Key: FLINK-35454
> URL: https://issues.apache.org/jira/browse/FLINK-35454
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.20.0
>Reporter: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> h2. Description
>  - The ArchUnit rules for connectors limit the dependencies of connector
> classes to @Public or @PublicEvolving classes, with the exception of classes
> in connector packages. This is too strict, since connectors should also be
> able to depend on internal util classes such as {{Preconditions}} and
> {{ExceptionUtils}}.
> {code:java}
> freeze(
>         javaClassesThat(resideInAnyPackage(CONNECTOR_PACKAGES))
>                 .and()
>                 .areNotAnnotatedWith(Deprecated.class)
>                 .should()
>                 .onlyDependOnClassesThat(
>                         areFlinkClassesThatResideOutsideOfConnectorPackagesAndArePublic()
>                                 .or(JavaClass.Predicates.resideOutsideOfPackages(
>                                         "org.apache.flink.."))
>                                 .or(JavaClass.Predicates.resideInAnyPackage(
>                                         CONNECTOR_PACKAGES)))
>                 .as("Connector production code must depend only on public API when outside of connector packages"));
> {code}





[PR] [FLINK-35454] Allow connector classes to depend on internal Flink util classes [flink]

2024-05-26 Thread via GitHub


vahmed-hamdy opened a new pull request, #24843:
URL: https://github.com/apache/flink/pull/24843

   …
   
   
   
   ## What is the purpose of the change
   
   Allow connector classes to depend on internal Flink util classes in ArchUnit 
tests.
   
   
   ## Brief change log
   
- Added internal classes in `flink.util` as allowed dependencies of 
Connector classes in ArchUnit Rule.
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   
   This change is a trivial rework / code cleanup without any test coverage, 
tested manually.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Commented] (FLINK-35454) Connector ArchTests fails due to dependency on flink.util.Preconditions

2024-05-26 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849505#comment-17849505
 ] 

Ahmed Hamdy commented on FLINK-35454:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59823=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=fc5181b0-e452-5c8f-68de-1097947f6483

> Connector ArchTests fails due to dependency on flink.util.Preconditions
> --
>
> Key: FLINK-35454
> URL: https://issues.apache.org/jira/browse/FLINK-35454
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.20.0
>Reporter: Ahmed Hamdy
>Priority: Major
> Fix For: 1.20.0
>
>
> h2. Description
>  - The ArchUnit rules for connectors limit the dependencies of connector
> classes to @Public or @PublicEvolving classes, with the exception of classes
> in connector packages. This is too strict, since connectors should also be
> able to depend on internal util classes such as {{Preconditions}} and
> {{ExceptionUtils}}.
> {code:java}
> freeze(
>         javaClassesThat(resideInAnyPackage(CONNECTOR_PACKAGES))
>                 .and()
>                 .areNotAnnotatedWith(Deprecated.class)
>                 .should()
>                 .onlyDependOnClassesThat(
>                         areFlinkClassesThatResideOutsideOfConnectorPackagesAndArePublic()
>                                 .or(JavaClass.Predicates.resideOutsideOfPackages(
>                                         "org.apache.flink.."))
>                                 .or(JavaClass.Predicates.resideInAnyPackage(
>                                         CONNECTOR_PACKAGES)))
>                 .as("Connector production code must depend only on public API when outside of connector packages"));
> {code}





[jira] [Created] (FLINK-35454) Connector ArchTests fails due to dependency on flink.util.Preconditions

2024-05-26 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-35454:
---

 Summary: Connector ArchTests fails due to dependency on 
flink.util.Preconditions
 Key: FLINK-35454
 URL: https://issues.apache.org/jira/browse/FLINK-35454
 Project: Flink
  Issue Type: Bug
  Components: Test Infrastructure
Affects Versions: 1.20.0
Reporter: Ahmed Hamdy
 Fix For: 1.20.0


h2. Description
 - The ArchUnit rules for connectors limit the dependencies of connector
classes to @Public or @PublicEvolving classes, with the exception of classes in
connector packages. This is too strict, since connectors should also be able to
depend on internal util classes such as {{Preconditions}} and
{{ExceptionUtils}}.

{code:java}
freeze(
        javaClassesThat(resideInAnyPackage(CONNECTOR_PACKAGES))
                .and()
                .areNotAnnotatedWith(Deprecated.class)
                .should()
                .onlyDependOnClassesThat(
                        areFlinkClassesThatResideOutsideOfConnectorPackagesAndArePublic()
                                .or(JavaClass.Predicates.resideOutsideOfPackages(
                                        "org.apache.flink.."))
                                .or(JavaClass.Predicates.resideInAnyPackage(
                                        CONNECTOR_PACKAGES)))
                .as("Connector production code must depend only on public API when outside of connector packages"));
{code}
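As a hedged illustration of the fix this issue proposes (this is NOT the actual ArchUnit rule code, and the class, method, and package-prefix constants below are assumptions for illustration only), the allow-list logic can be sketched as a plain predicate over fully qualified class names:

```java
// Sketch of the proposed connector dependency rule: a dependency is allowed
// if it is outside Flink, inside a connector package, inside an allowed
// internal util package (the proposed addition), or a public API class.
public class ConnectorRuleSketch {

    // Proposed addition: internal util packages connectors may depend on.
    static final String[] ALLOWED_UTIL_PACKAGES = {"org.apache.flink.util."};

    static boolean isAllowedDependency(String className, boolean isPublicApi) {
        if (!className.startsWith("org.apache.flink.")) {
            return true; // outside Flink entirely
        }
        if (className.startsWith("org.apache.flink.connector.")) {
            return true; // inside connector packages
        }
        for (String pkg : ALLOWED_UTIL_PACKAGES) {
            if (className.startsWith(pkg)) {
                return true; // proposed: internal util classes are allowed
            }
        }
        // Otherwise only @Public/@PublicEvolving classes are allowed.
        return isPublicApi;
    }

    public static void main(String[] args) {
        // Preconditions is internal but would now be allowed.
        if (!isAllowedDependency("org.apache.flink.util.Preconditions", false)) {
            throw new AssertionError();
        }
        // A non-public internal runtime class is still forbidden.
        if (isAllowedDependency("org.apache.flink.runtime.jobmaster.JobMaster", false)) {
            throw new AssertionError();
        }
        System.out.println("ok");
    }
}
```

In ArchUnit terms, the equivalent change would be one more `.or(...)` branch in the `onlyDependOnClassesThat(...)` predicate covering the util package.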


