[jira] [Created] (FLINK-33034) Incorrect StateBackendTestBase#testGetKeysAndNamespaces

2023-09-04 Thread Dmitriy Linevich (Jira)
Dmitriy Linevich created FLINK-33034:


 Summary: Incorrect StateBackendTestBase#testGetKeysAndNamespaces
 Key: FLINK-33034
 URL: https://issues.apache.org/jira/browse/FLINK-33034
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.17.1, 1.15.0, 1.12.2
Reporter: Dmitriy Linevich
 Fix For: 1.17.1, 1.15.0
 Attachments: image-2023-09-05-12-51-28-203.png

In this test, the first namespace 'ns1' does not exist in the state because the 
ValueState is created incorrectly for the test. This needs to be fixed either by 
changing how the ValueState is created or by changing how the state is updated.

 

If the following code is added 
[here|https://github.com/apache/flink/blob/3e6a1aab0712acec3e9fcc955a28f2598f019377/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java#L501C28-L501C28] 
to check the number of namespaces added to the state
{code:java}
assertThat(keysByNamespace.size(), is(2)); {code}
then the assertion fails:

!image-2023-09-05-12-51-28-203.png!
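
For illustration, a minimal sketch of the namespace handling the test needs, assuming 
the internal test APIs getPartitionedState and setCurrentNamespace (illustrative code, 
not the ticket's actual patch):

{code:java}
// Sketch (assumed context: a keyed backend "backend" of type
// AbstractKeyedStateBackend<Integer>, as used elsewhere in StateBackendTestBase).
// getKeysAndNamespaces only sees a namespace once the state has been updated
// under it, so each update must be preceded by setCurrentNamespace:
ValueStateDescriptor<Integer> descriptor =
        new ValueStateDescriptor<>("testState", Integer.class);
InternalValueState<Integer, String, Integer> state =
        (InternalValueState<Integer, String, Integer>)
                backend.getPartitionedState(
                        "ns1", StringSerializer.INSTANCE, descriptor);

backend.setCurrentKey(1);
state.setCurrentNamespace("ns1");
state.update(10);
state.setCurrentNamespace("ns2");
state.update(20); // now both "ns1" and "ns2" should appear in getKeysAndNamespaces()
{code}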





[GitHub] [flink] Taher-Ghaleb commented on pull request #23264: Refactor @Test(expected) with assertThrows

2023-09-04 Thread via GitHub


Taher-Ghaleb commented on PR #23264:
URL: https://github.com/apache/flink/pull/23264#issuecomment-1705986579

   Thanks @X-czh for your response. The purpose of this pull request was to gain 
insight into why cases such as this exception-handling pattern still exist in 
codebases even though better alternatives are available. Your response indicates 
that you are already aware of this issue and have taken steps to improve the 
quality of the test cases.
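
   For context, a generic sketch of the refactor described in the PR title (not 
code from the PR itself):

{code:java}
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class ExampleTest {
    // Instead of @Test(expected = NumberFormatException.class), which passes if
    // the exception is thrown anywhere in the method, assertThrows pins the
    // expectation to the exact statement under test.
    @Test
    void parsingGarbageThrows() {
        assertThrows(NumberFormatException.class, () -> Integer.parseInt("not a number"));
    }
}
{code}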





[GitHub] [flink-kubernetes-operator] HuangZhenQiu opened a new pull request, #663: [FLINK-32777] add config for opt out okhttp hostname verification

2023-09-04 Thread via GitHub


HuangZhenQiu opened a new pull request, #663:
URL: https://github.com/apache/flink-kubernetes-operator/pull/663

   ## What is the purpose of the change
   Currently, the Flink operator can't run in an IPv6 environment due to an OkHttp 
hostname verification bug. To mitigate the issue reported by users, this change 
introduces a config option that lets users disable hostname verification in the 
HTTP client.
   
   ## Brief change log
   - add a configuration to disable OkHttp hostname verification for IPv6 
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
(yes)
 - Core observer or reconciler logic that is regularly executed: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs)
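
   For illustration, a minimal OkHttp sketch of what the opt-out amounts to (the 
operator wires this through a config option; the code below is not the operator's 
actual plumbing):

{code:java}
import okhttp3.OkHttpClient;

// Hypothetical sketch: a permissive HostnameVerifier disables hostname
// verification entirely, which works around the OkHttp IPv6 verification bug
// at the cost of weaker TLS guarantees, hence the opt-in configuration.
OkHttpClient client =
        new OkHttpClient.Builder()
                .hostnameVerifier((hostname, session) -> true)
                .build();
{code}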
   





[jira] [Commented] (FLINK-32872) Add option to control the default partitioner when the parallelism of upstream and downstream operator does not match

2023-09-04 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761944#comment-17761944
 ] 

Zhanghao Chen commented on FLINK-32872:
---

[~huweihua] Looking forward to your suggestions on this issue~

> Add option to control the default partitioner when the parallelism of 
> upstream and downstream operator does not match
> -
>
> Key: FLINK-32872
> URL: https://issues.apache.org/jira/browse/FLINK-32872
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Configuration
>Affects Versions: 1.17.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> *Problem*
> Currently, when no partitioner is specified, the FORWARD partitioner is used 
> if the parallelism of the upstream and downstream operators matches, and the 
> REBALANCE partitioner is used otherwise. However, this behavior is not 
> configurable and can be undesirable in certain cases:
>  # The REBALANCE partitioner creates an all-to-all connection between 
> upstream and downstream operators and consumes a lot of extra CPU and memory 
> resources when the parallelism is high in pipelining mode; the RESCALE 
> partitioner is preferable in this case.
>  # For Flink SQL jobs, users cannot yet specify the partitioner directly, and 
> for DataStream jobs, users may not want to set the partitioner explicitly 
> everywhere.
> *Proposal*
> Add an option to control the default partitioner when the parallelism of the 
> upstream and downstream operators does not match. The option could be named 
> "pipeline.default-partitioner-with-unmatched-parallelism", with REBALANCE as 
> the default value.
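
For illustration, a minimal DataStream sketch (hypothetical job, not from the 
ticket) of the workaround users must apply today:

{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RescaleExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Source with parallelism 4; the sink below uses parallelism 8, so
        // without an explicit partitioner Flink inserts REBALANCE (all-to-all).
        DataStream<Long> upstream = env.fromSequence(0, 1000).setParallelism(4);

        // rescale() opts into the cheaper point-to-point pattern explicitly;
        // the proposed option would make this choice the configurable default.
        upstream.rescale().print().setParallelism(8);

        env.execute("rescale-example");
    }
}
{code}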





[jira] [Commented] (FLINK-32848) [JUnit5 Migration] The persistence, query, registration, rpc and shuffle packages of flink-runtime module

2023-09-04 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761941#comment-17761941
 ] 

Zhanghao Chen commented on FLINK-32848:
---

Hi [~fanrui], if you have time, could you help check the PRs? Thanks in advance!

> [JUnit5 Migration] The persistence, query, registration, rpc and shuffle 
> packages of flink-runtime module
> -
>
> Key: FLINK-32848
> URL: https://issues.apache.org/jira/browse/FLINK-32848
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Rui Fan
>Assignee: Zhanghao Chen
>Priority: Minor
>  Labels: pull-request-available
>






[GitHub] [flink] X-czh commented on pull request #20232: [FLINK-25371] Include data port as part of the host info for subtask detail panel on Web UI

2023-09-04 Thread via GitHub


X-czh commented on PR #20232:
URL: https://github.com/apache/flink/pull/20232#issuecomment-1705930989

   @huwh @1996fanrui I've updated the PR with two new commits that align the 
behavior of pretty-printing TaskManager locations and update the Web UI 
accordingly. Please help review again when you are free, thanks~





[GitHub] [flink] wangzzu commented on pull request #23243: [FLINK-32846][runtime][JUnit5 Migration] The metrics package of flink-runtime module

2023-09-04 Thread via GitHub


wangzzu commented on PR #23243:
URL: https://github.com/apache/flink/pull/23243#issuecomment-1705908682

   @Jiabao-Sun thanks, I have fixed these.





[jira] [Resolved] (FLINK-32810) Improve managed memory usage in ListStateWithCache

2023-09-04 Thread Zhipeng Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhipeng Zhang resolved FLINK-32810.
---
Fix Version/s: ml-2.4.0
 Assignee: Fan Hong
   Resolution: Fixed

> Improve managed memory usage in ListStateWithCache
> --
>
> Key: FLINK-32810
> URL: https://issues.apache.org/jira/browse/FLINK-32810
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / Machine Learning
>Reporter: Fan Hong
>Assignee: Fan Hong
>Priority: Major
>  Labels: pull-request-available
> Fix For: ml-2.4.0
>
>
> Right now, by default, an instance of `ListStateWithCache` uses up all the 
> managed memory allocated for `ManagedMemoryUseCase.OPERATOR`.
> This can cause bugs in some situations, for example, when there is more than 
> one `ListStateWithCache` in a single operator, or when other places also use 
> the managed memory of `ManagedMemoryUseCase.OPERATOR`.
>  
> One approach to resolving such cases is to make the developer aware of the 
> usage of `ManagedMemoryUseCase.OPERATOR` managed memory instead of implicitly 
> using all of it. Therefore, I think it is better to add a parameter 
> `fraction` that specifies how much memory the `ListStateWithCache` uses.
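
To make the clash concrete, a toy sketch (not the flink-ml API) of the arithmetic 
behind the proposed explicit `fraction`:

{code:java}
// Toy model: with two caches in one operator each implicitly claiming the
// full OPERATOR-scope pool, the operator would effectively reserve 200% of
// its managed memory. Explicit fractions make the split sum to at most 1.0.
long operatorManagedBytes = 64L << 20; // e.g. a 64 MiB operator-scope pool
double fractionA = 0.5; // first ListStateWithCache
double fractionB = 0.5; // second ListStateWithCache
long cacheABytes = (long) (operatorManagedBytes * fractionA);
long cacheBBytes = (long) (operatorManagedBytes * fractionB);
assert cacheABytes + cacheBBytes <= operatorManagedBytes;
{code}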





[jira] [Commented] (FLINK-33014) flink jobmanager raise java.io.IOException: Connection reset by peer

2023-09-04 Thread zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761930#comment-17761930
 ] 

zhu commented on FLINK-33014:
-

[~mapohl]  [~wangyang0918]

Is there any progress now? No operators can be executed at the moment. Due to 
java.io.IOException: Connection reset by peer, my task reports a file-not-found 
error, and the following error appears.

 
{code:java}
2023-09-05 10:41:34,349 ERROR 
org.apache.flink.runtime.rest.handler.legacy.files.StaticFileServerHandler [] - 
Caught exception
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0_372]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[?:1.8.0_372]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[?:1.8.0_372]
        at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_372]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
~[?:1.8.0_372]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:258)
 ~[flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
 ~[flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:357)
 ~[flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 [flink-dist-1.17.1.jar:1.17.1]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_372]
2023-09-05 10:41:34,350 ERROR 
org.apache.flink.runtime.rest.handler.legacy.files.StaticFileServerHandler [] - 
Caught exception
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:1.8.0_372]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[?:1.8.0_372]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[?:1.8.0_372]
        at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[?:1.8.0_372]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) 
~[?:1.8.0_372]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:258)
 ~[flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
 ~[flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:357)
 ~[flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 [flink-dist-1.17.1.jar:1.17.1]
        at 
org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 [flink-dist-1.17.1.jar:1.17.1]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_372]
2023-09-05 10:41:34,354 ERROR 
org.apache.flink.runtime.rest.handler.legacy.files.StaticFileServerHandler [] - 
Caught exception
java.io.IOException: Connection reset by peer
   

[jira] [Created] (FLINK-33033) Add haservice micro benchmark for olap

2023-09-04 Thread Fang Yong (Jira)
Fang Yong created FLINK-33033:
-

 Summary: Add haservice micro benchmark for olap
 Key: FLINK-33033
 URL: https://issues.apache.org/jira/browse/FLINK-33033
 Project: Flink
  Issue Type: Sub-task
  Components: Benchmarks
Affects Versions: 1.19.0
Reporter: Fang Yong


Add micro benchmarks of the HA service for OLAP to improve performance for 
short-lived jobs





[jira] [Updated] (FLINK-25356) Add benchmarks for performance in OLAP scenarios

2023-09-04 Thread Fang Yong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fang Yong updated FLINK-25356:
--
Parent: (was: FLINK-25318)
Issue Type: Technical Debt  (was: Sub-task)

> Add benchmarks for performance in OLAP scenarios
> 
>
> Key: FLINK-25356
> URL: https://issues.apache.org/jira/browse/FLINK-25356
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Benchmarks
>Reporter: Xintong Song
>Priority: Major
>
> As discussed in FLINK-25318, we need a unified, publicly visible benchmark 
> setup to support OLAP performance improvements and investigations.





[jira] [Updated] (FLINK-30774) Introduce flink-utils module

2023-09-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-30774:
---
Labels: stale-major starter  (was: starter)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as Major 
but is unassigned, and neither it nor its Sub-Tasks have been updated for 60 
days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Introduce flink-utils module
> 
>
> Key: FLINK-30774
> URL: https://issues.apache.org/jira/browse/FLINK-30774
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: stale-major, starter
>
> Currently, generic utility classes like {{Preconditions}} or 
> {{AbstractAutoCloseableRegistry}} are collected in {{flink-core}}. The flaw 
> of this approach is that we cannot use those classes in modules like 
> {{flink-migration-test-utils}}, {{flink-test-utils-junit}}, 
> {{flink-metrics-core}} or {{flink-annotations}}.
> We might want to have a generic {{flink-utils}} module, analogous to 
> {{flink-test-utils}}, that collects Flink-independent utility functionality 
> and can be accessed by any module {{flink-core}} depends on, making this 
> utility functionality available in any Flink-related module.





[jira] [Updated] (FLINK-32454) deserializeStreamStateHandle of checkpoint read byte

2023-09-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32454:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> deserializeStreamStateHandle of checkpoint read byte
> 
>
> Key: FLINK-32454
> URL: https://issues.apache.org/jira/browse/FLINK-32454
> Project: Flink
>  Issue Type: Bug
>Reporter: Bo Cui
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> During checkpoint deserialization, deserializeStreamStateHandle should read 
> a byte instead of an int:
> https://github.com/apache/flink/blob/c5acd8dd800dfcd2c8873c569d0028fc7d991b1c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/metadata/MetadataV2V3SerializerBase.java#L712
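
A small sketch of the proposed correction (simplified; the real method in 
MetadataV2V3SerializerBase has more branches):

{code:java}
import java.io.DataInputStream;
import java.io.IOException;

// The handle type tag is written as a single byte, so the reader must consume
// exactly one byte; reading an int would consume four bytes and desynchronize
// the rest of the checkpoint metadata stream.
static int readStateHandleType(DataInputStream dis) throws IOException {
    return dis.readByte(); // not dis.readInt()
}
{code}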





[jira] [Updated] (FLINK-28372) Investigate Akka Artery

2023-09-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-28372:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Investigate Akka Artery
> ---
>
> Key: FLINK-28372
> URL: https://issues.apache.org/jira/browse/FLINK-28372
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / RPC
>Reporter: Chesnay Schepler
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> Our current Akka setup uses the deprecated netty-based stack. We need to 
> eventually migrate to Akka Artery.





[jira] [Updated] (FLINK-31070) Update jline to 3.22.0

2023-09-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31070:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Update jline to 3.22.0
> --
>
> Key: FLINK-31070
> URL: https://issues.apache.org/jira/browse/FLINK-31070
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Client
>Reporter: Sergey Nuyanzin
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> Among the changes is
> https://github.com/jline/jline3/commit/1315fc0bde9325baff8bc4035dbf29184b0b79f7,
> which could simplify parsing of comments in the CLI, and an infinite-loop fix:
> https://github.com/jline/jline3/commit/4dac9b0ce78a0ac37f580e708267d95553a999eb





[jira] [Updated] (FLINK-32262) Add MAP_ENTRIES support in SQL & Table API

2023-09-04 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32262:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add MAP_ENTRIES support in SQL & Table API
> --
>
> Key: FLINK-32262
> URL: https://issues.apache.org/jira/browse/FLINK-32262
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.18.0
>
>
> Implement the {{map_entries}} method to transform a map into an array of 
> key-value structs.
> Description: The Flink library currently does not provide a way to transform 
> a map into an array of key-value structs. This enhancement adds that 
> functionality, allowing users to convert a map into a more structured format 
> for further processing.
> Syntax:
> {code:java}
> map_entries[map: Map[K, V]] -> Array[Struct] {code}
> Arguments:
>  * {{{}map{}}}: The input map to be transformed.
> Returns: An array of key-value structs obtained from the input map. Each 
> struct consists of two fields: {{key}} of type {{K}} and {{value}} of type 
> {{{}V{}}}.
> Examples:
> {code:java}
> {code:java}
> input_map = [1: 'apple', 2: 'banana', 3: 'cherry'] 
>  map_entries[input_map] 
>  Output: [{'key': 1, 'value': 'apple'}, {'key': 2, 'value': 'banana'}, 
> {'key': 3, 'value': 'cherry'}]{code}
>  # Handling an empty map:
> {code:java}
> empty_map = {} 
> map_entries[empty_map]
> Output: []{code}
> See also:
> spark:[https://spark.apache.org/docs/latest/api/sql/index.html#map_entries]
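
A hedged usage sketch, assuming the function lands in Flink SQL under the name 
MAP_ENTRIES as proposed (exact output formatting may differ):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Turn a map literal into an array of (key, value) rows and print it.
TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
tEnv.sqlQuery("SELECT MAP_ENTRIES(MAP[1, 'apple', 2, 'banana', 3, 'cherry']) AS entries")
        .execute()
        .print();
// Expected shape: [(1, apple), (2, banana), (3, cherry)]
{code}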





[GitHub] [flink] MartijnVisser commented on pull request #23352: [FLINK-28513][Backport][release-1.18] Fix Flink Table API CSV streaming sink throws

2023-09-04 Thread via GitHub


MartijnVisser commented on PR #23352:
URL: https://github.com/apache/flink/pull/23352#issuecomment-1705607330

   > Do you think we want to backport this to Flink 1.18? Given this is a 
critical bug
   
   Yes, we can safely backport bugfixes, as long as they don't break any (API) 
compatibility. 





[jira] [Created] (FLINK-33032) [JUnit5 Migration] Module: flink-table-planner (ExpressionTestBase)

2023-09-04 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-33032:
--

 Summary: [JUnit5 Migration] Module: flink-table-planner 
(ExpressionTestBase)
 Key: FLINK-33032
 URL: https://issues.apache.org/jira/browse/FLINK-33032
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.18.0
Reporter: Jiabao Sun


[JUnit5 Migration] Module: flink-table-planner (ExpressionTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] Jiabao-Sun commented on pull request #23349: [FLINK-33023][table-planner][JUnit5 Migration] Module: flink-table-planner (TableTestBase)

2023-09-04 Thread via GitHub


Jiabao-Sun commented on PR #23349:
URL: https://github.com/apache/flink/pull/23349#issuecomment-1705496284

   Hi @leonardBang.
   Would you mind helping to review this PR when you have time?





[GitHub] [flink] flinkbot commented on pull request #23354: [FLINK-33031][table-planner][JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)

2023-09-04 Thread via GitHub


flinkbot commented on PR #23354:
URL: https://github.com/apache/flink/pull/23354#issuecomment-1705493439

   
   ## CI report:
   
   * 8f96e6abbf9c7a12d655f0d18d64373b72e1b2a8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-33031) [JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)

2023-09-04 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761868#comment-17761868
 ] 

Jiabao Sun commented on FLINK-33031:


PR is ready at https://github.com/apache/flink/pull/23354

> [JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)
> 
>
> Key: FLINK-33031
> URL: https://issues.apache.org/jira/browse/FLINK-33031
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Priority: Major
>
> [JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)





[GitHub] [flink] Jiabao-Sun opened a new pull request, #23354: [FLINK-33031][table-planner][JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)

2023-09-04 Thread via GitHub


Jiabao-Sun opened a new pull request, #23354:
URL: https://github.com/apache/flink/pull/23354

   
   
   ## What is the purpose of the change
   
   [FLINK-33031][table-planner][JUnit5 Migration] Module: flink-table-planner 
(AggFunctionTestBase)
   
   ## Brief change log
   
   JUnit5 Migration Module: flink-table-planner (AggFunctionTestBase)
   
   
   ## Verifying this change
   
   This change is already covered by existing tests
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[jira] [Created] (FLINK-33031) [JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)

2023-09-04 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-33031:
--

 Summary: [JUnit5 Migration] Module: flink-table-planner 
(AggFunctionTestBase)
 Key: FLINK-33031
 URL: https://issues.apache.org/jira/browse/FLINK-33031
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.18.0
Reporter: Jiabao Sun


[JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)





[jira] [Updated] (FLINK-33021) AWS nightly builds fails on architecture tests

2023-09-04 Thread Hong Liang Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh updated FLINK-33021:

Fix Version/s: aws-connector-4.2.0

> AWS nightly builds fails on architecture tests
> --
>
> Key: FLINK-33021
> URL: https://issues.apache.org/jira/browse/FLINK-33021
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS
>Affects Versions: aws-connector-4.2.0
>Reporter: Martijn Visser
>Priority: Blocker
> Fix For: aws-connector-4.2.0
>
>
> https://github.com/apache/flink-connector-aws/actions/runs/6067488560/job/16459208589#step:9:879
> {code:java}
> Error:  Failures: 
> Error:Architecture Violation [Priority: MEDIUM] - Rule 'ITCASE tests 
> should use a MiniCluster resource or extension' was violated (1 times):
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase does not 
> satisfy: only one of the following predicates match:
> * reside in a package 'org.apache.flink.runtime.*' and contain any fields 
> that are static, final, and of type InternalMiniClusterExtension and 
> annotated with @RegisterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and contain any 
> fields that are static, final, and of type MiniClusterExtension and annotated 
> with @RegisterExtension or are , and of type MiniClusterTestEnvironment and 
> annotated with @TestEnv
> * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
> @ExtendWith with class InternalMiniClusterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and is annotated 
> with @ExtendWith with class MiniClusterExtension
>  or contain any fields that are public, static, and of type 
> MiniClusterWithClientResource and final and annotated with @ClassRule or 
> contain any fields that is of type MiniClusterWithClientResource and public 
> and final and not static and annotated with @Rule
> [INFO] 
> Error:  Tests run: 21, Failures: 1, Errors: 0, Skipped: 0
> {code}
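
For reference, a minimal sketch of what satisfies the rule for ITCase classes 
outside org.apache.flink.runtime (configuration values are illustrative):

{code:java}
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.test.junit5.MiniClusterExtension;

import org.junit.jupiter.api.extension.RegisterExtension;

class KinesisFirehoseSinkITCase {
    // A static final MiniClusterExtension annotated with @RegisterExtension
    // matches the second predicate of the ArchUnit rule above.
    @RegisterExtension
    static final MiniClusterExtension MINI_CLUSTER =
            new MiniClusterExtension(
                    new MiniClusterResourceConfiguration.Builder()
                            .setNumberTaskManagers(1)
                            .setNumberSlotsPerTaskManager(2)
                            .build());
}
{code}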





[jira] [Commented] (FLINK-33021) AWS nightly builds fails on architecture tests

2023-09-04 Thread Hong Liang Teoh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761861#comment-17761861
 ] 

Hong Liang Teoh commented on FLINK-33021:
-

Triggering the nightly build now. 
https://github.com/apache/flink-connector-aws/actions/runs/6067488560

> AWS nightly builds fails on architecture tests
> --
>
> Key: FLINK-33021
> URL: https://issues.apache.org/jira/browse/FLINK-33021
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS
>Affects Versions: aws-connector-4.2.0
>Reporter: Martijn Visser
>Priority: Blocker
> Fix For: aws-connector-4.2.0
>
>
> https://github.com/apache/flink-connector-aws/actions/runs/6067488560/job/16459208589#step:9:879
> {code:java}
> Error:  Failures: 
> Error:Architecture Violation [Priority: MEDIUM] - Rule 'ITCASE tests 
> should use a MiniCluster resource or extension' was violated (1 times):
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase does not 
> satisfy: only one of the following predicates match:
> * reside in a package 'org.apache.flink.runtime.*' and contain any fields 
> that are static, final, and of type InternalMiniClusterExtension and 
> annotated with @RegisterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and contain any 
> fields that are static, final, and of type MiniClusterExtension and annotated 
> with @RegisterExtension or are , and of type MiniClusterTestEnvironment and 
> annotated with @TestEnv
> * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
> @ExtendWith with class InternalMiniClusterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and is annotated 
> with @ExtendWith with class MiniClusterExtension
>  or contain any fields that are public, static, and of type 
> MiniClusterWithClientResource and final and annotated with @ClassRule or 
> contain any fields that is of type MiniClusterWithClientResource and public 
> and final and not static and annotated with @Rule
> [INFO] 
> Error:  Tests run: 21, Failures: 1, Errors: 0, Skipped: 0
> {code}





[jira] [Commented] (FLINK-33021) AWS nightly builds fails on architecture tests

2023-09-04 Thread Hong Liang Teoh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761860#comment-17761860
 ] 

Hong Liang Teoh commented on FLINK-33021:
-

Ah. I only saw this now :( But I'll link the PR for reference!

> AWS nightly builds fails on architecture tests
> --
>
> Key: FLINK-33021
> URL: https://issues.apache.org/jira/browse/FLINK-33021
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS
>Affects Versions: aws-connector-4.2.0
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-aws/actions/runs/6067488560/job/16459208589#step:9:879
> {code:java}
> Error:  Failures: 
> Error:Architecture Violation [Priority: MEDIUM] - Rule 'ITCASE tests 
> should use a MiniCluster resource or extension' was violated (1 times):
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase does not 
> satisfy: only one of the following predicates match:
> * reside in a package 'org.apache.flink.runtime.*' and contain any fields 
> that are static, final, and of type InternalMiniClusterExtension and 
> annotated with @RegisterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and contain any 
> fields that are static, final, and of type MiniClusterExtension and annotated 
> with @RegisterExtension or are , and of type MiniClusterTestEnvironment and 
> annotated with @TestEnv
> * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
> @ExtendWith with class InternalMiniClusterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and is annotated 
> with @ExtendWith with class MiniClusterExtension
>  or contain any fields that are public, static, and of type 
> MiniClusterWithClientResource and final and annotated with @ClassRule or 
> contain any fields that is of type MiniClusterWithClientResource and public 
> and final and not static and annotated with @Rule
> [INFO] 
> Error:  Tests run: 21, Failures: 1, Errors: 0, Skipped: 0
> {code}





[GitHub] [flink-connector-aws] hlteoh37 merged pull request #92: [hotfix] Add MiniClusterExtension to ITCase tests

2023-09-04 Thread via GitHub


hlteoh37 merged PR #92:
URL: https://github.com/apache/flink-connector-aws/pull/92





[jira] [Comment Edited] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761856#comment-17761856
 ] 

Hong Liang Teoh edited comment on FLINK-28513 at 9/4/23 3:17 PM:
-

{quote}I don't have access to update the fix version. Please help update the 
`Fix Version/s` for this issue
{quote}
Updated


was (Author: hong):
> I dont have access to update the fix version . Please help updating the `Fix 
> Version/s` for this issue
 
Updated

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.18.0, 1.17.2, 1.19.0
>
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using the Table API, with checkpointing configured via s3p (Presto). I even 
> tried a simple datagen example instead of Kafka, with the local file system 
> for checkpointing (`file:///` instead of `s3p://`), and I get the same 
> issue. It fails exactly when the code triggers the checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am 
> marking this as critical. If this is not the appropriate severity, feel free 
> to reduce the priority.
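
For reference, a minimal reproduction sketch based on the description above 
(datagen source, CSV filesystem sink, checkpointing enabled; paths and intervals 
are illustrative):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// The failure reportedly surfaces at the first checkpoint of the CSV sink job.
TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
tEnv.getConfig().getConfiguration().setString("execution.checkpointing.interval", "10 s");
tEnv.executeSql("CREATE TABLE src (v STRING) WITH ('connector' = 'datagen')");
tEnv.executeSql(
        "CREATE TABLE snk (v STRING) WITH ("
                + "'connector' = 'filesystem', 'path' = 'file:///tmp/csv-out', 'format' = 'csv')");
tEnv.executeSql("INSERT INTO snk SELECT v FROM src");
{code}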





[jira] [Updated] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh updated FLINK-28513:

Fix Version/s: 1.18.0
   1.17.2

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.18.0, 1.17.2, 1.19.0
>
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using the Table API, with checkpointing configured via s3p (Presto). I even 
> tried a simple datagen example instead of Kafka, with the local file system 
> for checkpointing (`file:///` instead of `s3p://`), and I get the same 
> issue. It fails exactly when the code triggers the checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am 
> marking this as critical. If this is not the appropriate severity, feel free 
> to reduce the priority.





[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761856#comment-17761856
 ] 

Hong Liang Teoh commented on FLINK-28513:
-

> I don't have access to update the fix version. Please help update the `Fix 
> Version/s` for this issue
 
Updated

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.18.0, 1.17.2, 1.19.0
>
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using the Table API, with checkpointing configured via s3p (Presto). I even 
> tried a simple datagen example instead of Kafka, with the local file system 
> for checkpointing (`file:///` instead of `s3p://`), and I get the same 
> issue. It fails exactly when the code triggers the checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am 
> marking this as critical. If this is not the appropriate severity, feel free 
> to reduce the priority.





[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761855#comment-17761855
 ] 

Hong Liang Teoh commented on FLINK-28513:
-

merged commit 
[{{d06a297}}|https://github.com/apache/flink/commit/d06a297422fd4884aa21655fdf1f1bce94cc3a8a]
 into apache:release-1.17

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using the Table API, with checkpointing configured via s3p (Presto). I even 
> tried a simple datagen example instead of Kafka, with the local file system 
> for checkpointing (`file:///` instead of `s3p://`), and I get the same 
> issue. It fails exactly when the code triggers the checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am 
> marking this as critical. If this is not the appropriate severity, feel free 
> to reduce the priority.





[GitHub] [flink] hlteoh37 commented on pull request #23352: [FLINK-28513][Backport][release-1.18] Fix Flink Table API CSV streaming sink throws

2023-09-04 Thread via GitHub


hlteoh37 commented on PR #23352:
URL: https://github.com/apache/flink/pull/23352#issuecomment-1705433546

   @MartijnVisser Do you think we want to backport this to Flink 1.18, given 
this is a critical bug?





[GitHub] [flink] hlteoh37 merged pull request #23351: [FLINK-28513][Backport][release-1.17] Fix Flink Table API CSV streaming sink throws

2023-09-04 Thread via GitHub


hlteoh37 merged PR #23351:
URL: https://github.com/apache/flink/pull/23351





[jira] [Commented] (FLINK-33020) OpensearchSinkTest.testAtLeastOnceSink timed out

2023-09-04 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761839#comment-17761839
 ] 

Andriy Redko commented on FLINK-33020:
--

[~martijnvisser] yes, I think it happens from time to time because of the 
async flow this test exercises; it might be flaky sometimes. I will take a look 
shortly.

> OpensearchSinkTest.testAtLeastOnceSink timed out
> 
>
> Key: FLINK-33020
> URL: https://issues.apache.org/jira/browse/FLINK-33020
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.2
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-opensearch/actions/runs/6061205003/job/16446139552#step:13:1029
> {code:java}
> Error:  Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.837 
> s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.opensearch.OpensearchSinkTest
> Error:  
> org.apache.flink.streaming.connectors.opensearch.OpensearchSinkTest.testAtLeastOnceSink
>   Time elapsed: 5.022 s  <<< ERROR!
> java.util.concurrent.TimeoutException: testAtLeastOnceSink() timed out after 
> 5 seconds
>   at 
> org.junit.jupiter.engine.extension.TimeoutInvocation.createTimeoutException(TimeoutInvocation.java:70)
>   at 
> org.junit.jupiter.engine.extension.TimeoutInvocation.proceed(TimeoutInvocation.java:59)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>   at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
>   at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
>   at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
>   at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at 
> 

[jira] [Assigned] (FLINK-33029) Drop python 3.7 support

2023-09-04 Thread Gabor Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi reassigned FLINK-33029:
-

Assignee: Gabor Somogyi

> Drop python 3.7 support
> ---
>
> Key: FLINK-33029
> URL: https://issues.apache.org/jira/browse/FLINK-33029
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Gabor Somogyi
>Assignee: Gabor Somogyi
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33030) Add python 3.11 support

2023-09-04 Thread Gabor Somogyi (Jira)
Gabor Somogyi created FLINK-33030:
-

 Summary: Add python 3.11 support
 Key: FLINK-33030
 URL: https://issues.apache.org/jira/browse/FLINK-33030
 Project: Flink
  Issue Type: New Feature
  Components: API / Python
Affects Versions: 1.19.0
Reporter: Gabor Somogyi






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33029) Drop python 3.7 support

2023-09-04 Thread Gabor Somogyi (Jira)
Gabor Somogyi created FLINK-33029:
-

 Summary: Drop python 3.7 support
 Key: FLINK-33029
 URL: https://issues.apache.org/jira/browse/FLINK-33029
 Project: Flink
  Issue Type: New Feature
  Components: API / Python
Affects Versions: 1.19.0
Reporter: Gabor Somogyi






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33018) GCP Pubsub PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream failed

2023-09-04 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761832#comment-17761832
 ] 

Ryan Skraba commented on FLINK-33018:
-

_If I understand correctly_ -- the *"C"* element is meant to be acked but not 
emitted in this test.  This (third) end-of-stream message indicates that 
processing is finished, so it _should_ have two emitted elements (A and B) but 
three acknowledged ids (1, 2, and 3).

I'm reading through the code, and I don't believe there are any relevant changes 
due to the bom bump.  Is this always reproducible in the build environment, or 
is it flaky?  If flaky -- how flaky?  Often or rarely?

I don't have any good hypothesis for the moment. Maybe there's a 
synchronization issue on the private test class [collecting the 
acks|https://github.com/apache/flink-connector-gcp-pubsub/blob/c06e25a97c5f74873db9d272a4f2cf4787185c4a/flink-connector-gcp-pubsub/src/test/java/org/apache/flink/streaming/connectors/gcp/pubsub/PubSubConsumingTest.java#L214]?
  That doesn't seem quite right, since the {{notifyCheckpointComplete}} should 
have [already 
occurred|https://github.com/apache/flink-connector-gcp-pubsub/blob/c06e25a97c5f74873db9d272a4f2cf4787185c4a/flink-connector-gcp-pubsub/src/main/java/org/apache/flink/streaming/connectors/gcp/pubsub/common/AcknowledgeOnCheckpoint.java#L84]
 just before the assert.

Any other clues that might help narrow this down?
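
To make the synchronization hypothesis concrete, here is a minimal sketch 
(class and field names are hypothetical, not the actual test code) of how the 
ack collection could be made thread-safe if it currently uses a plain list:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class AckCollector {

    // A synchronized wrapper makes concurrent add() calls from the
    // checkpoint thread safe; iteration still needs an explicit lock.
    private final List<String> ackedIds =
            Collections.synchronizedList(new ArrayList<>());

    void acknowledge(String id) {
        ackedIds.add(id);
    }

    List<String> snapshot() {
        synchronized (ackedIds) {
            // Copy under the lock so the assertion sees a consistent view.
            return new ArrayList<>(ackedIds);
        }
    }
}
{code}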

> GCP Pubsub 
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream
>  failed
> 
>
> Key: FLINK-33018
> URL: https://issues.apache.org/jira/browse/FLINK-33018
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: gcp-pubsub-3.0.2
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/6061318336/job/16446392844#step:13:507
> {code:java}
> [INFO] 
> [INFO] Results:
> [INFO] 
> Error:  Failures: 
> Error:
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream:119
>  
> expected: ["1", "2", "3"]
>  but was: ["1", "2"]
> [INFO] 
> Error:  Tests run: 30, Failures: 1, Errors: 0, Skipped: 0
> [INFO] 
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-16686) [State TTL] Make user class loader available in native RocksDB compaction thread

2023-09-04 Thread Alexis Sarda-Espinosa (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761031#comment-17761031
 ] 

Alexis Sarda-Espinosa edited comment on FLINK-16686 at 9/4/23 1:30 PM:
---

EDIT: after some more validation, it seems my (de)serialization logic had a 
bug; after fixing that, I didn't get any exception after 3.75 hours.

I wanted to check if this problem occurs with Avro, so I did a similar 
experiment as the one I did before with Kryo. I only considered 
{{GenericRecord}} serialization for now, -but I can confirm this problem also 
occurs in that case-.

I basically implemented a custom serializer, similar to what's described [in 
this 
example|https://medium.com/wbaa/evolve-your-data-model-in-flinks-state-using-avro-f26982afa399],
 -and after running a dummy job for 2 hours I got this exception-:

{noformat}
AvroEmbeddedRocksDBTest > avroSerialization() STANDARD_ERROR
java.lang.NullPointerException
at org.apache.avro.Schema.applyAliases(Schema.java:1872)
at 
org.apache.avro.generic.GenericDatumReader.getResolver(GenericDatumReader.java:131)
at 
org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152)
at 
com.avro.flink.GenericRecordTypeSerializer.deserialize(GenericRecordTypeSerializer.kt:56)
at 
com.avro.flink.GenericRecordTypeSerializer.deserialize(GenericRecordTypeSerializer.kt:16)
at 
org.apache.flink.api.common.typeutils.CompositeSerializer.deserialize(CompositeSerializer.java:156)
at 
org.apache.flink.contrib.streaming.state.ttl.RocksDbTtlCompactFiltersManager$ListElementFilter.nextElementLastAccessTimestamp(RocksDbTtlCompactFiltersManager.java:205)
at 
org.apache.flink.contrib.streaming.state.ttl.RocksDbTtlCompactFiltersManager$ListElementFilter.nextUnexpiredOffset(RocksDbTtlCompactFiltersManager.java:191)
{noformat}

Note that this stacktrace does _not_ mention a classloader. So the original 
issue could indeed be specific to Kryo.
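
For reference, a minimal sketch of the corrected shape (names are 
hypothetical; this assumes the NPE came from constructing the datum reader 
without a writer schema, which leaves Schema.applyAliases with a null 
argument):

{code:java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DecoderFactory;

final class GenericRecordReader {

    private final GenericDatumReader<GenericRecord> reader;

    // Pinning both schemas up front avoids a null writer schema inside
    // GenericDatumReader.getResolver / Schema.applyAliases.
    GenericRecordReader(Schema writerSchema, Schema readerSchema) {
        this.reader = new GenericDatumReader<>(writerSchema, readerSchema);
    }

    GenericRecord read(InputStream in) throws IOException {
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(in, null);
        return reader.read(null, decoder);
    }
}
{code}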


was (Author: asardaes):
cc [~yunta]

I wanted to check if this problem occurs with Avro, so I did a similar 
experiment as the one I did before with Kryo. I only considered 
{{GenericRecord}} serialization for now, but I can confirm this problem also 
occurs in that case.

I basically implemented a custom serializer, similar to what's described [in 
this 
example|https://medium.com/wbaa/evolve-your-data-model-in-flinks-state-using-avro-f26982afa399],
 and after running a dummy job for 2 hours I got this exception:

{noformat}
AvroEmbeddedRocksDBTest > avroSerialization() STANDARD_ERROR
java.lang.NullPointerException
at org.apache.avro.Schema.applyAliases(Schema.java:1872)
at 
org.apache.avro.generic.GenericDatumReader.getResolver(GenericDatumReader.java:131)
at 
org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152)
at 
com.avro.flink.GenericRecordTypeSerializer.deserialize(GenericRecordTypeSerializer.kt:56)
at 
com.avro.flink.GenericRecordTypeSerializer.deserialize(GenericRecordTypeSerializer.kt:16)
at 
org.apache.flink.api.common.typeutils.CompositeSerializer.deserialize(CompositeSerializer.java:156)
at 
org.apache.flink.contrib.streaming.state.ttl.RocksDbTtlCompactFiltersManager$ListElementFilter.nextElementLastAccessTimestamp(RocksDbTtlCompactFiltersManager.java:205)
at 
org.apache.flink.contrib.streaming.state.ttl.RocksDbTtlCompactFiltersManager$ListElementFilter.nextUnexpiredOffset(RocksDbTtlCompactFiltersManager.java:191)
{noformat}

Note that this stacktrace does _not_ mention a classloader. Some more details 
from the dump in case they are interesting:

{noformat}
Current thread (0x7f7ae80b0800):  JavaThread "Thread-9949" [_thread_in_vm, 
id=9258, stack(0x7f7a59a01000,0x7f7a5a20)]

Stack: [0x7f7a59a01000,0x7f7a5a20],  sp=0x7f7a5a1fcdc0,  free 
space=8175k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, 
Vv=VM code, C=native code)
V  [libjvm.so+0x7b8f75]
V  [libjvm.so+0x980aab]
C  [librocksdbjni-linux64.so+0x222ce1]  
JavaListElementFilter::NextUnexpiredOffset(rocksdb::Slice const&, long, long) 
const+0x121
C  [librocksdbjni-linux64.so+0x648941]  
rocksdb::flink::FlinkCompactionFilter::ListDecide(rocksdb::Slice const&, 
std::string*) const+0x81
C  [librocksdbjni-linux64.so+0x648de7]  
rocksdb::flink::FlinkCompactionFilter::FilterV2(int, rocksdb::Slice const&, 
rocksdb::CompactionFilter::ValueType, rocksdb::Slice const&, std::string*, 
std::string*) const+0xc7
C  [librocksdbjni-linux64.so+0x3cce72]  
rocksdb::MergeHelper::FilterMerge(rocksdb::Slice const&, rocksdb::Slice 
const&)+0x172
C  [librocksdbjni-linux64.so+0x3cdf06]  
rocksdb::MergeHelper::MergeUntil(rocksdb::InternalIteratorBase*,
 rocksdb::CompactionRangeDelAggregator*, unsigned long, bool, bool)+0xcc6
C  

[GitHub] [flink] flinkbot commented on pull request #23353: [FLINK-33024][table-planner][JUnit5 Migration] Module: flink-table-planner (JsonPlanTestBase)

2023-09-04 Thread via GitHub


flinkbot commented on PR #23353:
URL: https://github.com/apache/flink/pull/23353#issuecomment-1705260690

   
   ## CI report:
   
   * d852cff5a660222835a26bc54d0c195c9e69c1b2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33024) [JUnit5 Migration] Module: flink-table-planner (JsonPlanTestBase)

2023-09-04 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761824#comment-17761824
 ] 

Jiabao Sun commented on FLINK-33024:


PR is ready on https://github.com/apache/flink/pull/23353

> [JUnit5 Migration] Module: flink-table-planner (JsonPlanTestBase)
> -
>
> Key: FLINK-33024
> URL: https://issues.apache.org/jira/browse/FLINK-33024
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Jiabao Sun
>Priority: Major
>
> [JUnit5 Migration] Module: flink-table-planner (JsonPlanTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] Jiabao-Sun opened a new pull request, #23353: [FLINK-33024][table-planner][JUnit5 Migration] Module: flink-table-planner (JsonPlanTestBase)

2023-09-04 Thread via GitHub


Jiabao-Sun opened a new pull request, #23353:
URL: https://github.com/apache/flink/pull/23353

   
   
   ## What is the purpose of the change
   
   [FLINK-33024][table-planner][JUnit5 Migration] Module: flink-table-planner 
(JsonPlanTestBase)
   ## Brief change log
   
   JUnit5 Migration Module: flink-table-planner (JsonPlanTestBase)
   
   
   ## Verifying this change
   
   This change is already covered by existing tests
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32988) HiveITCase failed due to TestContainer not coming up

2023-09-04 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761820#comment-17761820
 ] 

Matthias Pohl commented on FLINK-32988:
---

[~fsk119] There is a temporal dependency between the test failures appearing 
and FLINK-32731. I linked the issue.

> HiveITCase failed due to TestContainer not coming up
> 
>
> Key: FLINK-32988
> URL: https://issues.apache.org/jira/browse/FLINK-32988
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.2, 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=87489130-75dc-54e4-1f45-80c30aa367a3=efbee0b1-38ac-597d-6466-1ea8fc908c50=15866
> {code}
> Aug 29 02:47:56 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hive3.1-hive:10
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Aug 29 02:47:56   at 
> org.apache.flink.tests.hive.containers.HiveContainer.doStart(HiveContainer.java:81)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Aug 29 02:47:56   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 29 02:47:56   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32667) Use standalone store and embedding writer for jobs with no-restart-strategy in session cluster

2023-09-04 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761812#comment-17761812
 ] 

Matthias Pohl commented on FLINK-32667:
---

Sounds reasonable. (y) fyi: I might not be that responsive the next 3 weeks

> Use standalone store and embedding writer for jobs with no-restart-strategy 
> in session cluster
> --
>
> Key: FLINK-32667
> URL: https://issues.apache.org/jira/browse/FLINK-32667
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
>
> When a Flink session cluster uses the zk or k8s high-availability service, it 
> will store jobs in zk or a ConfigMap. When we submit Flink OLAP jobs to the 
> session cluster, they always turn off the restart strategy. These jobs with 
> no-restart-strategy should not be stored in zk or a ConfigMap in k8s.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33028) FLIP-348: Make expanding behavior of virtual metadata columns configurable

2023-09-04 Thread Timo Walther (Jira)
Timo Walther created FLINK-33028:


 Summary: FLIP-348: Make expanding behavior of virtual metadata 
columns configurable
 Key: FLINK-33028
 URL: https://issues.apache.org/jira/browse/FLINK-33028
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Planner
Reporter: Timo Walther
Assignee: Timo Walther


Many SQL vendors expose additional metadata via so-called "pseudo columns" or 
"system columns" next to the physical columns.

However, those columns should not be selected by default when expanding SELECT 
*, also for the sake of backward compatibility. Flink SQL already offers 
pseudo columns next to the physical columns, exposed as metadata columns.

This proposal suggests evolving the existing column design slightly to be more 
useful for platform providers.

https://cwiki.apache.org/confluence/x/_o6zDw
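
For illustration, a minimal sketch of the column kind under discussion (the 
table name and connector options are hypothetical, and a reachable Kafka 
broker is assumed):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MetadataColumnSketch {

    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // 'ts' is a virtual metadata column; the connector options only
        // illustrate the declaration.
        env.executeSql(
                "CREATE TABLE events ("
                        + "  id INT,"
                        + "  ts TIMESTAMP_LTZ(3) METADATA FROM 'timestamp' VIRTUAL"
                        + ") WITH ("
                        + "  'connector' = 'kafka',"
                        + "  'topic' = 'events',"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',"
                        + "  'format' = 'json'"
                        + ")");

        // Today SELECT * expands to (id, ts); the FLIP proposes making it
        // configurable whether virtual columns like ts are included.
        env.executeSql("SELECT * FROM events").print();
    }
}
{code}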




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32667) Use standalone store and embedding writer for jobs with no-restart-strategy in session cluster

2023-09-04 Thread Fang Yong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761807#comment-17761807
 ] 

Fang Yong commented on FLINK-32667:
---

[~mapohl] As we discussed in the threads 
https://lists.apache.org/thread/2wj1wfzcg162534v8olqt18y2x9x99od and 
https://lists.apache.org/thread/szdr4ngrfcmo7zko4917393zbqhgw0v5, we have come 
to an agreement about Flink OLAP, and [~jark] has already added the relevant 
content to the Flink roadmap https://flink.apache.org/roadmap/ , so I think we 
can continue to promote OLAP-related issues.

Regarding this issue, I would like to synchronize our current progress. We ran a 
simple e2e test internally and found that the latency of the simplest query 
with HA is 6.5 times higher than without HA. I think we can add OLAP 
micro-benchmarks to flink-benchmarks for the HA service and then continue this 
issue. After that, we can add a micro-benchmark for the new interfaces to 
compare against the previous one. What do you think? Thanks

> Use standalone store and embedding writer for jobs with no-restart-strategy 
> in session cluster
> --
>
> Key: FLINK-32667
> URL: https://issues.apache.org/jira/browse/FLINK-32667
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
>
> When a Flink session cluster uses the zk or k8s high-availability service, it 
> will store jobs in zk or a ConfigMap. When we submit Flink OLAP jobs to the 
> session cluster, they always turn off the restart strategy. These jobs with 
> no-restart-strategy should not be stored in zk or a ConfigMap in k8s.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32962) Failure to install python dependencies from requirements file

2023-09-04 Thread Aleksandr Pilipenko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761797#comment-17761797
 ] 

Aleksandr Pilipenko commented on FLINK-32962:
-

cc [~dianfu], [~hxbks2ks]
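
For context, the error output below shows pip rejecting {{--install-option}}, 
which newer pip releases removed. A minimal sketch of the corrected invocation 
(class and method names are hypothetical, not the actual Flink code):

{code:java}
import java.util.Arrays;
import java.util.List;

final class PipCommands {

    // Pass --prefix directly instead of the removed
    // "--install-option --prefix=..." form.
    static List<String> installRequirements(
            String pythonExec, String requirementsFile, String targetDir) {
        return Arrays.asList(
                pythonExec, "-m", "pip", "install",
                "--ignore-installed",
                "-r", requirementsFile,
                "--prefix", targetDir);
    }
}
{code}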

> Failure to install python dependencies from requirements file
> -
>
> Key: FLINK-32962
> URL: https://issues.apache.org/jira/browse/FLINK-32962
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.4, 1.16.2, 1.18.0, 1.17.1
>Reporter: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
>
> We have encountered an issue when Flink fails to install python dependencies 
> from requirements file if python environment contains setuptools dependency 
> version 67.5.0 or above.
> Flink job fails with following error:
> {code:java}
> py4j.protocol.Py4JJavaError: An error occurred while calling o118.await.
> : java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
> at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:118)
> at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:81)
> ...
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.flink.client.program.ProgramInvocationException: Job failed 
> (JobID: 2ca4026944022ac4537c503464d4c47f)
> at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> at 
> org.apache.flink.table.api.internal.InsertResultProvider.hasNext(InsertResultProvider.java:83)
> ... 6 more
> Caused by: org.apache.flink.client.program.ProgramInvocationException: Job 
> failed (JobID: 2ca4026944022ac4537c503464d4c47f)
> at 
> org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:130)
> at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
> at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
> ...
> Caused by: java.io.IOException: java.io.IOException: Failed to execute the 
> command: /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install 
> --ignore-installed -r 
> /var/folders/rb/q_3h54_94b57gz_qkbp593vwgn/T/tm_localhost:63904-4178d3/blobStorage/job_2ca4026944022ac4537c503464d4c47f/blob_p-feb04bed919e628d98b1ef085111482f05bf43a1-1f5c3fda7cbdd6842e950e04d9d807c5
>  --install-option 
> --prefix=/var/folders/rb/q_3h54_94b57gz_qkbp593vwgn/T/python-dist-c17912db-5aba-439f-9e39-69cb8c957bec/python-requirements
> output:
> Usage:
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] 
>  [package-index-options] ...
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] -r 
>  [package-index-options] ...
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] 
> [-e]  ...
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] 
> [-e]  ...
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] 
>  ...
> no such option: --install-option
> at 
> org.apache.flink.python.util.PythonEnvironmentManagerUtils.pipInstallRequirements(PythonEnvironmentManagerUtils.java:144)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.installRequirements(AbstractPythonEnvironmentManager.java:215)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.lambda$open$0(AbstractPythonEnvironmentManager.java:126)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager$PythonEnvResources.createResource(AbstractPythonEnvironmentManager.java:435)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager$PythonEnvResources.getOrAllocateSharedResource(AbstractPythonEnvironmentManager.java:402)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.open(AbstractPythonEnvironmentManager.java:114)
> at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:238)
> at 
> org.apache.flink.streaming.api.operators.python.process.AbstractExternalPythonFunctionOperator.open(AbstractExternalPythonFunctionOperator.java:57)
> at 
> 

[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Samrat Deb (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761794#comment-17761794
 ] 

Samrat Deb commented on FLINK-28513:


I don't have access to update the fix version. Please help update the `Fix 
Version/s` for this issue.

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using the Table API, with checkpoint configuration using s3p (presto). I even 
> tried a simple datagen example instead of Kafka, with the local file system as 
> checkpoint storage (`file:///` instead of `s3p://`), and I get the same 
> issue. It fails exactly when the code triggers the checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am marking 
> this as critical. If this is not a relevant severity, feel free to reduce the 
> priority. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #23352: [FLINK-28513] Fix Flink Table API CSV streaming sink throws

2023-09-04 Thread via GitHub


flinkbot commented on PR #23352:
URL: https://github.com/apache/flink/pull/23352#issuecomment-1705123599

   
   ## CI report:
   
   * e7167b273667e84ea7b01434ededc8e6aca0c8c5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Samrat Deb (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761790#comment-17761790
 ] 

Samrat Deb commented on FLINK-28513:


Hi [~liangtl] 

backport for 1.17 :- [https://github.com/apache/flink/pull/23351]
backport for 1.18 :-  [https://github.com/apache/flink/pull/23352]

Please review whenever you have time. 

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using the Table API, with checkpoint configuration using s3p (presto). I even 
> tried a simple datagen example instead of Kafka, with the local file system as 
> checkpoint storage (`file:///` instead of `s3p://`), and I get the same 
> issue. It fails exactly when the code triggers the checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am marking 
> this as critical. If this is not a relevant severity, feel free to reduce the 
> priority. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #23351: [FLINK-28513] Fix Flink Table API CSV streaming sink throws

2023-09-04 Thread via GitHub


flinkbot commented on PR #23351:
URL: https://github.com/apache/flink/pull/23351#issuecomment-1705116714

   
   ## CI report:
   
   * 97c206b9ef38a715912a103763727af726f020c2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] Samrat002 opened a new pull request, #23352: [FLINK-28513] Fix Flink Table API CSV streaming sink throws

2023-09-04 Thread via GitHub


Samrat002 opened a new pull request, #23352:
URL: https://github.com/apache/flink/pull/23352

   ## What is the purpose of the change
   Backport https://github.com/apache/flink/pull/21458 for 1.18


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] Jiabao-Sun commented on a diff in pull request #23243: [FLINK-32846][runtime][JUnit5 Migration] The metrics package of flink-runtime module

2023-09-04 Thread via GitHub


Jiabao-Sun commented on code in PR #23243:
URL: https://github.com/apache/flink/pull/23243#discussion_r1314789249


##
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroupTest.java:
##
@@ -75,30 +70,30 @@ protected String getGroupName(CharacterFilter filter) {
 return "";
 }
 };
-assertTrue(group.getAllVariables().isEmpty());
+assertThat(group.getAllVariables().isEmpty()).isTrue();
 
 registry.closeAsync().get();
 }
 
 @Test
-public void testGetAllVariablesWithOutExclusions() {
+void testGetAllVariablesWithOutExclusions() {
 MetricRegistry registry = NoOpMetricRegistry.INSTANCE;
 
 AbstractMetricGroup group = new ProcessMetricGroup(registry, 
"host");
-assertThat(group.getAllVariables(), 
IsMapContaining.hasKey(ScopeFormat.SCOPE_HOST));
+
assertThat(group.getAllVariables()).containsKey(ScopeFormat.SCOPE_HOST);
 }
 
 @Test
-public void testGetAllVariablesWithExclusions() {
+void testGetAllVariablesWithExclusions() {
 MetricRegistry registry = NoOpMetricRegistry.INSTANCE;
 
 AbstractMetricGroup group = new ProcessMetricGroup(registry, 
"host");
-assertEquals(
-group.getAllVariables(-1, 
Collections.singleton(ScopeFormat.SCOPE_HOST)).size(), 0);
+assertThat(group.getAllVariables(-1, 
Collections.singleton(ScopeFormat.SCOPE_HOST)))
+.hasSize(0);

Review Comment:
   ```suggestion
   .isEmpty();
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/JobManagerJobGroupTest.java:
##
@@ -88,17 +85,17 @@ public void testGenerateScopeCustomWildcard() throws 
Exception {
 JobManagerMetricGroup.createJobManagerMetricGroup(registry, 
"theHostName")
 .addJob(jid, "myJobName");
 
-assertArrayEquals(
-new String[] {"peter", "some-constant", jid.toString()},
-jmGroup.getScopeComponents());
+assertThat(jmGroup.getScopeComponents())
+.containsAnyOf("peter", "some-constant", jid.toString());

Review Comment:
   ```suggestion
   .containsExactly("peter", "some-constant", jid.toString());
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/TaskManagerJobGroupTest.java:
##
@@ -86,15 +82,14 @@ public void testGenerateScopeCustom() throws Exception {
 registry, "theHostName", new ResourceID("test-tm-id"));
 JobMetricGroup jmGroup = new TaskManagerJobMetricGroup(registry, 
tmGroup, jid, "myJobName");
 
-assertArrayEquals(
-new String[] {"some-constant", "myJobName"}, 
jmGroup.getScopeComponents());
+
assertThat(jmGroup.getScopeComponents()).containsAnyOf("some-constant", 
"myJobName");

Review Comment:
   ```suggestion
   
assertThat(jmGroup.getScopeComponents()).containsExactly("some-constant", 
"myJobName");
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/JobManagerJobGroupTest.java:
##
@@ -45,17 +43,17 @@ public void testGenerateScopeDefault() throws Exception {
 JobManagerMetricGroup.createJobManagerMetricGroup(registry, 
"theHostName")
 .addJob(new JobID(), "myJobName");
 
-assertArrayEquals(
-new String[] {"theHostName", "jobmanager", "myJobName"},
-jmGroup.getScopeComponents());
+assertThat(jmGroup.getScopeComponents())
+.containsAnyOf("theHostName", "jobmanager", "myJobName");

Review Comment:
   ```suggestion
   .containsExactly("theHostName", "jobmanager", "myJobName");
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/MetricGroupTest.java:
##
@@ -87,14 +79,14 @@ public void sameGroupOnNameCollision() {
 MetricGroup subgroup1 = group.addGroup(groupName);
 MetricGroup subgroup2 = group.addGroup(groupName);
 
-assertNotNull(subgroup1);
-assertNotNull(subgroup2);
-assertTrue(subgroup1 == subgroup2);
+assertThat(subgroup1).isNotNull();
+assertThat(subgroup2).isNotNull();
+assertThat(subgroup1).isEqualTo(subgroup2);

Review Comment:
   ```suggestion
  assertThat(subgroup1).isSameAs(subgroup2);
   ```



##
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/TaskMetricGroupTest.java:
##
@@ -107,20 +100,17 @@ public void testGenerateScopeCustom() throws Exception {
 TaskMetricGroup taskGroup =
 tmGroup.addJob(jid, "myJobName").addTask(executionId, 
"aTaskName");
 
-assertArrayEquals(
-new String[] {
-"test-tm-id", jid.toString(), vertexId.toString(), 
executionId.toString()
-},
-  

[GitHub] [flink] Samrat002 opened a new pull request, #23351: [FLINK-28513] Fix Flink Table API CSV streaming sink throws

2023-09-04 Thread via GitHub


Samrat002 opened a new pull request, #23351:
URL: https://github.com/apache/flink/pull/23351

   
   
   ## What is the purpose of the change
   Backport https://github.com/apache/flink/pull/21458 for 1.17
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] wangzzu commented on a diff in pull request #23265: [FLINK-32853][runtime][JUnit5 Migration] The security, taskmanager an…

2023-09-04 Thread via GitHub


wangzzu commented on code in PR #23265:
URL: https://github.com/apache/flink/pull/23265#discussion_r1314803293


##
flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskCancelAsyncProducerConsumerITCase.java:
##
@@ -187,11 +185,11 @@ public void 
testCancelAsyncProducerAndConsumer(@InjectMiniCluster MiniCluster fl
 .get(deadline.timeLeft().toMillis(), TimeUnit.MILLISECONDS);
 
 // Verify the expected Exceptions
-assertNotNull(ASYNC_PRODUCER_EXCEPTION);
-assertEquals(CancelTaskException.class, 
ASYNC_PRODUCER_EXCEPTION.getClass());
+assertThat(ASYNC_PRODUCER_EXCEPTION).isNotNull();
+
assertThat(ASYNC_PRODUCER_EXCEPTION).isInstanceOf(CancelTaskException.class);

Review Comment:
   
`assertThat(ASYNC_PRODUCER_EXCEPTION).isNotNull().isInstanceOf(CancelTaskException.class);`



##
flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskManagerLocationTest.java:
##
@@ -38,57 +38,61 @@ class TaskManagerLocationTest {
 
 @Test
 void testEqualsHashAndCompareTo() {
-try {
-ResourceID resourceID1 = new ResourceID("a");
-ResourceID resourceID2 = new ResourceID("b");
-ResourceID resourceID3 = new ResourceID("c");
-
-// we mock the addresses to save the times of the reverse name 
lookups
-InetAddress address1 = mock(InetAddress.class);
-when(address1.getCanonicalHostName()).thenReturn("localhost");
-when(address1.getHostName()).thenReturn("localhost");
-when(address1.getHostAddress()).thenReturn("127.0.0.1");
-when(address1.getAddress()).thenReturn(new byte[] {127, 0, 0, 1});
-
-InetAddress address2 = mock(InetAddress.class);
-when(address2.getCanonicalHostName()).thenReturn("testhost1");
-when(address2.getHostName()).thenReturn("testhost1");
-when(address2.getHostAddress()).thenReturn("0.0.0.0");
-when(address2.getAddress()).thenReturn(new byte[] {0, 0, 0, 0});
-
-InetAddress address3 = mock(InetAddress.class);
-when(address3.getCanonicalHostName()).thenReturn("testhost2");
-when(address3.getHostName()).thenReturn("testhost2");
-when(address3.getHostAddress()).thenReturn("192.168.0.1");
-when(address3.getAddress()).thenReturn(new byte[] {(byte) 192, 
(byte) 168, 0, 1});
-
-// one == four != two != three
-TaskManagerLocation one = new TaskManagerLocation(resourceID1, 
address1, 19871);
-TaskManagerLocation two = new TaskManagerLocation(resourceID2, 
address2, 19871);
-TaskManagerLocation three = new TaskManagerLocation(resourceID3, 
address3, 10871);
-TaskManagerLocation four = new TaskManagerLocation(resourceID1, 
address1, 19871);
-
-assertThat(one).isEqualTo(four);
-assertThat(one).isNotEqualTo(two);
-assertThat(one).isNotEqualTo(three);
-assertThat(two).isNotEqualTo(three);
-assertThat(three).isNotEqualTo(four);
-
-assertThat(one.compareTo(four)).isEqualTo(0);
-assertThat(four.compareTo(one)).isEqualTo(0);
-assertThat(one.compareTo(two)).isNotEqualTo(0);
-assertThat(one.compareTo(three)).isNotEqualTo(0);
-assertThat(two.compareTo(three)).isNotEqualTo(0);
-assertThat(three.compareTo(four)).isNotEqualTo(0);
-
-{
-int val = one.compareTo(two);
-assertThat(two.compareTo(one)).isEqualTo(-val);
-}
-} catch (Exception e) {
-e.printStackTrace();
-fail(e.getMessage());
-}
+assertThatCode(
+() -> {
+ResourceID resourceID1 = new ResourceID("a");
+ResourceID resourceID2 = new ResourceID("b");
+ResourceID resourceID3 = new ResourceID("c");
+
+// we mock the addresses to save the times of the 
reverse name lookups
+InetAddress address1 = mock(InetAddress.class);
+
when(address1.getCanonicalHostName()).thenReturn("localhost");
+
when(address1.getHostName()).thenReturn("localhost");
+
when(address1.getHostAddress()).thenReturn("127.0.0.1");
+when(address1.getAddress()).thenReturn(new byte[] 
{127, 0, 0, 1});
+
+InetAddress address2 = mock(InetAddress.class);
+
when(address2.getCanonicalHostName()).thenReturn("testhost1");
+
when(address2.getHostName()).thenReturn("testhost1");
+
when(address2.getHostAddress()).thenReturn("0.0.0.0");
+when(address2.getAddress()).thenReturn(new 

[GitHub] [flink] zhougit86 commented on pull request #23323: [FLINK-32738][formats] PROTOBUF format supports projection push down

2023-09-04 Thread via GitHub


zhougit86 commented on PR #23323:
URL: https://github.com/apache/flink/pull/23323#issuecomment-1705076440

   > @zhougit86 Yes, I can review this, however, I have limited spare time, 
which means that I may respond slowly.
   
   Ok, take your time. Let me know if any clarification is needed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zhougit86 commented on pull request #23323: [FLINK-32738][formats] PROTOBUF format supports projection push down

2023-09-04 Thread via GitHub


zhougit86 commented on PR #23323:
URL: https://github.com/apache/flink/pull/23323#issuecomment-1705075520

   > 
   
   Ok, take your time. Let me know if any clarification is needed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761787#comment-17761787
 ] 

Hong Liang Teoh commented on FLINK-28513:
-

[~samrat007]  Could we backport this bugfix to Flink 1.17 and 1.18 as well?

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using the Table API, with checkpoint configuration using s3p (presto). I even 
> tried a simple datagen example instead of Kafka, with the local file system as 
> checkpoint storage (`file:///` instead of `s3p://`), and I get the same 
> issue. It fails exactly when the code triggers the checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am marking 
> this as critical. If this is not a relevant severity, feel free to reduce the 
> priority. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hong Liang Teoh resolved FLINK-28513.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using table API and checkpoint configuration using s3p (presto). Even I tried 
> with a simple datagen example instead of Kafka with local file system as 
> checkpointing (`file:///` instead of `s3p://`) and I am getting the same 
> issue. Exactly it is fails when the code triggers the checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am 
> marking this as critical. If this is not the appropriate severity, feel free 
> to reduce the priority. 
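A minimal sketch of the direction of the fix (see the merge comment later in this thread), assuming the root cause is the sync() call in the bulk writer's finish(): durability for the recoverable S3 stream comes from persist() at checkpoint time, so finish() should only flush. Class and field names below are illustrative, not the merged patch.
{code:java}
// Hedged sketch, not the actual patch: a BulkWriter whose finish() flushes
// without calling sync() on the recoverable stream.
import org.apache.flink.api.common.serialization.BulkWriter;
import org.apache.flink.core.fs.FSDataOutputStream;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

class FlushOnlyCsvWriter implements BulkWriter<String> {

    private final FSDataOutputStream stream;

    FlushOnlyCsvWriter(FSDataOutputStream stream) {
        this.stream = stream;
    }

    @Override
    public void addElement(String row) throws IOException {
        stream.write((row + "\n").getBytes(StandardCharsets.UTF_8));
    }

    @Override
    public void flush() throws IOException {
        stream.flush();
    }

    @Override
    public void finish() throws IOException {
        // Do NOT call stream.sync() here: S3RecoverableFsDataOutputStream
        // rejects sync() and only supports durability via persist().
        stream.flush();
    }
}
{code}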



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28513) Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread Hong Liang Teoh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761785#comment-17761785
 ] 

Hong Liang Teoh commented on FLINK-28513:
-

 merged commit 
[{{e921489}}|https://github.com/apache/flink/commit/e921489279ca70b179521ec4619514725b061491]
 into apache:master

> Flink Table API CSV streaming sink throws SerializedThrowable exception
> ---
>
> Key: FLINK-28513
> URL: https://issues.apache.org/jira/browse/FLINK-28513
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Table SQL / API
>Affects Versions: 1.15.1
>Reporter: Jaya Ananthram
>Assignee: Samrat Deb
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
>
> Table API S3 streaming sink (CSV format) throws the following exception,
> {code:java}
> Caused by: org.apache.flink.util.SerializedThrowable: 
> S3RecoverableFsDataOutputStream cannot sync state to S3. Use persist() to 
> create a persistent recoverable intermediate point.
> at 
> org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.sync(RefCountedBufferingFileStream.java:111)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.sync(S3RecoverableFsDataOutputStream.java:129)
>  ~[flink-s3-fs-hadoop-1.15.1.jar:1.15.1]
> at org.apache.flink.formats.csv.CsvBulkWriter.finish(CsvBulkWriter.java:110) 
> ~[flink-csv-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.FileSystemTableSink$ProjectionBulkFactory$1.finish(FileSystemTableSink.java:642)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.closeForCommit(BulkPartWriter.java:64)
>  ~[flink-file-sink-common-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.closePartFile(Bucket.java:263)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.prepareBucketForCheckpointing(Bucket.java:305)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.onReceptionOfCheckpoint(Bucket.java:277)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotActiveBuckets(Buckets.java:270)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.snapshotState(Buckets.java:261)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.snapshotState(StreamingFileSinkHelper.java:87)
>  ~[flink-streaming-java-1.15.1.jar:1.15.1]
> at 
> org.apache.flink.connector.file.table.stream.AbstractStreamingWriter.snapshotState(AbstractStreamingWriter.java:129)
>  ~[flink-connector-files-1.15.1.jar:1.15.1]
> {code}
> In my table config, I am trying to read from Kafka and write to S3 (s3a) 
> using the Table API, with checkpointing configured via s3p (presto). I even 
> tried a simple datagen example instead of Kafka, with the local file system 
> for checkpointing (`file:///` instead of `s3p://`), and I get the same 
> issue. It fails exactly when the code triggers a checkpoint.
> Some related slack conversation and SO conversation 
> [here|https://apache-flink.slack.com/archives/C03G7LJTS2G/p1657609776339389], 
> [here|https://stackoverflow.com/questions/62138635/flink-streaming-compression-not-working-using-amazon-aws-s3-connector-streaming]
>  and 
> [here|https://stackoverflow.com/questions/72943730/flink-table-api-streaming-s3-sink-throws-serializedthrowable-exception]
> Since there is no workaround for the S3 Table API streaming sink, I am 
> marking this as critical. If this is not the appropriate severity, feel free 
> to reduce the priority. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] hlteoh37 merged pull request #21458: [FLINK-28513] Fix Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread via GitHub


hlteoh37 merged PR #21458:
URL: https://github.com/apache/flink/pull/21458


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-hbase] boring-cyborg[bot] commented on pull request #18: [FLINK-30470] HBase change groupId for flink-connector-parent

2023-09-04 Thread via GitHub


boring-cyborg[bot] commented on PR #18:
URL: 
https://github.com/apache/flink-connector-hbase/pull/18#issuecomment-1705044964

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-hbase] snuyanzin opened a new pull request, #18: [FLINK-30470] HBase change groupId for flink-connector-parent

2023-09-04 Thread via GitHub


snuyanzin opened a new pull request, #18:
URL: https://github.com/apache/flink-connector-hbase/pull/18

   The PR makes use of `org.apache.flink` for parent group id


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33008) Use KubernetesClient from JOSDK context

2023-09-04 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-33008.
--
Fix Version/s: kubernetes-operator-1.7.0
   Resolution: Fixed

merged to main 3f537b839994ec726dd72411ecc3a0475a6ab9e4

> Use KubernetesClient from JOSDK context
> ---
>
> Key: FLINK-33008
> URL: https://issues.apache.org/jira/browse/FLINK-33008
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Daren Wong
>Priority: Major
> Fix For: kubernetes-operator-1.7.0
>
>
> We are currently manually creating and passing around the KubernetesClient 
> instances. 
> This is now accessible directly from the JOSDK Context, so we should use it 
> from there to simplify the code.
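A hedged sketch of what "use it from there" can look like, assuming a JOSDK version whose Context exposes the operator-managed client via Context#getClient(); the reconciler shape is illustrative, not the actual operator code.
{code:java}
import io.fabric8.kubernetes.client.KubernetesClient;
import io.javaoperatorsdk.operator.api.reconciler.Context;
import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;
import org.apache.flink.kubernetes.operator.api.FlinkDeployment;

class ExampleReconciler implements Reconciler<FlinkDeployment> {

    @Override
    public UpdateControl<FlinkDeployment> reconcile(
            FlinkDeployment resource, Context<FlinkDeployment> context) {
        // Instead of creating and passing around our own client instance,
        // take the one managed by the framework from the context.
        KubernetesClient client = context.getClient();
        // ... reconcile 'resource' using 'client' ...
        return UpdateControl.noUpdate();
    }
}
{code}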



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] gyfora merged pull request #662: [FLINK-33008] Refactor to use KubernetesClient from JOSDK Context

2023-09-04 Thread via GitHub


gyfora merged PR #662:
URL: https://github.com/apache/flink-kubernetes-operator/pull/662


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hlteoh37 commented on pull request #21458: [FLINK-28513] Fix Flink Table API CSV streaming sink throws SerializedThrowable exception

2023-09-04 Thread via GitHub


hlteoh37 commented on PR #21458:
URL: https://github.com/apache/flink/pull/21458#issuecomment-170501

   Ok this looks good to me. Thanks for fixing and testing @Samrat002 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33027) Users should be able to change parallelism of excluded vertices

2023-09-04 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-33027:
--

 Summary: Users should be able to change parallelism of excluded 
vertices
 Key: FLINK-33027
 URL: https://issues.apache.org/jira/browse/FLINK-33027
 Project: Flink
  Issue Type: Bug
  Components: Autoscaler, Kubernetes Operator
Affects Versions: kubernetes-operator-1.6.0
Reporter: Gyula Fora
 Fix For: kubernetes-operator-1.7.0


Currently it's not possible to manually override any parallelism even for 
excluded vertices. We should allow this for manually excluded ones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33027) Users should be able to change parallelism of excluded vertices

2023-09-04 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-33027:
--

Assignee: Gyula Fora

> Users should be able to change parallelism of excluded vertices
> ---
>
> Key: FLINK-33027
> URL: https://issues.apache.org/jira/browse/FLINK-33027
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler, Kubernetes Operator
>Affects Versions: kubernetes-operator-1.6.0
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
> Fix For: kubernetes-operator-1.7.0
>
>
> Currently it's not possible to manually override any parallelism even for 
> excluded vertices. We should allow this for manually excluded ones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33018) GCP Pubsub PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream failed

2023-09-04 Thread Jayadeep Jayaraman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761774#comment-17761774
 ] 

Jayadeep Jayaraman commented on FLINK-33018:


I checked, and it looks like the test was flaky; I re-ran it in my environment 
and it passed successfully. Can you please re-run the test?

> GCP Pubsub 
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream
>  failed
> 
>
> Key: FLINK-33018
> URL: https://issues.apache.org/jira/browse/FLINK-33018
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: gcp-pubsub-3.0.2
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/6061318336/job/16446392844#step:13:507
> {code:java}
> [INFO] 
> [INFO] Results:
> [INFO] 
> Error:  Failures: 
> Error:
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream:119
>  
> expected: ["1", "2", "3"]
>  but was: ["1", "2"]
> [INFO] 
> Error:  Tests run: 30, Failures: 1, Errors: 0, Skipped: 0
> [INFO] 
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #23350: [FLINK-33026][doc] Fix the title of sql 'Performance Tuning' in the chinese index page

2023-09-04 Thread via GitHub


flinkbot commented on PR #23350:
URL: https://github.com/apache/flink/pull/23350#issuecomment-1704989427

   
   ## CI report:
   
   * dc0beaeff62124609ceaf9e2139c9bb7765c0d87 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] TanYuxin-tyx commented on a diff in pull request #23255: [FLINK-32870][network] Tiered storage supports reading multiple small buffers by reading and slicing one large buffer

2023-09-04 Thread via GitHub


TanYuxin-tyx commented on code in PR #23255:
URL: https://github.com/apache/flink/pull/23255#discussion_r1314726683


##
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/PartitionFileReader.java:
##
@@ -78,4 +91,165 @@ long getPriority(
 
 /** Release the {@link PartitionFileReader}. */
 void release();
+
+/** A {@link PartialBuffer} is a part slice of a larger buffer. */
+class PartialBuffer implements Buffer {
+
+private final long fileOffset;

Review Comment:
   Because removing the `fileOffset` may significantly affect performance, we 
added the `fileOffset` back to `PartialBuffer`.
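   A hedged illustration of that design choice (illustrative class, not the 
actual Flink Buffer API): keeping the absolute file offset on each slice lets 
the reader continue sequential reads at fileOffset + length instead of 
re-deriving the position of every slice inside the large read buffer.

```java
// Illustrative only: a partial slice that remembers where it starts in the
// backing partition file.
class PartialSlice {
    private final long fileOffset; // absolute position in the partition file
    private final int length;      // bytes covered by this slice

    PartialSlice(long fileOffset, int length) {
        this.fileOffset = fileOffset;
        this.length = length;
    }

    // The next sequential read can start right after this slice.
    long nextReadOffset() {
        return fileOffset + length;
    }
}
```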



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] lincoln-lil opened a new pull request, #23350: [FLINK-33026][doc] Fix the title of sql 'Performance Tuning' in the chinese index page

2023-09-04 Thread via GitHub


lincoln-lil opened a new pull request, #23350:
URL: https://github.com/apache/flink/pull/23350

   This is a minor fix for the page title; after the fix, the preview page will be:
   
![image](https://github.com/apache/flink/assets/3712895/6d95c767-ab27-4c7f-a33d-c43379d2312b)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] darenwkt commented on a diff in pull request #662: [FLINK-33008] Refactor to use KubernetesClient from JOSDK Context

2023-09-04 Thread via GitHub


darenwkt commented on code in PR #662:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/662#discussion_r1314722812


##
flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/listener/FlinkResourceListener.java:
##
@@ -45,8 +44,6 @@ public interface FlinkResourceListener extends Plugin {
 interface ResourceContext> {
 R getFlinkResource();
 
-KubernetesClient getKubernetesClient();

Review Comment:
   Thanks for the comment, have addressed it in latest commit



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33026) The chinese doc of sql 'Performance Tuning' has a wrong title in the index page

2023-09-04 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-33026:
---

Assignee: lincoln lee

> The chinese doc of sql 'Performance Tuning' has a wrong title in the index 
> page
> ---
>
> Key: FLINK-33026
> URL: https://issues.apache.org/jira/browse/FLINK-33026
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
> Attachments: image-2023-09-04-13-35-20-832.png, 
> image-2023-09-04-13-36-02-139.png
>
>
> The chinese doc of sql 'Performance Tuning' has a wrong title in the index 
> page
>  !image-2023-09-04-13-36-02-139.png! 
>  !image-2023-09-04-13-35-20-832.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33018) GCP Pubsub PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream failed

2023-09-04 Thread Jayadeep Jayaraman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761768#comment-17761768
 ] 

Jayadeep Jayaraman commented on FLINK-33018:


Let me look into it and raise a PR

> GCP Pubsub 
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream
>  failed
> 
>
> Key: FLINK-33018
> URL: https://issues.apache.org/jira/browse/FLINK-33018
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: gcp-pubsub-3.0.2
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/6061318336/job/16446392844#step:13:507
> {code:java}
> [INFO] 
> [INFO] Results:
> [INFO] 
> Error:  Failures: 
> Error:
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream:119
>  
> expected: ["1", "2", "3"]
>  but was: ["1", "2"]
> [INFO] 
> Error:  Tests run: 30, Failures: 1, Errors: 0, Skipped: 0
> [INFO] 
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33026) The chinese doc of sql 'Performance Tuning' has a wrong title in the index page

2023-09-04 Thread lincoln lee (Jira)
lincoln lee created FLINK-33026:
---

 Summary: The chinese doc of sql 'Performance Tuning' has a wrong 
title in the index page
 Key: FLINK-33026
 URL: https://issues.apache.org/jira/browse/FLINK-33026
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: lincoln lee
 Attachments: image-2023-09-04-13-35-20-832.png, 
image-2023-09-04-13-36-02-139.png

The chinese doc of sql 'Performance Tuning' has a wrong title in the index page
 !image-2023-09-04-13-36-02-139.png! 

 !image-2023-09-04-13-35-20-832.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] echauchot commented on pull request #22985: [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler

2023-09-04 Thread via GitHub


echauchot commented on PR #22985:
URL: https://github.com/apache/flink/pull/22985#issuecomment-1704964511

   @1996fanrui I addressed all your comments and I think we are in sync. Can 
you resume your review?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33018) GCP Pubsub PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream failed

2023-09-04 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761762#comment-17761762
 ] 

Martijn Visser commented on FLINK-33018:


[~jjayadeep] I don't know; if you think that's the fix, then I'm more than 
happy to get it merged :)

> GCP Pubsub 
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream
>  failed
> 
>
> Key: FLINK-33018
> URL: https://issues.apache.org/jira/browse/FLINK-33018
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: gcp-pubsub-3.0.2
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/6061318336/job/16446392844#step:13:507
> {code:java}
> [INFO] 
> [INFO] Results:
> [INFO] 
> Error:  Failures: 
> Error:
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream:119
>  
> expected: ["1", "2", "3"]
>  but was: ["1", "2"]
> [INFO] 
> Error:  Tests run: 30, Failures: 1, Errors: 0, Skipped: 0
> [INFO] 
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33021) AWS nightly builds fails on architecture tests

2023-09-04 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761761#comment-17761761
 ] 

Martijn Visser commented on FLINK-33021:


[~liangtl] Ah, I think that shouldn't be fixed with a hotfix but with a ticket 
since you basically now have a broken implementation for 1.18. 

> AWS nightly builds fails on architecture tests
> --
>
> Key: FLINK-33021
> URL: https://issues.apache.org/jira/browse/FLINK-33021
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS
>Affects Versions: aws-connector-4.2.0
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-aws/actions/runs/6067488560/job/16459208589#step:9:879
> {code:java}
> Error:  Failures: 
> Error:Architecture Violation [Priority: MEDIUM] - Rule 'ITCASE tests 
> should use a MiniCluster resource or extension' was violated (1 times):
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase does not 
> satisfy: only one of the following predicates match:
> * reside in a package 'org.apache.flink.runtime.*' and contain any fields 
> that are static, final, and of type InternalMiniClusterExtension and 
> annotated with @RegisterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and contain any 
> fields that are static, final, and of type MiniClusterExtension and annotated 
> with @RegisterExtension or are , and of type MiniClusterTestEnvironment and 
> annotated with @TestEnv
> * reside in a package 'org.apache.flink.runtime.*' and is annotated with 
> @ExtendWith with class InternalMiniClusterExtension
> * reside outside of package 'org.apache.flink.runtime.*' and is annotated 
> with @ExtendWith with class MiniClusterExtension
>  or contain any fields that are public, static, and of type 
> MiniClusterWithClientResource and final and annotated with @ClassRule or 
> contain any fields that is of type MiniClusterWithClientResource and public 
> and final and not static and annotated with @Rule
> [INFO] 
> Error:  Tests run: 21, Failures: 1, Errors: 0, Skipped: 0
> {code}
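For reference, a hedged sketch of a test-class shape that satisfies the quoted rule for classes outside org.apache.flink.runtime.* (a static final MiniClusterExtension field annotated with @RegisterExtension); the class name and test body are illustrative.
{code:java}
import org.apache.flink.test.junit5.MiniClusterExtension;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

class ExampleSinkITCase {

    @RegisterExtension
    static final MiniClusterExtension MINI_CLUSTER = new MiniClusterExtension();

    @Test
    void runsAgainstMiniCluster() {
        // ... exercise the connector against the shared MiniCluster ...
    }
}
{code}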



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33018) GCP Pubsub PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream failed

2023-09-04 Thread Jayadeep Jayaraman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761756#comment-17761756
 ] 

Jayadeep Jayaraman commented on FLINK-33018:


[~martijnvisser] - Shouldn't the assert just check for ("A","B"), as mentioned in 
[https://github.com/apache/flink-connector-gcp-pubsub/blame/4fdaca7b42969d19bd939c0823afb5372c51461b/flink-connector-gcp-pubsub/src/test/java/org/apache/flink/streaming/connectors/gcp/pubsub/PubSubConsumingTest.java#L109-L119C62]?
 Or should the assert be changed to expect the full list ("A","B","C")? A sketch 
of the two options follows below.
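A hedged sketch of the two options; the method and the 'results' list are illustrative wiring, not the actual PubSubConsumingTest code.
{code:java}
import static org.assertj.core.api.Assertions.assertThat;

import java.util.List;

class EndOfStreamAssertionSketch {
    void check(List<String> results) {
        // Option 1: the end-of-stream marker record is excluded from output.
        assertThat(results).containsExactly("A", "B");
        // Option 2 (alternative): the marker is kept as the last record:
        // assertThat(results).containsExactly("A", "B", "C");
    }
}
{code}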

> GCP Pubsub 
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream
>  failed
> 
>
> Key: FLINK-33018
> URL: https://issues.apache.org/jira/browse/FLINK-33018
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: gcp-pubsub-3.0.2
>Reporter: Martijn Visser
>Priority: Blocker
>
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/6061318336/job/16446392844#step:13:507
> {code:java}
> [INFO] 
> [INFO] Results:
> [INFO] 
> Error:  Failures: 
> Error:
> PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream:119
>  
> expected: ["1", "2", "3"]
>  but was: ["1", "2"]
> [INFO] 
> Error:  Tests run: 30, Failures: 1, Errors: 0, Skipped: 0
> [INFO] 
> [INFO] 
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26644) python StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies failed on azure

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761755#comment-17761755
 ] 

Sergey Nuyanzin commented on FLINK-26644:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52963=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=26957

> python 
> StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies 
> failed on azure
> ---
>
> Key: FLINK-26644
> URL: https://issues.apache.org/jira/browse/FLINK-26644
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.4, 1.15.0, 1.16.0, 1.19.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2022-03-14T18:50:24.6842853Z Mar 14 18:50:24 
> === FAILURES 
> ===
> 2022-03-14T18:50:24.6844089Z Mar 14 18:50:24 _ 
> StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies _
> 2022-03-14T18:50:24.6844846Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6846063Z Mar 14 18:50:24 self = 
>   testMethod=test_generate_stream_graph_with_dependencies>
> 2022-03-14T18:50:24.6847104Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6847766Z Mar 14 18:50:24 def 
> test_generate_stream_graph_with_dependencies(self):
> 2022-03-14T18:50:24.6848677Z Mar 14 18:50:24 python_file_dir = 
> os.path.join(self.tempdir, "python_file_dir_" + str(uuid.uuid4()))
> 2022-03-14T18:50:24.6849833Z Mar 14 18:50:24 os.mkdir(python_file_dir)
> 2022-03-14T18:50:24.6850729Z Mar 14 18:50:24 python_file_path = 
> os.path.join(python_file_dir, "test_stream_dependency_manage_lib.py")
> 2022-03-14T18:50:24.6852679Z Mar 14 18:50:24 with 
> open(python_file_path, 'w') as f:
> 2022-03-14T18:50:24.6853646Z Mar 14 18:50:24 f.write("def 
> add_two(a):\nreturn a + 2")
> 2022-03-14T18:50:24.6854394Z Mar 14 18:50:24 env = self.env
> 2022-03-14T18:50:24.6855019Z Mar 14 18:50:24 
> env.add_python_file(python_file_path)
> 2022-03-14T18:50:24.6855519Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6856254Z Mar 14 18:50:24 def plus_two_map(value):
> 2022-03-14T18:50:24.6857045Z Mar 14 18:50:24 from 
> test_stream_dependency_manage_lib import add_two
> 2022-03-14T18:50:24.6857865Z Mar 14 18:50:24 return value[0], 
> add_two(value[1])
> 2022-03-14T18:50:24.6858466Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6858924Z Mar 14 18:50:24 def add_from_file(i):
> 2022-03-14T18:50:24.6859806Z Mar 14 18:50:24 with 
> open("data/data.txt", 'r') as f:
> 2022-03-14T18:50:24.6860266Z Mar 14 18:50:24 return i[0], 
> i[1] + int(f.read())
> 2022-03-14T18:50:24.6860879Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6862022Z Mar 14 18:50:24 from_collection_source = 
> env.from_collection([('a', 0), ('b', 0), ('c', 1), ('d', 1),
> 2022-03-14T18:50:24.6863259Z Mar 14 18:50:24  
>  ('e', 2)],
> 2022-03-14T18:50:24.6864057Z Mar 14 18:50:24  
> type_info=Types.ROW([Types.STRING(),
> 2022-03-14T18:50:24.6864651Z Mar 14 18:50:24  
>  Types.INT()]))
> 2022-03-14T18:50:24.6865150Z Mar 14 18:50:24 
> from_collection_source.name("From Collection")
> 2022-03-14T18:50:24.6866212Z Mar 14 18:50:24 keyed_stream = 
> from_collection_source.key_by(lambda x: x[1], key_type=Types.INT())
> 2022-03-14T18:50:24.6867083Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6867793Z Mar 14 18:50:24 plus_two_map_stream = 
> keyed_stream.map(plus_two_map).name("Plus Two Map").set_parallelism(3)
> 2022-03-14T18:50:24.6868620Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6869412Z Mar 14 18:50:24 add_from_file_map = 
> plus_two_map_stream.map(add_from_file).name("Add From File Map")
> 2022-03-14T18:50:24.6870239Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6870883Z Mar 14 18:50:24 test_stream_sink = 
> add_from_file_map.add_sink(self.test_sink).name("Test Sink")
> 2022-03-14T18:50:24.6871803Z Mar 14 18:50:24 
> test_stream_sink.set_parallelism(4)
> 2022-03-14T18:50:24.6872291Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6872756Z Mar 14 18:50:24 archive_dir_path = 
> os.path.join(self.tempdir, "archive_" + str(uuid.uuid4()))
> 2022-03-14T18:50:24.6873557Z Mar 14 18:50:24 
> os.mkdir(archive_dir_path)
> 2022-03-14T18:50:24.6874817Z Mar 14 18:50:24 with 
> open(os.path.join(archive_dir_path, "data.txt"), 'w') as f:
> 2022-03-14T18:50:24.6875414Z Mar 14 18:50:24 

[jira] [Commented] (FLINK-32803) Release Testing: Verify FLINK-32165 - Improve observability of fine-grained resource management

2023-09-04 Thread Weihua Hu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761754#comment-17761754
 ] 

Weihua Hu commented on FLINK-32803:
---

[~chesnay], Hi, are there any other scenarios you would like to test?

> Release Testing: Verify FLINK-32165 - Improve observability of fine-grained 
> resource management
> ---
>
> Key: FLINK-32803
> URL: https://issues.apache.org/jira/browse/FLINK-32803
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Weihua Hu
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-30-10-56-09-174.png, 
> image-2023-08-30-10-57-11-049.png, image-2023-08-30-10-58-34-939.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32988) HiveITCase failed due to TestContainer not coming up

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761753#comment-17761753
 ] 

Sergey Nuyanzin commented on FLINK-32988:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52963=logs=87489130-75dc-54e4-1f45-80c30aa367a3=efbee0b1-38ac-597d-6466-1ea8fc908c50=16231

> HiveITCase failed due to TestContainer not coming up
> 
>
> Key: FLINK-32988
> URL: https://issues.apache.org/jira/browse/FLINK-32988
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.2, 1.18.0, 1.17.1, 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=87489130-75dc-54e4-1f45-80c30aa367a3=efbee0b1-38ac-597d-6466-1ea8fc908c50=15866
> {code}
> Aug 29 02:47:56 org.testcontainers.containers.ContainerLaunchException: 
> Container startup failed for image prestodb/hive3.1-hive:10
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:349)
> Aug 29 02:47:56   at 
> org.apache.flink.tests.hive.containers.HiveContainer.doStart(HiveContainer.java:81)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1131)
> Aug 29 02:47:56   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28)
> Aug 29 02:47:56   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Aug 29 02:47:56   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Aug 29 02:47:56   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Aug 29 02:47:56   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> Aug 29 02:47:56   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Aug 29 02:47:56   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Aug 29 02:47:56   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33025) BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount fails on AZP

2023-09-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33025:

Affects Version/s: 1.16.3

> BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount
>  fails on AZP
> -
>
> Key: FLINK-33025
> URL: https://issues.apache.org/jira/browse/FLINK-33025
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.3, 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=22618
>  fails on AZP as
> {noformat}
> Sep 03 05:05:38 05:05:38.220 [ERROR] Failures: 
> Sep 03 05:05:38 05:05:38.220 [ERROR]   
> BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount:122->ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals:62
>  
> Sep 03 05:05:38 Expected size: 4 but was: 3 in:
> Sep 03 05:05:38 [Record @ (undef) : +I(c1,c2,0,0,0),
> Sep 03 05:05:38 Record @ (undef) : +I(c1,c4,1,0,0),
> Sep 03 05:05:38 Record @ (undef) : +I(c1,c6,2,10,2)]
> {noformat}
> probably related to FLINK-26990



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-26990) BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime failed

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ https://issues.apache.org/jira/browse/FLINK-26990 ]


Sergey Nuyanzin deleted comment on FLINK-26990:
-

was (Author: sergey nuyanzin):
1.16: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52953=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=26033

> BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>  failed
> ---
>
> Key: FLINK-26990
> URL: https://issues.apache.org/jira/browse/FLINK-26990
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.2
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=914=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=25899]
>  failed due to unexpected behavior in 
> {{BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime}}:
> {code}
> Apr 01 11:42:06 [ERROR] 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.batch.BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>   Time elapsed: 0.11 s  <<< FAILURE!
> Apr 01 11:42:06 java.lang.AssertionError: 
> Apr 01 11:42:06 
> Apr 01 11:42:06 Expected size: 6 but was: 4 in:
> Apr 01 11:42:06 [Record @ (undef) : 
> +I(c1,0,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,0,1970-01-01T00:00,1970-01-01T00:00:10),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,1,1970-01-01T00:00:05,1970-01-01T00:00:15),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,2,1970-01-01T00:00:10,1970-01-01T00:00:20)]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33025) BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount fails on AZP

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761751#comment-17761751
 ] 

Sergey Nuyanzin commented on FLINK-33025:
-

1.16: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52953=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=26033

> BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount
>  fails on AZP
> -
>
> Key: FLINK-33025
> URL: https://issues.apache.org/jira/browse/FLINK-33025
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=22618
>  fails on AZP as
> {noformat}
> Sep 03 05:05:38 05:05:38.220 [ERROR] Failures: 
> Sep 03 05:05:38 05:05:38.220 [ERROR]   
> BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount:122->ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals:62
>  
> Sep 03 05:05:38 Expected size: 4 but was: 3 in:
> Sep 03 05:05:38 [Record @ (undef) : +I(c1,c2,0,0,0),
> Sep 03 05:05:38 Record @ (undef) : +I(c1,c4,1,0,0),
> Sep 03 05:05:38 Record @ (undef) : +I(c1,c6,2,10,2)]
> {noformat}
> probably related to FLINK-26990



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33025) BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount fails on AZP

2023-09-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33025:

Description: 
This build 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=22618
 fails on AZP as
{noformat}
Sep 03 05:05:38 05:05:38.220 [ERROR] Failures: 
Sep 03 05:05:38 05:05:38.220 [ERROR]   
BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount:122->ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals:62
 
Sep 03 05:05:38 Expected size: 4 but was: 3 in:
Sep 03 05:05:38 [Record @ (undef) : +I(c1,c2,0,0,0),
Sep 03 05:05:38 Record @ (undef) : +I(c1,c4,1,0,0),
Sep 03 05:05:38 Record @ (undef) : +I(c1,c6,2,10,2)]

{noformat}

probably related to FLINK-26990

  was:
This build 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=22618
 fails on AZP as
{noformat}
Sep 03 05:05:38 05:05:38.220 [ERROR] Failures: 
Sep 03 05:05:38 05:05:38.220 [ERROR]   
BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount:122->ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals:62
 
Sep 03 05:05:38 Expected size: 4 but was: 3 in:
Sep 03 05:05:38 [Record @ (undef) : +I(c1,c2,0,0,0),
Sep 03 05:05:38 Record @ (undef) : +I(c1,c4,1,0,0),
Sep 03 05:05:38 Record @ (undef) : +I(c1,c6,2,10,2)]

{noformat}



> BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount
>  fails on AZP
> -
>
> Key: FLINK-33025
> URL: https://issues.apache.org/jira/browse/FLINK-33025
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=22618
>  fails on AZP as
> {noformat}
> Sep 03 05:05:38 05:05:38.220 [ERROR] Failures: 
> Sep 03 05:05:38 05:05:38.220 [ERROR]   
> BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount:122->ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals:62
>  
> Sep 03 05:05:38 Expected size: 4 but was: 3 in:
> Sep 03 05:05:38 [Record @ (undef) : +I(c1,c2,0,0,0),
> Sep 03 05:05:38 Record @ (undef) : +I(c1,c4,1,0,0),
> Sep 03 05:05:38 Record @ (undef) : +I(c1,c6,2,10,2)]
> {noformat}
> probably related to FLINK-26990



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33025) BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount fails on AZP

2023-09-04 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33025:
---

 Summary: 
BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount
 fails on AZP
 Key: FLINK-33025
 URL: https://issues.apache.org/jira/browse/FLINK-33025
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Sergey Nuyanzin


This build 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=22618
 fails on AZP as
{noformat}
Sep 03 05:05:38 05:05:38.220 [ERROR] Failures: 
Sep 03 05:05:38 05:05:38.220 [ERROR]   
BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount:122->ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals:62
 
Sep 03 05:05:38 Expected size: 4 but was: 3 in:
Sep 03 05:05:38 [Record @ (undef) : +I(c1,c2,0,0,0),
Sep 03 05:05:38 Record @ (undef) : +I(c1,c4,1,0,0),
Sep 03 05:05:38 Record @ (undef) : +I(c1,c6,2,10,2)]

{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33025) BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount fails on AZP

2023-09-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33025:

Affects Version/s: 1.19.0

> BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount
>  fails on AZP
> -
>
> Key: FLINK-33025
> URL: https://issues.apache.org/jira/browse/FLINK-33025
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=22618
>  fails on AZP as
> {noformat}
> Sep 03 05:05:38 05:05:38.220 [ERROR] Failures: 
> Sep 03 05:05:38 05:05:38.220 [ERROR]   
> BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount:122->ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals:62
>  
> Sep 03 05:05:38 Expected size: 4 but was: 3 in:
> Sep 03 05:05:38 [Record @ (undef) : +I(c1,c2,0,0,0),
> Sep 03 05:05:38 Record @ (undef) : +I(c1,c4,1,0,0),
> Sep 03 05:05:38 Record @ (undef) : +I(c1,c6,2,10,2)]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26515) RetryingExecutorTest. testDiscardOnTimeout failed on azure

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761749#comment-17761749
 ] 

Sergey Nuyanzin commented on FLINK-26515:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=10512

> RetryingExecutorTest. testDiscardOnTimeout failed on azure
> --
>
> Key: FLINK-26515
> URL: https://issues.apache.org/jira/browse/FLINK-26515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.3, 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
>
> {code:java}
> Mar 06 01:20:29 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 1.941 s <<< FAILURE! - in 
> org.apache.flink.changelog.fs.RetryingExecutorTest
> Mar 06 01:20:29 [ERROR] testTimeout  Time elapsed: 1.934 s  <<< FAILURE!
> Mar 06 01:20:29 java.lang.AssertionError: expected:<500.0> but 
> was:<1922.869766>
> Mar 06 01:20:29   at org.junit.Assert.fail(Assert.java:89)
> Mar 06 01:20:29   at org.junit.Assert.failNotEquals(Assert.java:835)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:555)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:685)
> Mar 06 01:20:29   at 
> org.apache.flink.changelog.fs.RetryingExecutorTest.testTimeout(RetryingExecutorTest.java:145)
> Mar 06 01:20:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 06 01:20:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 06 01:20:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 06 01:20:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Mar 06 01:20:29   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
> Mar 06 01:20:29   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Mar 06 01:20:29   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Mar 06 01:20:29   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Mar 06 01:20:29   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32569=logs=f450c1a5-64b1-5955-e215-49cb1ad5ec88=cc452273-9efa-565d-9db8-ef62a38a0c10=22554



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-18356) flink-table-planner Exit code 137 returned from process

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761747#comment-17761747
 ] 

Sergey Nuyanzin commented on FLINK-18356:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52955=logs=f2c100be-250b-5e85-7bbe-176f68fcddc5=05efd11e-5400-54a4-0d27-a4663be008a9=11993

> flink-table-planner Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0, 1.18.0
>Reporter: Piotr Nowojski
>Assignee: Yunhong Zheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Attachments: 1234.jpg, app-profiling_4.gif, 
> image-2023-01-11-22-21-57-784.png, image-2023-01-11-22-22-32-124.png, 
> image-2023-02-16-20-18-09-431.png, image-2023-07-11-19-28-52-851.png, 
> image-2023-07-11-19-35-54-530.png, image-2023-07-11-19-41-18-626.png, 
> image-2023-07-11-19-41-37-105.png
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32804) Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink

2023-09-04 Thread Matt Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761745#comment-17761745
 ] 

Matt Wang edited comment on FLINK-32804 at 9/4/23 9:07 AM:
---

[~renqs] [~pgaref] hi, I have tested the functionality of FLIP-304 here. Overall 
it works, but I also encountered some minor problems:
1. When FailureEnricherUtils loads FailureEnrichers, if 
`jobmanager.failure-enrichers` is configured but an enricher cannot be loaded, 
I think it should throw an exception or print some ERROR logs. I created a Jira 
(https://issues.apache.org/jira/browse/FLINK-33022) to follow up;
2. FLIP-304 introduces the `LabeledGlobalFailureHandler` interface. I think the 
`GlobalFailureHandler` interface can be removed in the future to avoid having 
interfaces with duplicate functions. This can be pursued in future versions;
3. Regarding the user documentation, I left some comments. The user 
documentation is very helpful, and the code integration needs to be driven 
forward as soon as possible; [~pgaref] 
4. Regarding the E2E test in https://issues.apache.org/jira/browse/FLINK-31895, 
I think it needs to be driven forward as soon as possible. A demo example would 
make it even more helpful to users.

Here is my test:
First, I implemented a custom FailureEnricher and FailureEnricherFactory and 
placed them in the plugins directory (a hedged sketch follows below).
!image-2023-09-04-16-58-29-329.png|width=898,height=330!
!image-2023-09-04-17-00-16-413.png|width=871,height=238!

Secondly, I built a test job that throws an exception while running; ideally, 
it will hit the CustomEnricher.
!image-2023-09-04-17-02-41-760.png|width=867,height=433!

Finally, through the REST API, you can see that the corresponding exceptions 
carry the corresponding label information.

!image-2023-09-04-17-03-25-452.png|width=875,height=20!
!image-2023-09-04-17-04-21-682.png|width=886,height=589!
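For readers reproducing this test, a hedged sketch of such an enricher based on 
the FLIP-304 interfaces; the names, and possibly the exact signatures, differ 
from the real code.
{code:java}
import org.apache.flink.core.failure.FailureEnricher;

import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

public class CustomEnricher implements FailureEnricher {

    private static final String KEY = "custom-label";

    @Override
    public Set<String> getOutputKeys() {
        return Collections.singleton(KEY);
    }

    @Override
    public CompletableFuture<Map<String, String>> processFailure(
            Throwable cause, Context context) {
        // Attach a label derived from the failure cause; it then shows up in
        // the exception history returned by the REST API.
        return CompletableFuture.completedFuture(
                Collections.singletonMap(KEY, cause.getClass().getSimpleName()));
    }
}
{code}
Packaged under the plugins directory together with a matching 
FailureEnricherFactory and enabled via `jobmanager.failure-enrichers`, the 
returned map is what appears as the label information in the REST response 
shown above.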


was (Author: wangm92):
[~renqs] [~pgaref] hi, I have tested the functions of LIFP-304 here. Overall, 
the functions are OK, but I have also encountered some minor problems:
1. When FailureEnricherUtils loads FailureEnricher, if 
`jobmanager.failure-enrichers` is configured but not loaded, I think it should 
throw an exception or print some ERROR logs. I created a 
https://issues.apache.org /jira/browse/FLINK-33022 to follow up;
2. FLIP-304 introduces the `LabeledGlobalFailureHandler` interface. I think the 
`GlobalFailureHandler` interface can be removed in the future to avoid the 
existence of interfaces with duplicate functions. This can be promoted in 
future versions;
3. Regarding the user documentation, I left some comments. The user 
documentation is very helpful, and the code integration needs to be promoted as 
soon as possible; [~pgaref] 
4. https://issues.apache.org/jira/browse/FLINK-31895  about E2E test Here I 
think it needs to be promoted as soon as possible. If there is a Demo example, 
it will be more helpful to users.

here is my test:
First, I implemented a custom FailureEnricher and FailureEnricherFactory, and 
placed it in the plugins directory
!image-2023-09-04-16-58-29-329.png|width=898,height=330!
!image-2023-09-04-17-00-16-413.png|width=871,height=238!

Secondly, I build a test job. The test job will throw an exception when 
running. Ideally, it will hit the CustomEnricher.
!image-2023-09-04-17-02-41-760.png|width=867,height=433!

Finally, through restful api, you can see that corresponding exceptions will 
have corresponding label information.

!image-2023-09-04-17-03-25-452.png|width=875,height=20!
!image-2023-09-04-17-04-21-682.png|width=886,height=589!

> Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink
> -
>
> Key: FLINK-32804
> URL: https://issues.apache.org/jira/browse/FLINK-32804
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Matt Wang
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-09-04-16-58-29-329.png, 
> image-2023-09-04-17-00-16-413.png, image-2023-09-04-17-02-41-760.png, 
> image-2023-09-04-17-03-25-452.png, image-2023-09-04-17-04-21-682.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26990) BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime failed

2023-09-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-26990:

Affects Version/s: 1.16.2
   (was: 1.16.0)

> BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>  failed
> ---
>
> Key: FLINK-26990
> URL: https://issues.apache.org/jira/browse/FLINK-26990
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.2
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=914=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=25899]
>  failed due to unexpected behavior in 
> {{BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime}}:
> {code}
> Apr 01 11:42:06 [ERROR] 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.batch.BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>   Time elapsed: 0.11 s  <<< FAILURE!
> Apr 01 11:42:06 java.lang.AssertionError: 
> Apr 01 11:42:06 
> Apr 01 11:42:06 Expected size: 6 but was: 4 in:
> Apr 01 11:42:06 [Record @ (undef) : 
> +I(c1,0,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,0,1970-01-01T00:00,1970-01-01T00:00:10),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,1,1970-01-01T00:00:05,1970-01-01T00:00:15),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,2,1970-01-01T00:00:10,1970-01-01T00:00:20)]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-26990) BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime failed

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761746#comment-17761746
 ] 

Sergey Nuyanzin edited comment on FLINK-26990 at 9/4/23 9:07 AM:
-

1.16: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52953=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=26033


was (Author: sergey nuyanzin):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52953=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=26033

> BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>  failed
> ---
>
> Key: FLINK-26990
> URL: https://issues.apache.org/jira/browse/FLINK-26990
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=914=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=25899]
>  failed due to unexpected behavior in 
> {{BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime}}:
> {code}
> Apr 01 11:42:06 [ERROR] 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.batch.BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>   Time elapsed: 0.11 s  <<< FAILURE!
> Apr 01 11:42:06 java.lang.AssertionError: 
> Apr 01 11:42:06 
> Apr 01 11:42:06 Expected size: 6 but was: 4 in:
> Apr 01 11:42:06 [Record @ (undef) : 
> +I(c1,0,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,0,1970-01-01T00:00,1970-01-01T00:00:10),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,1,1970-01-01T00:00:05,1970-01-01T00:00:15),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,2,1970-01-01T00:00:10,1970-01-01T00:00:20)]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26990) BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime failed

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761746#comment-17761746
 ] 

Sergey Nuyanzin commented on FLINK-26990:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52953=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=26033

> BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>  failed
> ---
>
> Key: FLINK-26990
> URL: https://issues.apache.org/jira/browse/FLINK-26990
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=914=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=25899]
>  failed due to unexpected behavior in 
> {{BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime}}:
> {code}
> Apr 01 11:42:06 [ERROR] 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.batch.BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>   Time elapsed: 0.11 s  <<< FAILURE!
> Apr 01 11:42:06 java.lang.AssertionError: 
> Apr 01 11:42:06 
> Apr 01 11:42:06 Expected size: 6 but was: 4 in:
> Apr 01 11:42:06 [Record @ (undef) : 
> +I(c1,0,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,0,1970-01-01T00:00,1970-01-01T00:00:10),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,1,1970-01-01T00:00:05,1970-01-01T00:00:15),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,2,1970-01-01T00:00:10,1970-01-01T00:00:20)]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32804) Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink

2023-09-04 Thread Matt Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Wang updated FLINK-32804:
--
Attachment: image-2023-09-04-17-04-21-682.png

> Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink
> -
>
> Key: FLINK-32804
> URL: https://issues.apache.org/jira/browse/FLINK-32804
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Matt Wang
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-09-04-16-58-29-329.png, 
> image-2023-09-04-17-00-16-413.png, image-2023-09-04-17-02-41-760.png, 
> image-2023-09-04-17-03-25-452.png, image-2023-09-04-17-04-21-682.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32804) Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink

2023-09-04 Thread Matt Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761745#comment-17761745
 ] 

Matt Wang edited comment on FLINK-32804 at 9/4/23 9:05 AM:
---

[~renqs] [~pgaref] hi, I have tested the functions of FLIP-304 here. Overall, 
the functions are OK, but I have also encountered some minor problems:
1. When FailureEnricherUtils loads a FailureEnricher, if 
`jobmanager.failure-enrichers` is configured but the enricher cannot be 
loaded, I think it should throw an exception or print some ERROR logs. I 
created https://issues.apache.org/jira/browse/FLINK-33022 to follow up;
2. FLIP-304 introduces the `LabeledGlobalFailureHandler` interface. I think 
the `GlobalFailureHandler` interface can be removed in the future, to avoid 
having two interfaces with duplicated functionality. This can be promoted in 
future versions;
3. Regarding the user documentation, I left some comments. The user 
documentation is very helpful, and its integration into the codebase needs 
to be promoted as soon as possible; [~pgaref] 
4. https://issues.apache.org/jira/browse/FLINK-31895, about the E2E test, 
also needs to be promoted as soon as possible. If there is a demo example, 
it will be more helpful to users.

Here is my test:
First, I implemented a custom FailureEnricher and a FailureEnricherFactory, 
and placed them in the plugins directory:
!image-2023-09-04-16-58-29-329.png|width=898,height=330!
!image-2023-09-04-17-00-16-413.png|width=871,height=238!

Second, I built a test job that throws an exception while running; ideally, 
the failure will hit the CustomEnricher (a minimal sketch of such a job 
follows the screenshot).
!image-2023-09-04-17-02-41-760.png|width=867,height=433!
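
The job itself can be as simple as the following (a sketch; any pipeline that 
throws at runtime will do):
{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FailingJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(1, 2, 3)
                .map(i -> {
                    if (i == 2) {
                        // Deliberate failure for the enricher to label.
                        throw new RuntimeException("expected test failure");
                    }
                    return i;
                })
                .print();
        env.execute("failure-enricher-test-job");
    }
}
{code}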

Finally, through the REST API, you can see that the corresponding exceptions 
carry the corresponding label information.

!image-2023-09-04-17-03-25-452.png|width=875,height=20!
!image-2023-09-04-17-04-21-682.png|width=886,height=589!


was (Author: wangm92):
[~renqs] [~pgaref] hi, I have tested the functions of FLIP-304 here. Overall, 
the functions are OK, but I have also encountered some minor problems:
1. When FailureEnricherUtils loads a FailureEnricher, if 
`jobmanager.failure-enrichers` is configured but the enricher cannot be 
loaded, I think it should throw an exception or print some ERROR logs. I 
created [FLINK-33022 When FailureEnricherUtils load FailureEnricherFactory 
failed should throw exception or add some error 
logs](https://issues.apache.org/jira/browse/FLINK-33022) to follow up;
2. FLIP-304 introduces the `LabeledGlobalFailureHandler` interface. I think 
the `GlobalFailureHandler` interface can be removed in the future, to avoid 
having two interfaces with duplicated functionality. This can be promoted in 
future versions;
3. Regarding the user documentation, I left some comments. The user 
documentation is very helpful, and its integration into the codebase needs 
to be promoted as soon as possible; [~pgaref] 
4. https://issues.apache.org/jira/browse/FLINK-31895, about the E2E test, 
also needs to be promoted as soon as possible. If there is a demo example, 
it will be more helpful to users.


Here is my test:
First, I implemented a custom FailureEnricher and a FailureEnricherFactory, 
and placed them in the plugins directory:
!image-2023-09-04-16-58-29-329.png|width=898,height=330!
!image-2023-09-04-17-00-16-413.png|width=871,height=238!

Second, I built a test job that throws an exception while running; ideally, 
the failure will hit the CustomEnricher.
!image-2023-09-04-17-02-41-760.png|width=867,height=433!

Finally, through the REST API, you can see that the corresponding exceptions 
carry the corresponding label information.

!image-2023-09-04-17-03-25-452.png|width=875,height=20!
!image-2023-09-04-17-04-21-682.png|width=886,height=589!

> Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink
> -
>
> Key: FLINK-32804
> URL: https://issues.apache.org/jira/browse/FLINK-32804
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Matt Wang
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-09-04-16-58-29-329.png, 
> image-2023-09-04-17-00-16-413.png, image-2023-09-04-17-02-41-760.png, 
> image-2023-09-04-17-03-25-452.png, image-2023-09-04-17-04-21-682.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32804) Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink

2023-09-04 Thread Matt Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761745#comment-17761745
 ] 

Matt Wang commented on FLINK-32804:
---

[~renqs] [~pgaref] hi, I have tested the functions of FLIP-304 here. Overall, 
the functions are OK, but I have also encountered some minor problems:
1. When FailureEnricherUtils loads a FailureEnricher, if 
`jobmanager.failure-enrichers` is configured but the enricher cannot be 
loaded, I think it should throw an exception or print some ERROR logs. I 
created [FLINK-33022 When FailureEnricherUtils load FailureEnricherFactory 
failed should throw exception or add some error 
logs](https://issues.apache.org/jira/browse/FLINK-33022) to follow up;
2. FLIP-304 introduces the `LabeledGlobalFailureHandler` interface. I think 
the `GlobalFailureHandler` interface can be removed in the future, to avoid 
having two interfaces with duplicated functionality. This can be promoted in 
future versions;
3. Regarding the user documentation, I left some comments. The user 
documentation is very helpful, and its integration into the codebase needs 
to be promoted as soon as possible; [~pgaref] 
4. https://issues.apache.org/jira/browse/FLINK-31895, about the E2E test, 
also needs to be promoted as soon as possible. If there is a demo example, 
it will be more helpful to users.


Here is my test:
First, I implemented a custom FailureEnricher and a FailureEnricherFactory, 
and placed them in the plugins directory:
!image-2023-09-04-16-58-29-329.png|width=898,height=330!
!image-2023-09-04-17-00-16-413.png|width=871,height=238!

Second, I built a test job that throws an exception while running; ideally, 
the failure will hit the CustomEnricher.
!image-2023-09-04-17-02-41-760.png|width=867,height=433!

Finally, through the REST API, you can see that the corresponding exceptions 
carry the corresponding label information (an abbreviated response sketch 
follows the screenshots).

!image-2023-09-04-17-03-25-452.png|width=875,height=20!
!image-2023-09-04-17-04-21-682.png|width=886,height=589!
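
Concretely, the labels appear in the exception history of the job exceptions 
endpoint, roughly as below (an abbreviated sketch: most response fields are 
omitted, and the label key/value are illustrative):
{code}
GET /jobs/<jobId>/exceptions?maxExceptions=10

{
  "exceptionHistory": {
    "entries": [
      {
        "exceptionName": "java.lang.RuntimeException",
        "failureLabels": { "custom-label": "RuntimeException" }
      }
    ]
  }
}
{code}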

> Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink
> -
>
> Key: FLINK-32804
> URL: https://issues.apache.org/jira/browse/FLINK-32804
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Matt Wang
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-09-04-16-58-29-329.png, 
> image-2023-09-04-17-00-16-413.png, image-2023-09-04-17-02-41-760.png, 
> image-2023-09-04-17-03-25-452.png, image-2023-09-04-17-04-21-682.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26515) RetryingExecutorTest. testDiscardOnTimeout failed on azure

2023-09-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-26515:

Affects Version/s: 1.19.0

> RetryingExecutorTest. testDiscardOnTimeout failed on azure
> --
>
> Key: FLINK-26515
> URL: https://issues.apache.org/jira/browse/FLINK-26515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.3, 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
>
> {code:java}
> Mar 06 01:20:29 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 1.941 s <<< FAILURE! - in 
> org.apache.flink.changelog.fs.RetryingExecutorTest
> Mar 06 01:20:29 [ERROR] testTimeout  Time elapsed: 1.934 s  <<< FAILURE!
> Mar 06 01:20:29 java.lang.AssertionError: expected:<500.0> but 
> was:<1922.869766>
> Mar 06 01:20:29   at org.junit.Assert.fail(Assert.java:89)
> Mar 06 01:20:29   at org.junit.Assert.failNotEquals(Assert.java:835)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:555)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:685)
> Mar 06 01:20:29   at 
> org.apache.flink.changelog.fs.RetryingExecutorTest.testTimeout(RetryingExecutorTest.java:145)
> Mar 06 01:20:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 06 01:20:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 06 01:20:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 06 01:20:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Mar 06 01:20:29   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
> Mar 06 01:20:29   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Mar 06 01:20:29   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Mar 06 01:20:29   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Mar 06 01:20:29   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32569=logs=f450c1a5-64b1-5955-e215-49cb1ad5ec88=cc452273-9efa-565d-9db8-ef62a38a0c10=22554



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26515) RetryingExecutorTest. testDiscardOnTimeout failed on azure

2023-09-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761744#comment-17761744
 ] 

Sergey Nuyanzin commented on FLINK-26515:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52952=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=10439

> RetryingExecutorTest. testDiscardOnTimeout failed on azure
> --
>
> Key: FLINK-26515
> URL: https://issues.apache.org/jira/browse/FLINK-26515
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.3, 1.17.0, 1.16.1, 1.18.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
>
> {code:java}
> Mar 06 01:20:29 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 1.941 s <<< FAILURE! - in 
> org.apache.flink.changelog.fs.RetryingExecutorTest
> Mar 06 01:20:29 [ERROR] testTimeout  Time elapsed: 1.934 s  <<< FAILURE!
> Mar 06 01:20:29 java.lang.AssertionError: expected:<500.0> but 
> was:<1922.869766>
> Mar 06 01:20:29   at org.junit.Assert.fail(Assert.java:89)
> Mar 06 01:20:29   at org.junit.Assert.failNotEquals(Assert.java:835)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:555)
> Mar 06 01:20:29   at org.junit.Assert.assertEquals(Assert.java:685)
> Mar 06 01:20:29   at 
> org.apache.flink.changelog.fs.RetryingExecutorTest.testTimeout(RetryingExecutorTest.java:145)
> Mar 06 01:20:29   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 06 01:20:29   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 06 01:20:29   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 06 01:20:29   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 06 01:20:29   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Mar 06 01:20:29   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Mar 06 01:20:29   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 06 01:20:29   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Mar 06 01:20:29   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Mar 06 01:20:29   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
> Mar 06 01:20:29   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Mar 06 01:20:29   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Mar 06 01:20:29   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Mar 06 01:20:29   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Mar 06 01:20:29   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32569=logs=f450c1a5-64b1-5955-e215-49cb1ad5ec88=cc452273-9efa-565d-9db8-ef62a38a0c10=22554



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

