[jira] [Commented] (FLINK-31966) Flink Kubernetes operator lacks TLS support

2023-10-16 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17776026#comment-17776026
 ] 

Gyula Fora commented on FLINK-31966:


[~tagarr] we could introduce a new operator-specific config for the keystore 
location and only use it during REST interaction (setting the original SSL 
options with it), and keep using the other one for submitting the cluster. That 
way the two can point to different paths.
What do you think?
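
A minimal sketch of this idea, assuming a hypothetical operator-specific option 
key (the name below is illustrative, not an existing option; only the 
SecurityOptions constant is real Flink API):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.SecurityOptions;

// Sketch: derive the operator's client-side configuration from the deployment
// configuration, overriding only the REST truststore path.
final class OperatorRestConfig {

    // Hypothetical operator-specific key, for illustration only.
    static final String OPERATOR_TRUSTSTORE_KEY = "kubernetes.operator.rest.truststore";

    static Configuration forRestInteraction(Configuration deployConf) {
        Configuration clientConf = new Configuration(deployConf);
        String operatorTruststore = deployConf.getString(OPERATOR_TRUSTSTORE_KEY, null);
        if (operatorTruststore != null) {
            // Point the REST SSL options at a path that exists inside the operator
            // pod; the flink-conf used for cluster submission stays untouched.
            clientConf.set(SecurityOptions.SSL_REST_TRUSTSTORE, operatorTruststore);
        }
        return clientConf;
    }
}
{code}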

> Flink Kubernetes operator lacks TLS support 
> 
>
> Key: FLINK-31966
> URL: https://issues.apache.org/jira/browse/FLINK-31966
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.4.0
>Reporter: Adrian Vasiliu
>Priority: Major
>
> *Summary*
> The Flink Kubernetes operator lacks support inside the FlinkDeployment 
> operand for configuring Flink with TLS (both one-way and mutual) for the 
> internal communication between jobmanagers and taskmanagers, and for the 
> external REST endpoint. Although a workaround exists to configure the job and 
> task managers, this breaks the operator and renders it unable to reconcile.
> *Additional information*
>  * The Apache Flink operator supports passing through custom flink 
> configuration to be applied to job and task managers.
>  * If you supply SSL-based properties, the operator can no longer speak to 
> the deployed job manager. The operator is reading the flink conf and using it 
> to create a connection to the job manager REST endpoint, but it uses the 
> truststore file paths within flink-conf.yaml, which are unresolvable from the 
> operator. This leaves the operator hanging in a pending state as it cannot 
> complete a reconcile.
> *Proposal*
> Our proposal is to make changes to the operator code. A simple change exists 
> that would be enough to enable anonymous SSL at the REST endpoint, but more 
> invasive changes would be required to enable full mTLS throughout.
> The simple change to enable anonymous SSL would be for the operator to parse 
> flink-conf and podTemplate to identify the Kubernetes resource that contains 
> the certificate from the job manager keystore and use it inside the 
> operator’s trust store.
> In the case of mutual TLS, further changes are required: the operator would 
> need to generate a certificate signed by the same issuing authority as the 
> job manager’s certificates and then use it in a keystore when challenged by 
> that job manager. We propose that the operator becomes responsible for making 
> CertificateSigningRequests to generate certificates for job manager, task 
> manager and operator. The operator can then coordinate deploying the job and 
> task managers with the correct flink-conf and volume mounts. This would also 
> work for anonymous SSL.
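
For the anonymous-SSL part of the proposal, the operator-side step is plain JDK 
keystore handling. A minimal sketch, assuming the job manager's certificate has 
already been read from the Kubernetes resource identified via flink-conf and 
the podTemplate:

{code:java}
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

// Sketch: build a truststore for the operator that trusts the job manager's
// certificate.
static KeyStore trustStoreWithJobManagerCert(InputStream pemCertificate) throws Exception {
    Certificate jmCert =
            CertificateFactory.getInstance("X.509").generateCertificate(pemCertificate);
    KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
    trustStore.load(null, null); // initialize an empty truststore
    trustStore.setCertificateEntry("flink-jobmanager", jmCert);
    return trustStore;
}
{code}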



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] FLINK-32107 Kubernetes test failed because of unable to establish ssl connection to github on AZP [flink]

2023-10-16 Thread via GitHub


flinkbot commented on PR #23528:
URL: https://github.com/apache/flink/pull/23528#issuecomment-1765703486

   
   ## CI report:
   
   * dd121ab31046b877a9a9545166939e5314d2a183 UNKNOWN
   
   
   Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32107) Kubernetes test failed because of unable to establish ssl connection to github on AZP

2023-10-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32107:
---
Labels: pull-request-available starter test-stability  (was: starter 
test-stability)

> Kubernetes test failed because of unable to establish ssl connection to github 
> on AZP
> 
>
> Key: FLINK-32107
> URL: https://issues.apache.org/jira/browse/FLINK-32107
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0, 1.17.1
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available, starter, test-stability
>
> On AZP, the Kubernetes test fails 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49022&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=43ba8ce7-ebbf-57cd-9163-444305d74117&l=4884
> as
> {noformat}
> 2023-05-16T03:54:54.4652330Z May 16 03:54:54 
> 2023-05-16T03:54:54.4652942Z May 16 03:54:54 [FAIL] 'Run Kubernetes test' 
> failed after 5 minutes and 37 seconds! Test exited with exit code 1
> 2023-05-16T03:54:54.4653363Z May 16 03:54:54 
> {noformat}
> in logs
> {noformat}
> 2023-05-16T03:49:29.2350048Z --2023-05-16 03:49:29--  
> https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
> 2023-05-16T03:49:29.2401348Z Resolving github.com (github.com)... 140.82.121.3
> 2023-05-16T03:49:29.2519421Z Connecting to github.com 
> (github.com)|140.82.121.3|:443... connected.
> 2023-05-16T03:49:29.2636971Z Unable to establish SSL connection.
> 2023-05-16T03:49:29.2717345Z tar (child): crictl-v1.24.2-linux-amd64.tar.gz: 
> Cannot open: No such file or directory
> 2023-05-16T03:49:29.2718128Z tar (child): Error is not recoverable: exiting 
> now
> 2023-05-16T03:49:29.2720740Z tar: Child returned status 2
> 2023-05-16T03:49:29.2721169Z tar: Error is not recoverable: exiting now
> {noformat}
> and then 
> {noformat}
> 2023-05-16T03:51:19.7583853Z May 16 03:51:19 Starting minikube ...
> 2023-05-16T03:51:19.8445449Z May 16 03:51:19 * minikube v1.28.0 on Ubuntu 
> 20.04
> 2023-05-16T03:51:19.8459453Z May 16 03:51:19 * Using the none driver based on 
> user configuration
> 2023-05-16T03:51:19.8479317Z May 16 03:51:19 * Starting control plane node 
> minikube in cluster minikube
> 2023-05-16T03:51:19.8500624Z May 16 03:51:19 * Running on localhost (CPUs=2, 
> Memory=6943MB, Disk=85160MB) ...
> 2023-05-16T03:51:19.8773352Z May 16 03:51:19 * minikube 1.30.1 is available! 
> Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
> 2023-05-16T03:51:19.8784220Z May 16 03:51:19 * To disable this notice, run: 
> 'minikube config set WantUpdateNotification false'
> 2023-05-16T03:51:19.8784716Z May 16 03:51:19 
> 2023-05-16T03:51:20.3656967Z May 16 03:51:20 * OS release is Ubuntu 20.04.6 
> LTS
> 2023-05-16T03:52:21.7634993Z May 16 03:52:21 
> 2023-05-16T03:52:21.7654050Z X Exiting due to RUNTIME_ENABLE: Temporary 
> Error: sudo crictl version: exit status 1
> 2023-05-16T03:52:21.7654511Z stdout:
> 2023-05-16T03:52:21.7654700Z 
> 2023-05-16T03:52:21.7654925Z stderr:
> 2023-05-16T03:52:21.7655194Z sudo: crictl: command not found
> 2023-05-16T03:52:21.7655377Z 
> 2023-05-16T03:52:21.7655589Z * 
> 2023-05-16T03:52:21.7676462Z ╭──────────────────────────────────────────────────────────────────────────╮
> 2023-05-16T03:52:21.7677684Z │ * If the above advice does not help, please let us know:                  │
> 2023-05-16T03:52:21.7678141Z │   https://github.com/kubernetes/minikube/issues/new/choose                │
> 2023-05-16T03:52:21.7679208Z │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the   │
> 2023-05-16T03:52:21.7679781Z │   GitHub issue.                                                            │
> 2023-05-16T03:52:21.7680268Z ╰──────────────────────────────────────────────────────────────────────────╯
> 2023-05-16T03:52:21.7680606Z May 16 03:52:21 
> 2023-05-16T03:52:21.8422493Z E0516 03:52:21.841334  243032 root.go:80] failed 
> to log command start to audit: failed to open the audit log: open 
> /home/vsts/.minikube/logs/audit.json: permission denied
> 2023-05-16T03:52:21.8434631Z May 16 03:52:21 
> 2023-05-16T03:52:21.8447806Z X Exiting due to HOST_HOME_PERMISSION: open 
> /home/vsts/.minikube/profiles/minikube/config.json: permission denied
> 2023-05-16T03:52:21.8448801Z * Suggestion: Your user lacks 

[PR] FLINK-32107 Kubernetes test failed because of unable to establish ssl connection to github on AZP [flink]

2023-10-16 Thread via GitHub


victor9309 opened a new pull request, #23528:
URL: https://github.com/apache/flink/pull/23528

   
   
   ## What is the purpose of the change
   
   *(We could make CI fail earlier when download fails.)*
   
   
   ## Brief change log
   
 - *modify common_kubernetes.sh*
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28303] Support LatestOffsetsInitializer to avoid latest-offset strategy lose data [flink-connector-kafka]

2023-10-16 Thread via GitHub


Tan-JiaLiang commented on PR #52:
URL: 
https://github.com/apache/flink-connector-kafka/pull/52#issuecomment-1765621807

   @mas-chen Hi Mason, thanks for your feedback.
   
   I think empty splits may not be the core of this problem. The point is that 
we should not store the marker (i.e. -1) in state.
   
   Currently we handle the empty splits on the TMs. If we handled them on the 
enumerator instead:
   1. We would need to store the UNASSIGNED partitions' offsets in JM state when 
taking a checkpoint/savepoint.
   2. We would need to consider compatibility with historical state.
   3. We would need to reconsider both the bounded-mode and unbounded-mode cases.
   
   I don't want to make too many changes because the current connector still 
seems healthy, although it has some problems. I'd like to fix it rather than 
redesign it.
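
   For illustration, one way the -1 marker could be resolved to concrete offsets 
before being stored. The method and wiring below are a sketch, not the 
connector's actual code; only the Kafka admin API calls are real:

   ```java
   import java.util.Collection;
   import java.util.Map;
   import java.util.stream.Collectors;

   import org.apache.kafka.clients.admin.AdminClient;
   import org.apache.kafka.clients.admin.OffsetSpec;
   import org.apache.kafka.common.TopicPartition;

   // Sketch: replace the -1 "latest" marker with concrete end offsets before
   // they are snapshotted into checkpoint state.
   static Map<TopicPartition, Long> resolveLatestOffsets(
           AdminClient admin, Collection<TopicPartition> partitions) throws Exception {
       Map<TopicPartition, OffsetSpec> request =
               partitions.stream()
                       .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
       return admin.listOffsets(request).all().get().entrySet().stream()
               .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue().offset()));
   }
   ```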


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-3755) Introduce key groups for key-value state to support dynamic scaling

2023-10-16 Thread Jinzhong Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775998#comment-17775998
 ] 

Jinzhong Li commented on FLINK-3755:


[~trohrmann]  [~martijnvisser] 

I want to take a look at the design document [1], but I don't have access to it. 
Could you help copy the document to a public place, or give me access? 

 

[1] 
[https://docs.google.com/document/d/1G1OS1z3xEBOrYD4wSu-LuBCyPUWyFd9l3T9WyssQ63w/edit?usp=sharing]

> Introduce key groups for key-value state to support dynamic scaling
> ---
>
> Key: FLINK-3755
> URL: https://issues.apache.org/jira/browse/FLINK-3755
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
> Fix For: 1.2.0
>
>
> In order to support dynamic scaling, it is necessary to sub-partition the 
> key-value states of each operator. This sub-partitioning, which produces a 
> set of key groups, makes it easy to scale Flink jobs in and out by simply 
> reassigning the different key groups to the new set of subtasks. The idea of 
> key groups is described in this design document [1]. 
> [1] 
> https://docs.google.com/document/d/1G1OS1z3xEBOrYD4wSu-LuBCyPUWyFd9l3T9WyssQ63w/edit?usp=sharing
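
Since the document is not accessible, the mechanism as it eventually shipped can 
be seen in Flink's KeyGroupRangeAssignment. A small usage sketch, illustrating 
the shipped implementation rather than reconstructing the document:

{code:java}
import org.apache.flink.runtime.state.KeyGroupRangeAssignment;

// Each key hashes into one of maxParallelism key groups; a key group is the
// atomic unit of state that moves between subtasks when the job is rescaled.
int maxParallelism = 128;
int parallelism = 4;
String key = "user-42";

int keyGroup = KeyGroupRangeAssignment.assignToKeyGroup(key, maxParallelism);
int operatorIndex =
        KeyGroupRangeAssignment.computeOperatorIndexForKeyGroup(
                maxParallelism, parallelism, keyGroup);
// Rescaling to a new parallelism reassigns whole key groups, never single keys.
{code}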



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33202][runtime] Support switching from batch to stream mode to improve throughput when processing backlog data [flink]

2023-10-16 Thread via GitHub


Sxnan commented on PR #23521:
URL: https://github.com/apache/flink/pull/23521#issuecomment-1765550224

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] FLINK-31180 Fail early when installing minikube and check whether we can retry [flink]

2023-10-16 Thread via GitHub


victor9309 commented on code in PR #23497:
URL: https://github.com/apache/flink/pull/23497#discussion_r1361434258


##
flink-end-to-end-tests/test-scripts/common_kubernetes.sh:
##
@@ -54,6 +54,17 @@ function setup_kubernetes_for_linux {
   chmod +x minikube && sudo mv minikube /usr/bin/minikube
 fi
 
+if ! [ -x "$(command -v minikube)" ]; then

Review Comment:
   Sorry, I misunderstood before; I thought the idea was to switch to a 
different download address.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33184) HybridShuffleITCase fails with exception in resource cleanup of task Map on AZP

2023-10-16 Thread Xuannan Su (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775983#comment-17775983
 ] 

Xuannan Su commented on FLINK-33184:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53752&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=9031

> HybridShuffleITCase fails with exception in resource cleanup of task Map on 
> AZP
> ---
>
> Key: FLINK-33184
> URL: https://issues.apache.org/jira/browse/FLINK-33184
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> This build fails 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53548&view=logs&j=baf26b34-3c6a-54e8-f93f-cf269b32f802&t=8c9d126d-57d2-5a9e-a8c8-ff53f7b35cd9&l=8710
> {noformat} 
> [Map (5/10)#0] ERROR org.apache.flink.runtime.taskmanager.Task 
>[] - FATAL - exception in resource cleanup of task Map (5/10)#0 
> (159f887fbd200ea7cfa4aaeb1127c4ab_0a448493b4782967b150582570326227_4_0)
> .
> java.lang.IllegalStateException: Leaking buffers.
> at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193) 
> ~[flink-core-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.storage.TieredStorageMemoryManagerImpl.release(TieredStorageMemoryManagerImpl.java:236)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_292]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.storage.TieredStorageResourceRegistry.clearResourceFor(TieredStorageResourceRegistry.java:59)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.shuffle.TieredResultPartition.releaseInternal(TieredResultPartition.java:195)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.ResultPartition.release(ResultPartition.java:262)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.ResultPartitionManager.releasePartition(ResultPartitionManager.java:88)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.ResultPartition.fail(ResultPartition.java:284)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.taskmanager.Task.failAllResultPartitions(Task.java:1004)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.taskmanager.Task.releaseResources(Task.java:990) 
> ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:838) 
> [flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562) 
> [flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
> 01:17:22,375 [flink-pekko.actor.default-dispatcher-5] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Task Sink: 
> Unnamed (3/10)#0 is already in state CANCELING
> 01:17:22,375 [Map (5/10)#0] ERROR 
> org.apache.flink.runtime.taskexecutor.TaskExecutor   [] - FATAL - 
> exception in resource cleanup of task Map (5/10)#0 
> (159f887fbd200ea7cfa4aaeb1127c4ab_0a448493b4782967b150582570326227_4_0)
> .
> java.lang.IllegalStateException: Leaking buffers.
> at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:193) 
> ~[flink-core-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.storage.TieredStorageMemoryManagerImpl.release(TieredStorageMemoryManagerImpl.java:236)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_292]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.storage.TieredStorageResourceRegistry.clearResourceFor(TieredStorageResourceRegistry.java:59)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.hybrid.tiered.shuffle.TieredResultPartition.releaseInternal(TieredResultPartition.java:195)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> org.apache.flink.runtime.io.network.partition.ResultPartition.release(ResultPartition.java:262)
>  ~[flink-runtime-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
> at 
> 

Re: [PR] [FLINK-28229][streaming-java] Source API alternatives for StreamExecutionEnvironment#fromCollection() methods [flink]

2023-10-16 Thread via GitHub


afedulov commented on PR #22850:
URL: https://github.com/apache/flink/pull/22850#issuecomment-1765388782

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33286) DataGeneratorSource should support automatic return type detection

2023-10-16 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov reassigned FLINK-33286:
-

Assignee: Alexander Fedulov

> DataGeneratorSource should support automatic return type detection
> --
>
> Key: FLINK-33286
> URL: https://issues.apache.org/jira/browse/FLINK-33286
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.17.1
>Reporter: Alexander Fedulov
>Assignee: Alexander Fedulov
>Priority: Major
>
> Currently, DataGeneratorSource requires both GeneratorFunction and 
> TypeInformation to be passed during its construction. Given that the 
> generator function has a fixed API, it should be possible to reliably extract 
> the OUT type automatically for both lambda generator functions and for 
> objects.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33286) DataGeneratorSource should support automatic return type detection

2023-10-16 Thread Alexander Fedulov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775923#comment-17775923
 ] 

Alexander Fedulov commented on FLINK-33286:
---

The Connectors / DataGen component is missing. If you read this comment and have 
the required permissions, please create it.

> DataGeneratorSource should support automatic return type detection
> --
>
> Key: FLINK-33286
> URL: https://issues.apache.org/jira/browse/FLINK-33286
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.17.1
>Reporter: Alexander Fedulov
>Priority: Major
>
> Currently, DataGeneratorSource requires both GeneratorFunction and 
> TypeInformation to be passed during its construction. Given that the 
> generator function has a fixed API, it should be possible to reliably extract 
> the OUT type automatically for both lambda generator functions and for 
> objects.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33286) DataGeneratorSource should support automatic return type detection

2023-10-16 Thread Alexander Fedulov (Jira)
Alexander Fedulov created FLINK-33286:
-

 Summary: DataGeneratorSource should support automatic return type 
detection
 Key: FLINK-33286
 URL: https://issues.apache.org/jira/browse/FLINK-33286
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.17.1
Reporter: Alexander Fedulov


Currently, DataGeneratorSource requires both GeneratorFunction and 
TypeInformation to be passed during its construction. Given that the 
generator function has a fixed API, it should be possible to reliably extract 
the OUT type automatically for both lambda generator functions and for objects.
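
A small usage sketch of the current API, showing the argument that automatic 
return type detection would make optional:

{code:java}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.connector.datagen.source.DataGeneratorSource;
import org.apache.flink.connector.datagen.source.GeneratorFunction;

GeneratorFunction<Long, String> generator = index -> "Number: " + index;

// Today the OUT type must be supplied explicitly as the third argument:
DataGeneratorSource<String> source =
        new DataGeneratorSource<>(generator, 1_000, Types.STRING);

// With automatic detection, Types.STRING could be derived from the generator
// function itself, as TypeExtractor already does for MapFunction and other
// single-abstract-method interfaces.
{code}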



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [AVRO-3536] [flink-avro] Union type not inheriting type conversions [flink]

2023-10-16 Thread via GitHub


m8719-github commented on PR #23524:
URL: https://github.com/apache/flink/pull/23524#issuecomment-1765180123

   I've reached out to Martin here: 
[FLINK-33238](https://issues.apache.org/jira/browse/FLINK-33238) to see if we 
can consolidate.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31631][FileSystems] shade guava in gs-fs filesystem [flink]

2023-10-16 Thread via GitHub


afedulov commented on PR #23489:
URL: https://github.com/apache/flink/pull/23489#issuecomment-1765166054

   @cnauroth the flink bot does not add every run to the CI report list, so it 
probably did run. The issue is with the licenses/NOTICE file:
   
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53579&view=logs&j=b59e5554-36c7-5512-ab1a-b80b74075fce&t=849d419c-1b8f-52b7-e455-d4bc36ec43ad&l=43214
   
   See https://cwiki.apache.org/confluence/display/FLINK/Licensing


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [AVRO-3536] [flink-avro] Union type not inheriting type conversions [flink]

2023-10-16 Thread via GitHub


m8719-github commented on code in PR #23524:
URL: https://github.com/apache/flink/pull/23524#discussion_r1361173226


##
flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/RegistryAvroDeserializationSchemaTest.java:
##
@@ -85,7 +85,7 @@ void testSpecificRecordReadMoreFieldsThanWereWritten() throws 
IOException {
 + " \"fields\": [\n"
 + " {\"name\": \"name\", \"type\": 
\"string\"}"
 + " ]\n"
-+ "}]");
++ "}");

Review Comment:
   yes, looks like it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33218) First Steps - error when run with zsh

2023-10-16 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov reassigned FLINK-33218:
-

Assignee: Robin Moffatt

> First Steps - error when run with zsh
> -
>
> Key: FLINK-33218
> URL: https://issues.apache.org/jira/browse/FLINK-33218
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Robin Moffatt
>Assignee: Robin Moffatt
>Priority: Not a Priority
>  Labels: pull-request-available
>
> If a user of zsh (the default on MacOS) runs the literal command that's given 
> under "Browsing the project directory" they get an error: 
> {code:java}
> $ cd flink-* && ls -l
> cd: string not in pwd: flink-1.17.1
> {code}
>  
> This is because the behaviour of `cd` differs between zsh and bash, and the 
> glob triggers this. I've written up [an 
> explanation|https://rmoff.net/2023/10/04/cd-string-not-in-pwd/] for those 
> interested.
> IMO the fix is to hardcode the version in the instructions. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33238][Formats/Avro] Upgrade used AVRO version to 1.11.3 [flink]

2023-10-16 Thread via GitHub


afedulov commented on PR #23508:
URL: https://github.com/apache/flink/pull/23508#issuecomment-1765132464

   @MartijnVisser heads up for the test failure fix:
   https://github.com/apache/flink/pull/23524/files


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [AVRO-3536] [flink-avro] Union type not inheriting type conversions [flink]

2023-10-16 Thread via GitHub


afedulov commented on PR #23524:
URL: https://github.com/apache/flink/pull/23524#issuecomment-1765131440

   Actually, there is already an earlier PR with this bump to address 
vulnerabilities:
   https://github.com/apache/flink/pull/23508


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [AVRO-3536] [flink-avro] Union type not inheriting type conversions [flink]

2023-10-16 Thread via GitHub


afedulov commented on code in PR #23524:
URL: https://github.com/apache/flink/pull/23524#discussion_r1361168261


##
flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/RegistryAvroDeserializationSchemaTest.java:
##
@@ -85,7 +85,7 @@ void testSpecificRecordReadMoreFieldsThanWereWritten() throws 
IOException {
 + " \"fields\": [\n"
 + " {\"name\": \"name\", \"type\": 
\"string\"}"
 + " ]\n"
-+ "}]");
++ "}");

Review Comment:
   Is parsing more strict in the newer version? 
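
   A minimal illustration of the suspected difference, assuming (as the diff 
suggests) that the newer parser rejects the stray trailing `]` that older 
versions silently tolerated:

   ```java
   import org.apache.avro.Schema;

   String schemaJson =
           "{\"type\": \"record\", \"name\": \"Simple\", \"fields\": ["
                   + "{\"name\": \"name\", \"type\": \"string\"}]}";

   // Parses on both old and new versions:
   Schema ok = new Schema.Parser().parse(schemaJson);

   // With a stray trailing "]" appended, as in the old test string, the newer
   // parser is assumed to throw a SchemaParseException rather than ignore it:
   // new Schema.Parser().parse(schemaJson + "]");
   ```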



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [AVRO-3536] [flink-avro] Union type not inheriting type conversions [flink]

2023-10-16 Thread via GitHub


afedulov commented on PR #23524:
URL: https://github.com/apache/flink/pull/23524#issuecomment-1765124381

   @m8719-github thanks for the contribution. All Flink PRs need to have a 
corresponding issue in Flink JIRA. Could you please create one with a brief 
description of the issue being addressed?
   https://issues.apache.org/jira/projects/FLINK/issues/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33116][tests] Fix CliClientTest.testCancelExecutionInteractiveMode fails with NPE [flink]

2023-10-16 Thread via GitHub


Jiabao-Sun commented on code in PR #23515:
URL: https://github.com/apache/flink/pull/23515#discussion_r1361042492


##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientTest.java:
##
@@ -300,6 +303,8 @@ void testCancelExecutionInteractiveMode() throws Exception {
 try {
 client.executeInInteractiveMode();
 } catch (Exception ignore) {
+} finally {

Review Comment:
   Thanks @XComp for the review. It makes sense to me.
   I found that `CheckedThread` can solve this problem as well. 
   Would you mind reviewing it again?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-30519) Add e2e tests for operator dynamic config

2023-10-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-30519:
---
Labels: auto-deprioritized-critical pull-request-available stale-assigned 
starter  (was: auto-deprioritized-critical stale-assigned starter)

> Add e2e tests for operator dynamic config
> -
>
> Key: FLINK-30519
> URL: https://issues.apache.org/jira/browse/FLINK-30519
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> stale-assigned, starter
>
> The dynamic config feature is currently not covered by e2e tests and is 
> subject to accidental regressions, as shown in:
> https://issues.apache.org/jira/browse/FLINK-30329
> We should add an e2e test that covers this



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-30519] add dynamic config e2e [flink-kubernetes-operator]

2023-10-16 Thread via GitHub


HuangZhenQiu opened a new pull request, #684:
URL: https://github.com/apache/flink-kubernetes-operator/pull/684

   ## What is the purpose of the change
   Add e2e for dynamic config feature of Flink Operator
   
   ## Brief change log
 - add test_dynamic_config.sh
   
   ## Verifying this change
   Run e2e in the GitHub CI.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., are there any changes to the `CustomResourceDescriptors`: 
(no)
 - Core observer or reconciler logic that is regularly executed: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-30400) Stop bundling connector-base in externalized connectors

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-30400:
---
Fix Version/s: aws-connector-4.2.0
   kafka-3.1.0

> Stop bundling connector-base in externalized connectors
> ---
>
> Key: FLINK-30400
> URL: https://issues.apache.org/jira/browse/FLINK-30400
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Reporter: Chesnay Schepler
>Assignee: Hang Ruan
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.2.0, kafka-3.1.0
>
>
> Check that none of the externalized connectors bundle connector-base; if so 
> remove the bundling and schedule a new minor release.
> Bundling this module is highly problematic w.r.t. binary compatibility, since 
> bundled classes may rely on internal APIs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30400) Stop bundling connector-base in externalized connectors

2023-10-16 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775827#comment-17775827
 ] 

Martijn Visser commented on FLINK-30400:


Fixed in apache/flink-connector-kafka:main 
37cbb83f55e9d6f0b8dc35bb8da867086dfa4d9e

> Stop bundling connector-base in externalized connectors
> ---
>
> Key: FLINK-30400
> URL: https://issues.apache.org/jira/browse/FLINK-30400
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Reporter: Chesnay Schepler
>Assignee: Hang Ruan
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.2.0, kafka-3.1.0
>
>
> Check that none of the externalized connectors bundle connector-base; if so 
> remove the bundling and schedule a new minor release.
> Bundling this module is highly problematic w.r.t. binary compatibility, since 
> bundled classes may rely on internal APIs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-30400][build] Stop bundling flink-connector-base [flink-connector-kafka]

2023-10-16 Thread via GitHub


MartijnVisser merged PR #50:
URL: https://github.com/apache/flink-connector-kafka/pull/50


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-30490) Deleted topic from KafkaSource is still included in subsequent restart from savepoint

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-30490.
--
Resolution: Duplicate

> Deleted topic from KafkaSource is still included in subsequent restart from 
> savepoint
> -
>
> Key: FLINK-30490
> URL: https://issues.apache.org/jira/browse/FLINK-30490
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Hilmi Al Fatih
>Priority: Minor
>
> It seems that KafkaSource does not handle removed topic partitions after 
> restarting from a savepoint. So far this is still marked as a TODO. 
> ([ref|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/enumerator/KafkaSourceEnumerator.java#:~:text=//%20TODO%3A%20Handle%20removed%20partitions.])
> I am wondering if there is a concrete plan for when this will be done, what the 
> technical considerations are, etc. Since I really need this feature, I would 
> really love to contribute under guidance from a maintainer.
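
One possible direction, offered purely as a sketch rather than a committed 
design: on restore, compare the checkpointed partitions against the currently 
existing topics and drop the stale ones, e.g. via the Kafka admin client:

{code:java}
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.TopicPartition;

// Sketch: drop restored partitions whose topic no longer exists.
static Set<TopicPartition> filterRemovedTopics(
        AdminClient admin, Set<TopicPartition> restored) throws Exception {
    Set<String> existingTopics = admin.listTopics().names().get();
    return restored.stream()
            .filter(tp -> existingTopics.contains(tp.topic()))
            .collect(Collectors.toSet());
}
{code}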



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-24045) KafkaTableITCase.testPerPartitionWatermarkKafka fails on azure

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-24045.
--
Resolution: Cannot Reproduce

> KafkaTableITCase.testPerPartitionWatermarkKafka fails on azure
> --
>
> Key: FLINK-24045
> URL: https://issues.apache.org/jira/browse/FLINK-24045
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23008&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7128
> {code}
> Aug 28 23:14:05 [ERROR] Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 88.767 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase
> Aug 28 23:14:05 [ERROR] testPerPartitionWatermarkKafka[format = json]  Time 
> elapsed: 6.571 s  <<< FAILURE!
> Aug 28 23:14:05 java.lang.AssertionError: expected:<[+I[0, 
> partition-0-name-0, 2020-03-08T13:12:11.123], +I[0, partition-0-name-1, 
> 2020-03-08T14:12:12.223], +I[0, partition-0-name-2, 2020-03-08T15:12:13.323], 
> +I[1, partition-1-name-0, 2020-03-09T13:13:11.123], +I[1, partition-1-name-1, 
> 2020-03-09T15:13:11.133], +I[1, partition-1-name-2, 2020-03-09T16:13:11.143], 
> +I[2, partition-2-name-0, 2020-03-10T13:12:14.123], +I[2, partition-2-name-1, 
> 2020-03-10T14:12:14.123], +I[2, partition-2-name-2, 2020-03-10T14:13:14.123], 
> +I[2, partition-2-name-3, 2020-03-10T14:14:14.123], +I[2, partition-2-name-4, 
> 2020-03-10T14:15:14.123], +I[2, partition-2-name-5, 2020-03-10T14:16:14.123], 
> +I[3, partition-3-name-0, 2020-03-11T17:12:11.123], +I[3, partition-3-name-1, 
> 2020-03-11T18:12:11.123]]> but was:<[+I[3, partition-3-name-0, 
> 2020-03-11T17:12:11.123], +I[3, partition-3-name-1, 2020-03-11T18:12:11.123]]>
> Aug 28 23:14:05   at org.junit.Assert.fail(Assert.java:89)
> Aug 28 23:14:05   at org.junit.Assert.failNotEquals(Assert.java:835)
> Aug 28 23:14:05   at org.junit.Assert.assertEquals(Assert.java:120)
> Aug 28 23:14:05   at org.junit.Assert.assertEquals(Assert.java:146)
> Aug 28 23:14:05   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.waitingExpectedResults(KafkaTableTestUtils.java:93)
> Aug 28 23:14:05   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testPerPartitionWatermarkKafka(KafkaTableITCase.java:739)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24045) KafkaTableITCase.testPerPartitionWatermarkKafka fails on azure

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-24045:
---
Fix Version/s: (was: 1.15.5)

> KafkaTableITCase.testPerPartitionWatermarkKafka fails on azure
> --
>
> Key: FLINK-24045
> URL: https://issues.apache.org/jira/browse/FLINK-24045
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23008&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7128
> {code}
> Aug 28 23:14:05 [ERROR] Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 88.767 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase
> Aug 28 23:14:05 [ERROR] testPerPartitionWatermarkKafka[format = json]  Time 
> elapsed: 6.571 s  <<< FAILURE!
> Aug 28 23:14:05 java.lang.AssertionError: expected:<[+I[0, 
> partition-0-name-0, 2020-03-08T13:12:11.123], +I[0, partition-0-name-1, 
> 2020-03-08T14:12:12.223], +I[0, partition-0-name-2, 2020-03-08T15:12:13.323], 
> +I[1, partition-1-name-0, 2020-03-09T13:13:11.123], +I[1, partition-1-name-1, 
> 2020-03-09T15:13:11.133], +I[1, partition-1-name-2, 2020-03-09T16:13:11.143], 
> +I[2, partition-2-name-0, 2020-03-10T13:12:14.123], +I[2, partition-2-name-1, 
> 2020-03-10T14:12:14.123], +I[2, partition-2-name-2, 2020-03-10T14:13:14.123], 
> +I[2, partition-2-name-3, 2020-03-10T14:14:14.123], +I[2, partition-2-name-4, 
> 2020-03-10T14:15:14.123], +I[2, partition-2-name-5, 2020-03-10T14:16:14.123], 
> +I[3, partition-3-name-0, 2020-03-11T17:12:11.123], +I[3, partition-3-name-1, 
> 2020-03-11T18:12:11.123]]> but was:<[+I[3, partition-3-name-0, 
> 2020-03-11T17:12:11.123], +I[3, partition-3-name-1, 2020-03-11T18:12:11.123]]>
> Aug 28 23:14:05   at org.junit.Assert.fail(Assert.java:89)
> Aug 28 23:14:05   at org.junit.Assert.failNotEquals(Assert.java:835)
> Aug 28 23:14:05   at org.junit.Assert.assertEquals(Assert.java:120)
> Aug 28 23:14:05   at org.junit.Assert.assertEquals(Assert.java:146)
> Aug 28 23:14:05   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.waitingExpectedResults(KafkaTableTestUtils.java:93)
> Aug 28 23:14:05   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testPerPartitionWatermarkKafka(KafkaTableITCase.java:739)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-25498) FlinkKafkaProducerITCase. testRestoreToCheckpointAfterExceedingProducersPool failed on azure

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-25498.
--
Resolution: Cannot Reproduce

> FlinkKafkaProducerITCase. testRestoreToCheckpointAfterExceedingProducersPool 
> failed on azure
> 
>
> Key: FLINK-25498
> URL: https://issues.apache.org/jira/browse/FLINK-25498
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.2
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2021-12-31T08:14:15.9176809Z Dec 31 08:14:15 java.lang.AssertionError: 
> Expected elements: <[42]>, but was: elements: <[42, 42, 42, 42]>
> 2021-12-31T08:14:15.9177351Z Dec 31 08:14:15  at 
> org.junit.Assert.fail(Assert.java:89)
> 2021-12-31T08:14:15.9177963Z Dec 31 08:14:15  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertExactlyOnceForTopic(KafkaTestBase.java:331)
> 2021-12-31T08:14:15.9183636Z Dec 31 08:14:15  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testRestoreToCheckpointAfterExceedingProducersPool(FlinkKafkaProducerITCase.java:159)
> 2021-12-31T08:14:15.9184980Z Dec 31 08:14:15  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-12-31T08:14:15.9185908Z Dec 31 08:14:15  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-12-31T08:14:15.9186954Z Dec 31 08:14:15  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-12-31T08:14:15.9187888Z Dec 31 08:14:15  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-12-31T08:14:15.9188819Z Dec 31 08:14:15  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-12-31T08:14:15.9189826Z Dec 31 08:14:15  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-12-31T08:14:15.9191068Z Dec 31 08:14:15  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-12-31T08:14:15.9191923Z Dec 31 08:14:15  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-12-31T08:14:15.9193167Z Dec 31 08:14:15  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-12-31T08:14:15.9193891Z Dec 31 08:14:15  at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnFailureStatement.evaluate(RetryRule.java:135)
> 2021-12-31T08:14:15.9194516Z Dec 31 08:14:15  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-12-31T08:14:15.9195078Z Dec 31 08:14:15  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-12-31T08:14:15.9195616Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-12-31T08:14:15.9196194Z Dec 31 08:14:15  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-12-31T08:14:15.9196762Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-12-31T08:14:15.9197389Z Dec 31 08:14:15  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-12-31T08:14:15.9197988Z Dec 31 08:14:15  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-12-31T08:14:15.9198818Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2021-12-31T08:14:15.9199544Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2021-12-31T08:14:15.9200367Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2021-12-31T08:14:15.9200914Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2021-12-31T08:14:15.9201465Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2021-12-31T08:14:15.9202369Z Dec 31 08:14:15  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-12-31T08:14:15.9203399Z Dec 31 08:14:15  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2021-12-31T08:14:15.9203973Z Dec 31 08:14:15  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2021-12-31T08:14:15.9204504Z Dec 31 08:14:15  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-12-31T08:14:15.9205027Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-12-31T08:14:15.9205549Z Dec 31 08:14:15  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2021-12-31T08:14:15.9206053Z Dec 31 08:14:15  at 
> 

Re: [PR] [FLINK-33097][autoscaler] Initialize the generic autoscaler module and interfaces [flink-kubernetes-operator]

2023-10-16 Thread via GitHub


mateczagany commented on code in PR #677:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/677#discussion_r1360856610


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricCollector.java:
##
@@ -441,15 +442,21 @@ protected Collection<AggregatedMetric> queryAggregatedMetricNames(
 
     protected abstract Map<JobVertexID, Map<String, AggregatedMetric>>
             queryAllAggregatedMetrics(
-                    AbstractFlinkResource<?, ?> cr,
-                    FlinkService flinkService,
-                    Configuration conf,
+                    Context ctx,
                     Map<JobVertexID, Map<String, FlinkMetric>> filteredVertexMetricNames);
 
-    public void cleanup(AbstractFlinkResource<?, ?> cr) {
-        var resourceId = ResourceID.fromResource(cr);
-        histories.remove(resourceId);
-        availableVertexMetricNames.remove(resourceId);
+    public JobDetailsInfo getJobDetailsInfo(
+            JobAutoScalerContext<KEY> context, Duration clientTimeout) throws Exception {

Review Comment:
   That makes sense, I think you've explained it very well, thank you! 
   
   I still think that the naming and the abstraction can be a bit confusing for 
a fresh pair of eyes, but as you've said and as I've said in my original 
comment, this is not related to your PR, so I think it's perfectly fine to 
leave it as is.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33283) core stage: WebFrontendBootstrapTest.testHandlersMustBeLoaded

2023-10-16 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775794#comment-17775794
 ] 

Matthias Pohl commented on FLINK-33283:
---

hm, but it's also not built in the Azure CI runs (see [this 
one|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53758&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=3998]),
 but the tests are succeeding. We also have two GitHub Actions runs (e.g. [this 
one|https://github.com/XComp/flink/actions/runs/6529754573/job/1772854]) 
where {{core}} didn't fail.

> core stage: WebFrontendBootstrapTest.testHandlersMustBeLoaded
> -
>
> Key: FLINK-33283
> URL: https://issues.apache.org/jira/browse/FLINK-33283
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [https://github.com/XComp/flink/actions/runs/6525957588/job/17719339666#step:10:12279]
> {code:java}
>  Error: 20:06:13 20:06:13.132 [ERROR] 
> org.apache.flink.runtime.webmonitor.utils.WebFrontendBootstrapTest.testHandlersMustBeLoaded
>   Time elapsed: 2.298 s  <<< FAILURE!
> 12279Oct 15 20:06:13 org.opentest4j.AssertionFailedError: 
> 12280Oct 15 20:06:13 
> 12281Oct 15 20:06:13 expected: 404
> 12282Oct 15 20:06:13  but was: 200
> 12283Oct 15 20:06:13  at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> 12284Oct 15 20:06:13  at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 12285Oct 15 20:06:13  at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 12286Oct 15 20:06:13  at 
> org.apache.flink.runtime.webmonitor.utils.WebFrontendBootstrapTest.testHandlersMustBeLoaded(WebFrontendBootstrapTest.java:89)
> 12287Oct 15 20:06:13  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33116][tests] Fix CliClientTest.testCancelExecutionInteractiveMode fails with NPE [flink]

2023-10-16 Thread via GitHub


XComp commented on code in PR #23515:
URL: https://github.com/apache/flink/pull/23515#discussion_r1360500601


##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientTest.java:
##
@@ -311,6 +316,8 @@ void testCancelExecutionInteractiveMode() throws Exception {
 terminal.raise(Terminal.Signal.INT);
 CommonTestUtils.waitUntilCondition(
 () -> 
outputStream.toString().contains(CliStrings.MESSAGE_HELP));
+// Prevent NPE when closing the terminal. See FLINK-33116 for more 
information.
+closedLatch.await(30, TimeUnit.SECONDS);

Review Comment:
   We don't want to use timeouts in the tests anymore. The reason is that if we 
run into a deadlock, the CI tools would print a thread dump at the end which 
would reveal more information on where the deadlock happened. Providing a 
timeout in the test would result in an early failure without any thread dump 
([Flink's Coding 
Guidelines](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#avoid-timeouts-in-junit-tests)
 for reference).



##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientTest.java:
##
@@ -300,6 +303,8 @@ void testCancelExecutionInteractiveMode() throws Exception {
 try {
 client.executeInInteractiveMode();
 } catch (Exception ignore) {
+} finally {

Review Comment:
   I'm wondering whether we should use `CompletableFuture` here, instead. That 
way, we could collect any `RuntimeException` (which would be otherwise 
swallowed by the concurrent execution). WDYT? :thinking: 
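
   A small sketch of that variant (reusing the test's `client`; names are 
illustrative): the client call runs asynchronously, and joining the future at 
the end rethrows any exception that a detached thread would swallow:

   ```java
   import java.util.concurrent.CompletableFuture;

   CompletableFuture<Void> clientFuture =
           CompletableFuture.runAsync(
                   () -> {
                       try {
                           client.executeInInteractiveMode();
                       } catch (Exception e) {
                           // completes the future exceptionally instead of the
                           // exception being silently swallowed
                           throw new RuntimeException(e);
                       }
                   });

   // ... raise the signal and wait for the expected output as before ...

   // join() rethrows (wrapped in a CompletionException) whatever was thrown.
   clientFuture.join();
   ```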



##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientTest.java:
##
@@ -290,6 +292,7 @@ void testCancelExecutionInteractiveMode() throws Exception {
 Path historyFilePath = historyTempFile();
 InputStream inputStream = new ByteArrayInputStream("SELECT 1;\nHELP;\n 
".getBytes());
 OutputStream outputStream = new ByteArrayOutputStream(248);
+CountDownLatch closedLatch = new CountDownLatch(1);

Review Comment:
   nit: for this, `OneShotLatch` is available. But `CountDownLatch(1)` does the 
same ¯\_(ツ)_/¯





-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-16 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1360731620


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/scalingpolicy/EnforceParallelismChangeRescalingControllerTest.java:
##
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.adaptive.scalingpolicy;
+
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.scheduler.adaptive.allocator.VertexParallelism;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import java.util.Collections;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Tests for the {@link RescalingController}. */
+@ExtendWith(TestLoggerExtension.class)
+public class EnforceParallelismChangeRescalingControllerTest {

Review Comment:
   done, thanks



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-16 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1360730445


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/scalingpolicy/EnforceParallelismChangeRescalingControllerTest.java:
##
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.adaptive.scalingpolicy;
+
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.scheduler.adaptive.allocator.VertexParallelism;
+import org.apache.flink.util.TestLoggerExtension;
+
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+
+import java.util.Collections;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Tests for the {@link RescalingController}. */
+@ExtendWith(TestLoggerExtension.class)

Review Comment:
   done, thanks



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-16 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1360728058


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/ExecutingTest.java:
##
@@ -594,8 +683,12 @@ public FailureResult howToHandleFailure(Throwable failure) 
{
 }
 
 @Override
-public boolean shouldRescale(ExecutionGraph executionGraph) {
-return canScaleUp.get();
+public boolean shouldRescale(ExecutionGraph executionGraph, boolean 
forceRescale) {
+if (forceRescale) {
+return canScaleUpWithForce;
+} else {
+return canScaleUpWithoutForce;

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-16 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1360721374


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/ExecutingTest.java:
##
@@ -594,8 +683,12 @@ public FailureResult howToHandleFailure(Throwable failure) 
{
 }
 
 @Override
-public boolean shouldRescale(ExecutionGraph executionGraph) {
-return canScaleUp.get();
+public boolean shouldRescale(ExecutionGraph executionGraph, boolean 
forceRescale) {
+if (forceRescale) {
+return canScaleUpWithForce;
+} else {
+return canScaleUpWithoutForce;

Review Comment:
   I was trying to stick to nullable, as in the production code, and also because 
in some tests we don't care about the forcing value. But yes, I can definitely 
move to boolean instead of Boolean to avoid an NPE when unboxing.
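
   For illustration, a minimal sketch of the unboxing hazard discussed above 
(field and variable names are made up):

   ```java
class UnboxingNpeSketch {
    // Nullable flag, mirroring a Boolean field a test may never set.
    static Boolean canScaleUpWithForce;

    public static void main(String[] args) {
        try {
            // Auto-unboxing a null Boolean in a boolean context throws an NPE.
            if (canScaleUpWithForce) {
                System.out.println("would rescale");
            }
        } catch (NullPointerException expected) {
            System.out.println("unboxing a null Boolean -> NPE");
        }
        // A primitive boolean (default false) removes that failure mode.
        boolean canScaleUpWithoutForce = false;
        System.out.println("primitive default: " + canScaleUpWithoutForce);
    }
}
   ```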



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33181] Allow a table definition can be used to read & write data to. [flink-connector-aws]

2023-10-16 Thread via GitHub


vtkhanh commented on PR #105:
URL: 
https://github.com/apache/flink-connector-aws/pull/105#issuecomment-1764483073

   > Actions are failing due to a recent bug, will fix
   
   Merged your fix and updated the PR. Thanks for looking @dannycranmer.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33181] Allow a table definition can be used to read & write data to. [flink-connector-aws]

2023-10-16 Thread via GitHub


vtkhanh commented on PR #105:
URL: 
https://github.com/apache/flink-connector-aws/pull/105#issuecomment-1764481474

   > Hi @vtkhanh, Could you please describe the use case more. It feels like an 
anti-pattern to use the same stream as source and sink. Kinesis Table API 
source and sink implementations were intentionally separated post 1.15.
   
   I don't have a concrete use case in production in which a stream is used for 
both reading and writing within a Flink job. But it can be useful in testing, 
when we want to populate random data into a stream and then read from it, 
using one table definition instead of 2 separate ones.
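
   A minimal sketch of that testing pattern; the connector options follow the 
Kinesis table connector docs, and the stream, region and data values are 
hypothetical:

   ```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

class SameStreamReadWriteSketch {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // One table definition for the stream...
        env.executeSql(
                "CREATE TABLE orders (id STRING, amount DOUBLE) WITH ("
                        + " 'connector' = 'kinesis',"
                        + " 'stream' = 'orders-stream',"
                        + " 'aws.region' = 'us-east-1',"
                        + " 'format' = 'json')");
        // ...used for the write path (populate random/test data)...
        env.executeSql("INSERT INTO orders VALUES ('a', 1.0)");
        // ...and for the read path, without a second table definition.
        env.executeSql("SELECT * FROM orders").print();
    }
}
   ```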


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33217) Flink SQL: UNNEST fails with on LEFT JOIN with NOT NULL type in array

2023-10-16 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775748#comment-17775748
 ] 

Sergey Nuyanzin commented on FLINK-33217:
-

after looking a bit deeper, it looks like it is not a Calcite issue
e.g.

this query

{code:sql}
with Book as (
   SELECT *
 FROM
  (
VALUES
  ROW (array[1, 2])
, ROW (array[11])
, ROW (array[22])
)  Book (authorId)
)  select * from Book b left join unnest(b.authorId) on true
{code}
works ok with Calcite while it fails on Flink with a similar issue.

I think the reason is that in Calcite there is a dedicated {{UNNEST}} operator 
[1] which for some reason is not used in Flink... Instead there is 
LogicalUnnestRule [2] which tries to translate the result of unnest into a table 
scan, and this is the place where the error happens... 
Based on the code of this rule 
{code:scala}
  relNode match {
case rs: HepRelVertex =>
  convert(getRel(rs))

  case f: LogicalProject =>
   ...
  case f: LogicalFilter =>
   ...

  case uc: Uncollect =>
  ...
}
{code}
there could be 4 different types of cases failing with the same or similar 
error when joining with unnest.

Current thoughts about how to fix it: 
1. Move to Calcite's Unnest operator (however, it is still not clear why it was 
not used in the first place...)
2. Since, while parsing, building the AST and converting, Calcite turns {{LEFT 
JOIN}} into something that has a nullable type on the left (which is also the 
cause of the failure), we could adjust the conversion to not do this for {{LEFT 
JOIN UNNEST}}
3. We could try handling this in {{LogicalUnnestRule}} by making types broader, 
e.g. forcing nullability... however, that could lead to wrong final types (e.g. 
nullable instead of not nullable)

[1] 
https://github.com/apache/calcite/blob/bf56743554ea27d250d41db2eb83806f9e626b55/core/src/main/java/org/apache/calcite/sql/SqlUnnestOperator.java#L34
[2] 
https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalUnnestRule.scala#L52

> Flink SQL: UNNEST fails with on LEFT JOIN with NOT NULL type in array
> -
>
> Key: FLINK-33217
> URL: https://issues.apache.org/jira/browse/FLINK-33217
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.3, 1.18.0, 1.19.0
>Reporter: Robert Metzger
>Priority: Major
> Attachments: UnnestNullErrorTest.scala
>
>
> Steps to reproduce:
> Take a column of type 
> {code:java}
> business_data ARRAY
> {code}
> Take this query
> {code:java}
> select bd_name from reproduce_unnest LEFT JOIN 
> UNNEST(reproduce_unnest.business_data) AS exploded_bd(bd_name) ON true
> {code}
> And get this error
> {code:java}
> Caused by: java.lang.AssertionError: Type mismatch:
> rowtype of rel before registration: RecordType(VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE" NOT NULL ARRAY business_data, VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE" bd_name) NOT NULL
> rowtype of rel after registration: RecordType(VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE" NOT NULL ARRAY business_data, VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE" NOT NULL f0) NOT NULL
> Difference:
> bd_name: VARCHAR(2147483647) CHARACTER SET "UTF-16LE" -> VARCHAR(2147483647) 
> CHARACTER SET "UTF-16LE" NOT NULL
>   at org.apache.calcite.util.Litmus$1.fail(Litmus.java:32)
>   at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:2206)
>   at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:275)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1270)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:598)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:613)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.changeTraits(VolcanoPlanner.java:498)
>   at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:315)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:62)
> {code}
> I have implemented a small test case, which fails against Flink 1.15, 1.18 and 
> the latest master branch.
> Workarounds:
> 1. Drop "NOT NULL" in array type
> 2. Drop "LEFT" from "LEFT JOIN".
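
For illustration, a minimal Table API sketch of the failure and of workaround 2; 
this is a sketch under the assumption that an array literal infers a NOT NULL 
element type, and the view name mirrors the report:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UnnestLeftJoinSketch {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // The array literal's element type is assumed to be inferred NOT NULL.
        env.executeSql(
                "CREATE TEMPORARY VIEW reproduce_unnest AS "
                        + "SELECT ARRAY['a', 'b'] AS business_data");

        // Fails with the 'Type mismatch' AssertionError quoted above:
        // env.executeSql("SELECT bd_name FROM reproduce_unnest LEFT JOIN "
        //         + "UNNEST(reproduce_unnest.business_data) AS exploded_bd(bd_name) ON TRUE");

        // Workaround 2: drop LEFT, i.e. use an inner (cross) join instead.
        env.executeSql("SELECT bd_name FROM reproduce_unnest CROSS JOIN "
                        + "UNNEST(reproduce_unnest.business_data) AS exploded_bd(bd_name)")
                .print();
    }
}
{code}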



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33217) Flink SQL: UNNEST fails with on LEFT JOIN with NOT NULL type in array

2023-10-16 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775748#comment-17775748
 ] 

Sergey Nuyanzin edited comment on FLINK-33217 at 10/16/23 1:01 PM:
---

after looking a bit deeper, it looks like it is not a Calcite issue
e.g.

this query

{code:sql}
with Book as (
   SELECT *
 FROM
  (
VALUES
  ROW (array[1, 2])
, ROW (array[11])
, ROW (array[22])
)  Book (authorId)
)  select * from Book b left join unnest(b.authorId) on true
{code}
works ok with Calcite while it fails on Flink with a similar issue.

I think the reason is that in Calcite there is a dedicated {{UNNEST}} operator 
[1] which for some reason is not used in Flink... Instead there is 
LogicalUnnestRule [2] which tries to translate the result of unnest into a table 
scan, and this is the place where the error happens... 
Based on the code of this rule 
{code:scala}
  relNode match {
case rs: HepRelVertex =>
  convert(getRel(rs))

  case f: LogicalProject =>
   ...
  case f: LogicalFilter =>
   ...

  case uc: Uncollect =>
  ...
}
{code}
there could be 4 different types of cases failing with the same or similar 
error when joining with unnest.

Current thoughts about how to fix it: 
1. Move to Calcite's Unnest operator (however, it is still not clear why it was 
not used in the first place...)
2. Since, while parsing, building the AST and converting, Calcite turns {{LEFT 
JOIN}} into something that has a nullable type on the left (which is also the 
cause of the failure), we could adjust the conversion to not do this for {{LEFT 
JOIN UNNEST}}
3. We could try handling this in {{LogicalUnnestRule}} by making types broader, 
e.g. forcing nullability... however, that could lead to wrong final types (e.g. 
nullable instead of not nullable)

[1] 
https://github.com/apache/calcite/blob/bf56743554ea27d250d41db2eb83806f9e626b55/core/src/main/java/org/apache/calcite/sql/SqlUnnestOperator.java#L34
[2] 
https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalUnnestRule.scala#L52


was (Author: sergey nuyanzin):
after looking a bit deeper, it looks like it is not a Calcite issue
e.g.

this query

{code:sql}
with Book as (
   SELECT *
 FROM
  (
VALUES
  ROW (array[1, 2])
, ROW (array[11])
, ROW (array[22])
)  Book (authorId)
)  select * from Book b left join unnest(b.authorId) on true
{codei
works ok with Calcite while it fails on Flink with a similar issue.

I think the reason is that in Calcite there is a dedicated {{UNNEST}} operator 
[1] which for some reason is not used in Flink... Instead there is 
LogicalUnnestRule [2] which tries to translate the result of unnest into a table 
scan, and this is the place where the error happens... 
Based on the code of this rule 
{code:scala}
  relNode match {
case rs: HepRelVertex =>
  convert(getRel(rs))

  case f: LogicalProject =>
   ...
  case f: LogicalFilter =>
   ...

  case uc: Uncollect =>
  ...
}
{code}
there could be 4 different types of cases failing with the same or similar 
error when joining with unnest.

Current thoughts about how to fix it: 
1. Move to Calcite's Unnest operator (however, it is still not clear why it was 
not used in the first place...)
2. Since, while parsing, building the AST and converting, Calcite turns {{LEFT 
JOIN}} into something that has a nullable type on the left (which is also the 
cause of the failure), we could adjust the conversion to not do this for {{LEFT 
JOIN UNNEST}}
3. We could try handling this in {{LogicalUnnestRule}} by making types broader, 
e.g. forcing nullability... however, that could lead to wrong final types (e.g. 
nullable instead of not nullable)

[1] 
https://github.com/apache/calcite/blob/bf56743554ea27d250d41db2eb83806f9e626b55/core/src/main/java/org/apache/calcite/sql/SqlUnnestOperator.java#L34
[2] 
https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalUnnestRule.scala#L52

> Flink SQL: UNNEST fails with on LEFT JOIN with NOT NULL type in array
> -
>
> Key: FLINK-33217
> URL: https://issues.apache.org/jira/browse/FLINK-33217
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.3, 1.18.0, 1.19.0
>Reporter: Robert Metzger
>Priority: Major
> Attachments: UnnestNullErrorTest.scala
>
>
> Steps to reproduce:
> Take a column of type 
> {code:java}
> business_data ARRAY
> {code}
> 

[jira] [Commented] (FLINK-33230) Support Expanding ExecutionGraph to StreamGraph in Web UI

2023-10-16 Thread Yu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775743#comment-17775743
 ] 

Yu Chen commented on FLINK-33230:
-

Hi [~JunRuiLi] ,

Sure, I'll describe the implementation details in a FLIP and start the 
discussion on the dev mailing list.

> Support Expanding ExecutionGraph to StreamGraph in Web UI
> -
>
> Key: FLINK-33230
> URL: https://issues.apache.org/jira/browse/FLINK-33230
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: Yu Chen
>Assignee: Yu Chen
>Priority: Major
> Attachments: image-2023-10-10-18-52-38-252.png
>
>
> Flink Web shows users the ExecutionGraph (i.e., chained operators), but in 
> some cases, we would like to know the structure of the chained operators as 
> well as the necessary metrics such as the inputs and outputs of data, etc.
>  
> Thus, we propose to show the stream graphs and some related metrics, such as 
> numberRecordIn and numberRecordOut, on the Flink Web (as shown in the figure).
>  
> !image-2023-10-10-18-52-38-252.png|width=750,height=263!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-25582) flink sql kafka source cannot special custom parallelism

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-25582.
--
Resolution: Duplicate

> flink sql kafka source cannot special custom parallelism
> 
>
> Key: FLINK-25582
> URL: https://issues.apache.org/jira/browse/FLINK-25582
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.13.0, 1.14.0
>Reporter: venn wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
>
> When using the Flink SQL API, all operators have the same parallelism, but 
> sometimes we want to specify the source/sink parallelism. I noticed the Kafka 
> sink already has the parameter "sink.parallelism" to specify the sink 
> parallelism, but the Kafka source has no such option, so we want the Flink SQL 
> API to have a parameter to specify the Kafka source parallelism like the sink.
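
For illustration, a minimal sketch of the sink-side option that already exists; 
the connector values are hypothetical, and a source-side counterpart is exactly 
what this ticket asks for:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SinkParallelismSketch {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        env.executeSql(
                "CREATE TABLE kafka_sink (msg STRING) WITH ("
                        + " 'connector' = 'kafka',"
                        + " 'topic' = 'out-topic',"
                        + " 'properties.bootstrap.servers' = 'localhost:9092',"
                        + " 'format' = 'json',"
                        // The sink parallelism is already configurable:
                        + " 'sink.parallelism' = '4'"
                        + ")");
    }
}
{code}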



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32210) Error import Pyflink.table.descriptors due to python3.10 version mismatch

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-32210.
--
Resolution: Cannot Reproduce

> Error import Pyflink.table.descriptors due to python3.10 version mismatch
> -
>
> Key: FLINK-32210
> URL: https://issues.apache.org/jira/browse/FLINK-32210
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / Kafka, Connectors / MongoDB
>Affects Versions: 1.13.0
>Reporter: Alireza Omidvar
>Priority: Major
>
> Following the issue [jira] [Created] 
> https://issues.apache.org/jira/browse/FLINK-32207 (FLINK-32206), I decided to 
> install the latest 1.13 version, where the Kafka and Json imports work, which 
> required creating a Python 3.8 env. I faced a few issues:
>  
> 1: CommandNotFoundError: Your shell has not been properly configured to
>  
> use 'conda activate'. To initialize your shell, run $ conda init
>  Currently supported shells are: - bash - fish - tcsh -
> xonsh - zsh - powershell See 'conda init --help' for more information
> and options. IMPORTANT: You may need to close and restart your shell
> after running 'conda init'.
>  
> 2. I tried to initialize, but then faced a new error: 
>  
> No Pyflink module found



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31740) Allow setting boundedness for upsert-kafka SQL connector

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-31740.
--
Resolution: Fixed

> Allow setting boundedness for upsert-kafka SQL connector
> 
>
> Key: FLINK-31740
> URL: https://issues.apache.org/jira/browse/FLINK-31740
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: kafka-3.1.0
>
>
> With FLINK-24456, we added boundedness options for streaming mode to the SQL 
> Kafka Connector. This was mostly just an exposure of existing functionality 
> that was already available at the DataStream API level.
> We should do the same for the SQL Upsert Kafka Connector.
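
For illustration, a sketch of the boundedness option on the regular Kafka SQL 
connector that this ticket proposes to mirror for upsert-kafka; the option 
names are assumed from the Kafka connector documentation, and the connector 
values are hypothetical:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class BoundedKafkaSourceSketch {
    public static void main(String[] args) {
        TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        env.executeSql(
                "CREATE TABLE kafka_source (msg STRING) WITH ("
                        + " 'connector' = 'kafka',"
                        + " 'topic' = 'in-topic',"
                        + " 'properties.bootstrap.servers' = 'localhost:9092',"
                        + " 'format' = 'json',"
                        + " 'scan.startup.mode' = 'earliest-offset',"
                        // Read up to the offsets observed at job start, then stop:
                        + " 'scan.bounded.mode' = 'latest-offset'"
                        + ")");
    }
}
{code}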



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32822) Add connector option to control whether to enable auto-commit of offsets when checkpoints is enabled

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-32822.
--
Resolution: Won't Fix

I'm +1 with [~mason6345] on this; this is by design and we shouldn't 
change it. 

> Add connector option to control whether to enable auto-commit of offsets when 
> checkpoints is enabled
> 
>
> Key: FLINK-32822
> URL: https://issues.apache.org/jira/browse/FLINK-32822
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Zhanghao Chen
>Priority: Major
>
> When checkpointing is enabled, Flink Kafka connector commits the current 
> consuming offset when checkpoints are *completed* although Kafka source does 
> *NOT* rely on committed offsets for fault tolerance. When the checkpoint 
> interval is long, the lag curve will behave in a zig-zag way: the lag will 
> keep increasing, then suddenly drop on a completed checkpoint. This has led to 
> some confusion for users as in 
> [https://stackoverflow.com/questions/76419633/flink-kafka-source-commit-offset-to-error-offset-suddenly-increase-or-decrease]
>  and may also affect external monitoring for setting up alarms (you'll have 
> to set up with a high threshold due to the non-realtime commit of offsets) 
> and autoscaling (the algorithm would need to pay extra effort to distinguish 
> whether the backlog is actually growing or just because the checkpoint is not 
> completed yet).
> Therefore, I think it is worthwhile to add an option to enable auto-commit of 
> offsets when checkpointing is enabled. For DataStream API, it will be adding a 
> configuration method. For Table API, it will be adding a new connector option 
> which wires to the DataStream API configuration underneath.
>  
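
For context, a minimal sketch of the knob that exists today, assuming the 
KafkaSource builder API: offsets are committed only when a checkpoint 
completes, and the `commit.offsets.on.checkpoint` property can merely turn 
that off; there is no interval-based commit option of the kind this ticket 
asked for.

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

public class CommitOnCheckpointSketch {
    public static void main(String[] args) {
        KafkaSource<String> source =
                KafkaSource.<String>builder()
                        .setBootstrapServers("localhost:9092")
                        .setTopics("in-topic")
                        .setGroupId("my-group")
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        // Default is true: commit on checkpoint completion only.
                        .setProperty("commit.offsets.on.checkpoint", "true")
                        .build();
        System.out.println(source);
    }
}
{code}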



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32138) SQLClientSchemaRegistryITCase fails with timeout on AZP

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-32138.
--
Resolution: Cannot Reproduce

> SQLClientSchemaRegistryITCase fails with timeout on AZP
> ---
>
> Key: FLINK-32138
> URL: https://issues.apache.org/jira/browse/FLINK-32138
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.16.2
>Reporter: Sergey Nuyanzin
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49174=logs=6e8542d7-de38-5a33-4aca-458d6c87066d=10d6732b-d79a-5c68-62a5-668516de5313=15753
> {{SQLClientSchemaRegistryITCase}} fails on AZP as
> {noformat}
> May 20 03:41:34 [ERROR] 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase  Time 
> elapsed: 600.05 s  <<< ERROR!
> May 20 03:41:34 org.junit.runners.model.TestTimedOutException: test timed out 
> after 10 minutes
> May 20 03:41:34   at 
> java.base@11.0.19/jdk.internal.misc.Unsafe.park(Native Method)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1039)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:232)
> May 20 03:41:34   at 
> app//com.github.dockerjava.api.async.ResultCallbackTemplate.awaitCompletion(ResultCallbackTemplate.java:91)
> May 20 03:41:34   at 
> app//org.testcontainers.images.TimeLimitedLoggedPullImageResultCallback.awaitCompletion(TimeLimitedLoggedPullImageResultCallback.java:52)
> May 20 03:41:34   at 
> app//org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:89)
> May 20 03:41:34   at 
> app//org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:28)
> May 20 03:41:34   at 
> app//org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)
> May 20 03:41:34   at 
> app//org.testcontainers.utility.LazyFuture.get(LazyFuture.java:39)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1330)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:640)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:335)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.start(GenericContainer.java:326)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1063)
> May 20 03:41:34   at 
> app//org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
> May 20 03:41:34   at 
> app//org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> May 20 03:41:34   at 
> app//org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> May 20 03:41:34   at 
> java.base@11.0.19/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> May 20 03:41:34   at 
> java.base@11.0.19/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32206) ModuleNotFoundError for Pyflink.table.descriptors wheel version mismatch

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-32206.
--
Resolution: Information Provided

> ModuleNotFoundError for Pyflink.table.descriptors wheel version mismatch
> 
>
> Key: FLINK-32206
> URL: https://issues.apache.org/jira/browse/FLINK-32206
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / Kafka, Connectors / MongoDB
>Affects Versions: 1.17.0
>Reporter: Alireza Omidvar
>Priority: Major
> Attachments: image (1).png, image (2).png
>
>
> Gentlemen,
> I have a problem with some apache-flink modules. I am running apache-flink 
> 1.17.0 and writing test code in Colab, and I faced a problem importing the 
> Kafka, Json and FileSystem modules: 
>  
> from pyflink.table.descriptors import Schema, Kafka, Json, Rowtime 
> from pyflink.table.catalog import FileSystem 
>  
> not working for me (python version 3.10) 
>  
> Any help is highly appreciated; the strange thing is that other modules import 
> fine. I checked your GitHub but didn't find these in the official version 
> either, which means the modules are not inside descriptors.py in the newer version. 
>  
> Please see the link below: 
>  
> [https://github.com/aomidvar/scrapper-price-comparison/blob/d8a10f74101bf96974e769813c33b83d7a71f02b/kafkaconsumer1.ipynb]
>  
>  
> I am running a test after producing the stream 
> ([https://github.com/aomidvar/scrapper-price-comparison/blob/main/kafkaproducer1.ipynb])
>  to a Confluent server, and I'd like to run a Flink job, but the above-mentioned 
> modules are not found; see the following Colab links:
>  
> That is probably an easy bug to fix. The only version of apache-flink now 
> working on Colab is 1.17.0. I prefer 3.10, but I installed a virtual Python 3.8 
> env and, comparing the modules, found that the Kafka and Json modules are not 
> in descriptors.py of the default apache-flink 1.17, while they do exist in 
> apache-flink 1.13.
> [https://colab.research.google.com/drive/1aHKv8WA6RA10zTdwdzUubB5K0anEmOws?usp=sharing]
> [https://colab.research.google.com/drive/1eCHJlsb8AjdmJtPc95X3H4btmFVSoCL4?usp=sharing]
>  
> I've got this error for Json, Kafka ...
> ---
>  
> ImportError Traceback (most recent call last)
> in ()
>       1 from pyflink.table import DataTypes
> ----> 2 from pyflink.table.descriptors import Schema, Kafka, Json, Rowtime
>       3 from pyflink.table.catalog import FileSystem
>  
> ImportError: cannot import name 'Kafka' from 'pyflink.table.descriptors' 
> (/usr/local/lib/python3.10/dist-packages/pyflink/table/descriptors.py) 
>  
> ---
>  
>  NOTE: If your import is failing due to a missing package, you can manually 
> install dependencies using either !pip or !apt. To view examples of 
> installing some common dependencies, click the "Open Examples" button below.
>  
>  ---
>  
> I have a doubt whether the current error is related to the version and 
> dependencies; 
>  
> if so, I have to ask the developers: if I set up this Python 3.8 env, can it 
> be solved?
>  
>  
> Thanks for your time,
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31862) KafkaSinkITCase.testStartFromSavepoint is unstable

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-31862.
--
Resolution: Cannot Reproduce

> KafkaSinkITCase.testStartFromSavepoint is unstable
> --
>
> Key: FLINK-31862
> URL: https://issues.apache.org/jira/browse/FLINK-31862
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Sergey Nuyanzin
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48243=logs=9c5a5fe6-2f39-545e-1630-feb3d8d0a1ba=99b23320-1d05-5741-d63f-9e78473da39e=36611
> {noformat}
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: The request timed out.
> Apr 19 01:42:20   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Apr 19 01:42:20   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Apr 19 01:42:20   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
> Apr 19 01:42:20   at 
> org.apache.flink.connector.kafka.sink.testutils.KafkaSinkExternalContext.createTopic(KafkaSinkExternalContext.java:101)
> Apr 19 01:42:20   ... 111 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31342) SQLClientSchemaRegistryITCase timed out when starting the test container

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-31342.
--
Resolution: Cannot Reproduce

> SQLClientSchemaRegistryITCase timed out when starting the test container
> 
>
> Key: FLINK-31342
> URL: https://issues.apache.org/jira/browse/FLINK-31342
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46820=logs=fb37c667-81b7-5c22-dd91-846535e99a97=39a035c3-c65e-573c-fb66-104c66c28912=11767
> {code}
> Mar 06 06:53:47 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 1, 
> Time elapsed: 1,037.927 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Mar 06 06:53:47 [ERROR] 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase  Time 
> elapsed: 1,037.927 s  <<< ERROR!
> Mar 06 06:53:47 org.junit.runners.model.TestTimedOutException: test timed out 
> after 10 minutes
> Mar 06 06:53:47   at sun.misc.Unsafe.park(Native Method)
> Mar 06 06:53:47   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Mar 06 06:53:47   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Mar 06 06:53:47   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Mar 06 06:53:47   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Mar 06 06:53:47   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Mar 06 06:53:47   at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:323)
> Mar 06 06:53:47   at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1063)
> Mar 06 06:53:47   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
> Mar 06 06:53:47   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Mar 06 06:53:47   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Mar 06 06:53:47   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Mar 06 06:53:47   at java.lang.Thread.run(Thread.java:750)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31756) KafkaTableITCase.testStartFromGroupOffsetsNone fails due to UnknownTopicOrPartitionException

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-31756.
--
Resolution: Cannot Reproduce

> KafkaTableITCase.testStartFromGroupOffsetsNone fails due to 
> UnknownTopicOrPartitionException
> 
>
> Key: FLINK-31756
> URL: https://issues.apache.org/jira/browse/FLINK-31756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> The following build fails with {{UnknownTopicOrPartitionException}}
> {noformat}
> Dec 03 01:10:59 Multiple Failures (1 failure)
> Dec 03 01:10:59 -- failure 1 --
> Dec 03 01:10:59 [Any cause is instance of class 'class 
> org.apache.kafka.clients.consumer.NoOffsetForPartitionException'] 
> Dec 03 01:10:59 Expecting any element of:
> Dec 03 01:10:59   [java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
> Dec 03 01:10:59   ...(64 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed),
> Dec 03 01:10:59 java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: The request timed out.
> Dec 03 01:10:59   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> Dec 03 01:10:59   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> Dec 03 01:10:59   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
> Dec 03 01:10:59   ...(67 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed),
> Dec 03 01:10:59 org.apache.kafka.common.errors.TimeoutException: The 
> request timed out.
> Dec 03 01:10:59 ]
> Dec 03 01:10:59 to satisfy the given assertions requirements but none did:
> Dec 03 01:10:59 
> Dec 03 01:10:59 java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
> Dec 03 01:10:59   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
> Dec 03 01:10:59   ...(64 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed)
> Dec 03 01:10:59 error: 
> Dec 03 01:10:59 Expecting actual throwable to be an instance of:
> Dec 03 01:10:59   
> org.apache.kafka.clients.consumer.NoOffsetForPartitionException
> Dec 03 01:10:59 but was:
> Dec 03 01:10:59   java.lang.IllegalStateException: Fail to create topic 
> [groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
> replication factor: 1].
> [...]
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47892=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203=36657



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33171][table planner] Consistent implicit type coercion support for equal and non-equal comparisons for codegen [flink]

2023-10-16 Thread via GitHub


fengjiajie commented on PR #23478:
URL: https://github.com/apache/flink/pull/23478#issuecomment-1764364941

   1. When constructing "non-comparable types," there is no problem with 
`testSqlApi("NULL = f30", "NULL")`, but `testSqlApi("NULL = f30", "NULL")` 
throws an exception:
  ```Caused by: org.apache.calcite.sql.validate.SqlValidatorException: 
Cannot apply '=' to arguments of type ' = 
'. 
Supported form(s): ' = '```
   2. When constructing "non-comparable types," I haven't figured out how to 
cover the branch:
```} else if (isReference(right.resultType)) {``` because the preceding 
```if (isReference(left.resultType)) {``` always takes precedence.
   3. I have written an SQL test for MULTISET in the MiscITCase. I'm not sure 
if this is appropriate.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32896][Runtime/Coordination] Incorrect `Map.computeIfAbsent(..., ...::new)` usage which misinterprets key as initial capacity [flink]

2023-10-16 Thread via GitHub


itonyli commented on PR #23518:
URL: https://github.com/apache/flink/pull/23518#issuecomment-1764353014

   LGTM
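
   For context, a minimal sketch of the bug class named in the PR title 
(illustrative names, not the actual Flink code):

   ```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ComputeIfAbsentPitfall {
    public static void main(String[] args) {
        Map<Integer, List<String>> byId = new HashMap<>();

        // Buggy: ArrayList::new resolves to new ArrayList(int initialCapacity),
        // so the map *key* silently becomes the list's initial capacity.
        byId.computeIfAbsent(1_000_000, ArrayList::new).add("x");

        // Fixed: ignore the key and construct an empty list explicitly.
        byId.computeIfAbsent(2, k -> new ArrayList<>()).add("y");
    }
}
   ```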
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33282) core stage: 137 exit code

2023-10-16 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775734#comment-17775734
 ] 

Matthias Pohl commented on FLINK-33282:
---

yes, could be. I didn't do research on whether it's actually caused by an actual 
Flink issue or whether it's GHA-related.

> core stage: 137 exit code
> -
>
> Key: FLINK-33282
> URL: https://issues.apache.org/jira/browse/FLINK-33282
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> https://github.com/XComp/flink/actions/runs/6529672916/job/17728011881#step:12:8547



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31134) Out of memory error in KafkaSourceE2ECase>SourceTestSuiteBase.testScaleDown

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-31134.
--
Resolution: Cannot Reproduce

> Out of memory error in KafkaSourceE2ECase>SourceTestSuiteBase.testScaleDown
> ---
>
> Key: FLINK-31134
> URL: https://issues.apache.org/jira/browse/FLINK-31134
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.3
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46299=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=160c9ae5-96fd-516e-1c91-deb81f59292a=17378
> {code}
> 2023-02-20T02:35:33.6571979Z Feb 20 02:35:33 [ERROR] 
> org.apache.flink.tests.util.kafka.KafkaSourceE2ECase.testScaleDown(TestEnvironment,
>  DataStreamSourceExternalContext, CheckpointingMode)[1]  Time elapsed: 3.864 
> s  <<< FAILURE!
> 2023-02-20T02:35:33.6601326Z Feb 20 02:35:33 java.lang.AssertionError: 
> 2023-02-20T02:35:33.6604200Z Feb 20 02:35:33 
> 2023-02-20T02:35:33.6609074Z Feb 20 02:35:33 Expecting
> 2023-02-20T02:35:33.6609502Z Feb 20 02:35:33the following stack trace:
> 2023-02-20T02:35:33.6612012Z Feb 20 02:35:33 java.lang.RuntimeException: 
> Failed to fetch next result
> 2023-02-20T02:35:33.6619282Z Feb 20 02:35:33  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2023-02-20T02:35:33.6620147Z Feb 20 02:35:33  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2023-02-20T02:35:33.6623837Z Feb 20 02:35:33  at 
> org.apache.flink.connector.testframe.utils.CollectIteratorAssert.compareWithExactlyOnceSemantic(CollectIteratorAssert.java:116)
> 2023-02-20T02:35:33.6624650Z Feb 20 02:35:33  at 
> org.apache.flink.connector.testframe.utils.CollectIteratorAssert.matchesRecordsFromSource(CollectIteratorAssert.java:71)
> 2023-02-20T02:35:33.7035280Z Feb 20 02:35:33  at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.lambda$checkResultWithSemantic$3(SourceTestSuiteBase.java:739)
> 2023-02-20T02:35:33.7041843Z Feb 20 02:35:33  at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
> 2023-02-20T02:35:33.7048557Z Feb 20 02:35:33  at 
> java.lang.Thread.run(Thread.java:750)
> 2023-02-20T02:35:33.7052749Z Feb 20 02:35:33 Caused by: java.io.IOException: 
> Failed to fetch job execution result
> 2023-02-20T02:35:33.7058304Z Feb 20 02:35:33  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:184)
> 2023-02-20T02:35:33.7059262Z Feb 20 02:35:33  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:121)
> 2023-02-20T02:35:33.7060076Z Feb 20 02:35:33  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> 2023-02-20T02:35:33.7060620Z Feb 20 02:35:33  ... 6 more
> 2023-02-20T02:35:33.7061345Z Feb 20 02:35:33 Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.client.program.ProgramInvocationException: Job failed 
> (JobID: deb052f89a69f00f2c8df55054c62136)
> 2023-02-20T02:35:33.7062049Z Feb 20 02:35:33  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2023-02-20T02:35:33.7066005Z Feb 20 02:35:33  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> 2023-02-20T02:35:33.7072200Z Feb 20 02:35:33  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:182)
> 2023-02-20T02:35:33.7077028Z Feb 20 02:35:33  ... 8 more
> 2023-02-20T02:35:33.7081588Z Feb 20 02:35:33 Caused by: 
> org.apache.flink.client.program.ProgramInvocationException: Job failed 
> (JobID: deb052f89a69f00f2c8df55054c62136)
> 2023-02-20T02:35:33.7088309Z Feb 20 02:35:33  at 
> org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:130)
> 2023-02-20T02:35:33.7093780Z Feb 20 02:35:33  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2023-02-20T02:35:33.7099183Z Feb 20 02:35:33  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2023-02-20T02:35:33.7099867Z Feb 20 02:35:33  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2023-02-20T02:35:33.7103374Z Feb 20 02:35:33  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2023-02-20T02:35:33.7165354Z Feb 20 02:35:33  at 
> 

[jira] [Closed] (FLINK-27695) KafkaTransactionLogITCase failed on azure due to Could not find a valid Docker environment

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-27695.
--
Resolution: Cannot Reproduce

> KafkaTransactionLogITCase failed on azure due to Could not find a valid 
> Docker environment
> --
>
> Key: FLINK-27695
> URL: https://issues.apache.org/jira/browse/FLINK-27695
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Connectors / Kafka
>Affects Versions: 1.14.4
>Reporter: Huang Xingbo
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> 2022-05-19T02:04:23.9190098Z May 19 02:04:23 [ERROR] Tests run: 1, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 7.404 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.sink.KafkaTransactionLogITCase
> 2022-05-19T02:04:23.9191182Z May 19 02:04:23 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaTransactionLogITCase  Time 
> elapsed: 7.404 s  <<< ERROR!
> 2022-05-19T02:04:23.9192250Z May 19 02:04:23 java.lang.IllegalStateException: 
> Could not find a valid Docker environment. Please see logs and check 
> configuration
> 2022-05-19T02:04:23.9193144Z May 19 02:04:23  at 
> org.testcontainers.dockerclient.DockerClientProviderStrategy.lambda$getFirstValidStrategy$4(DockerClientProviderStrategy.java:156)
> 2022-05-19T02:04:23.9194653Z May 19 02:04:23  at 
> java.util.Optional.orElseThrow(Optional.java:290)
> 2022-05-19T02:04:23.9196179Z May 19 02:04:23  at 
> org.testcontainers.dockerclient.DockerClientProviderStrategy.getFirstValidStrategy(DockerClientProviderStrategy.java:148)
> 2022-05-19T02:04:23.9197995Z May 19 02:04:23  at 
> org.testcontainers.DockerClientFactory.getOrInitializeStrategy(DockerClientFactory.java:146)
> 2022-05-19T02:04:23.9199486Z May 19 02:04:23  at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:188)
> 2022-05-19T02:04:23.9200666Z May 19 02:04:23  at 
> org.testcontainers.DockerClientFactory$1.getDockerClient(DockerClientFactory.java:101)
> 2022-05-19T02:04:23.9202109Z May 19 02:04:23  at 
> com.github.dockerjava.api.DockerClientDelegate.authConfig(DockerClientDelegate.java:107)
> 2022-05-19T02:04:23.9203065Z May 19 02:04:23  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:316)
> 2022-05-19T02:04:23.9204641Z May 19 02:04:23  at 
> org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1066)
> 2022-05-19T02:04:23.9205765Z May 19 02:04:23  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
> 2022-05-19T02:04:23.9206568Z May 19 02:04:23  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-05-19T02:04:23.9207497Z May 19 02:04:23  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-05-19T02:04:23.9208246Z May 19 02:04:23  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-05-19T02:04:23.9208887Z May 19 02:04:23  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-05-19T02:04:23.9209691Z May 19 02:04:23  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-05-19T02:04:23.9210490Z May 19 02:04:23  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
> 2022-05-19T02:04:23.9211246Z May 19 02:04:23  at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> 2022-05-19T02:04:23.9211989Z May 19 02:04:23  at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> 2022-05-19T02:04:23.9212682Z May 19 02:04:23  at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> 2022-05-19T02:04:23.9213391Z May 19 02:04:23  at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> 2022-05-19T02:04:23.9214305Z May 19 02:04:23  at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> 2022-05-19T02:04:23.9215044Z May 19 02:04:23  at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> 2022-05-19T02:04:23.9215809Z May 19 02:04:23  at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> 2022-05-19T02:04:23.9216576Z May 19 02:04:23  at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> 2022-05-19T02:04:23.9217523Z May 19 02:04:23  at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> 2022-05-19T02:04:23.9218275Z May 19 02:04:23  at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> 2022-05-19T02:04:23.9219099Z May 19 02:04:23  at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:82)
> 

[jira] [Closed] (FLINK-30262) UpsertKafkaTableITCase failed when starting the container because waiting for a port timed out

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-30262.
--
Resolution: Cannot Reproduce

> UpsertKafkaTableITCase failed when starting the container because waiting for 
> a port timed out
> --
>
> Key: FLINK-30262
> URL: https://issues.apache.org/jira/browse/FLINK-30262
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Connectors / Kafka, Test Infrastructure
>Affects Versions: 1.16.0
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> Dec 01 08:35:00 Caused by: 
> org.testcontainers.containers.ContainerLaunchException: Timed out waiting for 
> container port to open (172.17.0.1 ports: [60109, 60110] should be listening)
> Dec 01 08:35:00   at 
> org.testcontainers.containers.wait.strategy.HostPortWaitStrategy.waitUntilReady(HostPortWaitStrategy.java:90)
> Dec 01 08:35:00   at 
> org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:51)
> Dec 01 08:35:00   at 
> org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:926)
> Dec 01 08:35:00   at 
> org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:480)
> Dec 01 08:35:00   ... 33 more
>  {code}
>  
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43643=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203=37366



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-26566) FlinkKafkaProducerITCase.testFailAndRecoverSameCheckpointTwice failed on azure

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-26566.
--
Resolution: Cannot Reproduce

> FlinkKafkaProducerITCase.testFailAndRecoverSameCheckpointTwice failed on azure
> --
>
> Key: FLINK-26566
> URL: https://issues.apache.org/jira/browse/FLINK-26566
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.3
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> Mar 09 20:11:28 [ERROR] Tests run: 15, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 274.396 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> Mar 09 20:11:28 [ERROR] testFailAndRecoverSameCheckpointTwice  Time elapsed: 
> 74.511 s  <<< FAILURE!
> Mar 09 20:11:28 java.lang.AssertionError: Expected elements: <[42, 43]>, but 
> was: elements: <[42, 43, 42, 43, 42, 43, 42, 43]>
> Mar 09 20:11:28   at org.junit.Assert.fail(Assert.java:89)
> Mar 09 20:11:28   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertExactlyOnceForTopic(KafkaTestBase.java:331)
> Mar 09 20:11:28   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testFailAndRecoverSameCheckpointTwice(FlinkKafkaProducerITCase.java:316)
> Mar 09 20:11:28   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 09 20:11:28   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 09 20:11:28   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 09 20:11:28   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 09 20:11:28   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Mar 09 20:11:28   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 09 20:11:28   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Mar 09 20:11:28   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Mar 09 20:11:28   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 09 20:11:28   at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnFailureStatement.evaluate(RetryRule.java:135)
> Mar 09 20:11:28   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Mar 09 20:11:28   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Mar 09 20:11:28   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 09 20:11:28   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Mar 09 20:11:28   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Mar 09 20:11:28   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Mar 09 20:11:28   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Mar 09 20:11:28   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Mar 09 20:11:28   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Mar 09 20:11:28   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Mar 09 20:11:28   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Mar 09 20:11:28   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Mar 09 20:11:28   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 09 20:11:28   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32778=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=7412



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31341) OutOfMemoryError in Kafka e2e tests

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-31341.
--
Resolution: Cannot Reproduce

> OutOfMemoryError in Kafka e2e tests
> ---
>
> Key: FLINK-31341
> URL: https://issues.apache.org/jira/browse/FLINK-31341
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> We experience a OOM in Kafka e2e tests:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=46820=logs=fb37c667-81b7-5c22-dd91-846535e99a97=39a035c3-c65e-573c-fb66-104c66c28912=11726
> {code}
> Mar 06 06:22:30 [ERROR] Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "ForkJoinPool-1-worker-0"
> Exception in thread "ForkJoinPool-1-worker-0" Mar 06 06:27:30 [ERROR] Tests 
> run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1,094.139 s <<< 
> FAILURE! - in JUnit Jupiter
> Mar 06 06:27:30 [ERROR] JUnit Jupiter.JUnit Jupiter  Time elapsed: 947.463 s  
> <<< ERROR!
> Mar 06 06:27:30 org.junit.platform.commons.JUnitException: TestEngine with ID 
> 'junit-jupiter' failed to execute tests
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:153)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> Mar 06 06:27:30   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:188)
> Mar 06 06:27:30   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
> Mar 06 06:27:30   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:128)
> Mar 06 06:27:30   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
> Mar 06 06:27:30   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
> Mar 06 06:27:30   at 
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
> Mar 06 06:27:30   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
> Mar 06 06:27:30 Caused by: org.junit.platform.commons.JUnitException: Error 
> executing tests for engine junit-jupiter
> Mar 06 06:27:30   at 
> org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:57)
> Mar 06 06:27:30   at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
> Mar 06 06:27:30   ... 16 more
> Mar 06 06:27:30 Caused by: java.util.concurrent.ExecutionException: 
> java.lang.OutOfMemoryError
> Mar 06 06:27:30   at 
> java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1006)
> Mar 06 06:27:30   at 
> org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
> Mar 06 06:27:30   ... 17 more
> Mar 06 06:27:30 Caused by: java.lang.OutOfMemoryError
> Mar 06 06:27:30   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Mar 06 06:27:30   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Mar 06 06:27:30   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Mar 06 06:27:30   at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> Mar 06 06:27:30   at 
> 

[jira] [Closed] (FLINK-28818) KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee failed with AssertionFailedError

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-28818.
--
Resolution: Cannot Reproduce

> KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee failed with 
> AssertionFailedError
> 
>
> Key: FLINK-28818
> URL: https://issues.apache.org/jira/browse/FLINK-28818
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2022-08-04T13:31:52.8933185Z Aug 04 13:31:52 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee
>   Time elapsed: 4.146 s  <<< FAILURE!
> 2022-08-04T13:31:52.8933887Z Aug 04 13:31:52 
> org.opentest4j.AssertionFailedError: 
> 2022-08-04T13:31:52.8936215Z Aug 04 13:31:52 
> 2022-08-04T13:31:52.8936766Z Aug 04 13:31:52 expected: 1664L
> 2022-08-04T13:31:52.8937362Z Aug 04 13:31:52  but was: 1858L
> 2022-08-04T13:31:52.8937955Z Aug 04 13:31:52  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 2022-08-04T13:31:52.8938814Z Aug 04 13:31:52  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 2022-08-04T13:31:52.8939622Z Aug 04 13:31:52  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 2022-08-04T13:31:52.8940522Z Aug 04 13:31:52  at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.writeRecordsToKafka(KafkaSinkITCase.java:399)
> 2022-08-04T13:31:52.8941489Z Aug 04 13:31:52  at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee(KafkaSinkITCase.java:213)
> 2022-08-04T13:31:52.8942418Z Aug 04 13:31:52  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-08-04T13:31:52.8943081Z Aug 04 13:31:52  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-08-04T13:31:52.8944125Z Aug 04 13:31:52  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-08-04T13:31:52.8944851Z Aug 04 13:31:52  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-08-04T13:31:52.8945519Z Aug 04 13:31:52  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-08-04T13:31:52.8946254Z Aug 04 13:31:52  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-08-04T13:31:52.8947307Z Aug 04 13:31:52  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-08-04T13:31:52.8948048Z Aug 04 13:31:52  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-08-04T13:31:52.8948777Z Aug 04 13:31:52  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-08-04T13:31:52.8949511Z Aug 04 13:31:52  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-08-04T13:31:52.8950206Z Aug 04 13:31:52  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-08-04T13:31:52.8950885Z Aug 04 13:31:52  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-08-04T13:31:52.8951656Z Aug 04 13:31:52  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-08-04T13:31:52.8952421Z Aug 04 13:31:52  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-08-04T13:31:52.8953092Z Aug 04 13:31:52  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-08-04T13:31:52.8953802Z Aug 04 13:31:52  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-08-04T13:31:52.8954499Z Aug 04 13:31:52  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-08-04T13:31:52.8955195Z Aug 04 13:31:52  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-08-04T13:31:52.8955939Z Aug 04 13:31:52  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-08-04T13:31:52.8956600Z Aug 04 13:31:52  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-08-04T13:31:52.8957229Z Aug 04 13:31:52  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-08-04T13:31:52.8957880Z Aug 04 13:31:52  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-08-04T13:31:52.8958531Z Aug 04 13:31:52  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-08-04T13:31:52.8959336Z Aug 04 

[jira] [Closed] (FLINK-26533) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee failed on azure due to delete topic timeout

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-26533.
--
Resolution: Cannot Reproduce

> KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee failed on azure due to 
> delete topic timeout
> 
>
> Key: FLINK-26533
> URL: https://issues.apache.org/jira/browse/FLINK-26533
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.3
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> Mar 07 02:42:17 [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 174.077 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase
> Mar 07 02:42:17 [ERROR] testRecoveryWithAtLeastOnceGuarantee  Time elapsed: 
> 63.913 s  <<< ERROR!
> Mar 07 02:42:17 java.util.concurrent.TimeoutException
> Mar 07 02:42:17   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
> Mar 07 02:42:17   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
> Mar 07 02:42:17   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.deleteTestTopic(KafkaSinkITCase.java:429)
> Mar 07 02:42:17   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.tearDown(KafkaSinkITCase.java:160)
> Mar 07 02:42:17   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Mar 07 02:42:17   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 07 02:42:17   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 07 02:42:17   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Mar 07 02:42:17   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Mar 07 02:42:17   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Mar 07 02:42:17   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Mar 07 02:42:17   at 
> org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
> Mar 07 02:42:17   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> Mar 07 02:42:17   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Mar 07 02:42:17   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Mar 07 02:42:17   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Mar 07 02:42:17   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Mar 07 02:42:17   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Mar 07 02:42:17   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Mar 07 02:42:17   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Mar 07 02:42:17   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Mar 07 02:42:17   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Mar 07 02:42:17   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Mar 07 02:42:17   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Mar 07 02:42:17   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Mar 07 02:42:17   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Mar 07 02:42:17   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Mar 07 02:42:17   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Mar 07 02:42:17   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Mar 07 02:42:17   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Mar 07 02:42:17   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Mar 07 02:42:17   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32582=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=918e890f-5ed9-5212-a25e-962628fb4bc5=7345





[jira] [Closed] (FLINK-25966) KafkaSourceITCase failed on azure due to Create test topic : topic1 failed

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-25966.
--
Resolution: Cannot Reproduce

> KafkaSourceITCase failed on azure due to Create test topic : topic1 failed
> --
>
> Key: FLINK-25966
> URL: https://issues.apache.org/jira/browse/FLINK-25966
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.3
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> Feb 07 00:50:05 [ERROR] 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase  Time elapsed: 
> 136.316 s  <<< FAILURE!
> Feb 07 00:50:05 java.lang.AssertionError: Create test topic : topic1 failed, 
> org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.
> Feb 07 00:50:05   at org.junit.Assert.fail(Assert.java:88)
> Feb 07 00:50:05   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
> Feb 07 00:50:05   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
> Feb 07 00:50:05   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:221)
> Feb 07 00:50:05   at 
> org.apache.flink.connector.kafka.source.KafkaSourceTestEnv.createTestTopic(KafkaSourceTestEnv.java:187)
> Feb 07 00:50:05   at 
> org.apache.flink.connector.kafka.source.KafkaSourceTestEnv.setupTopic(KafkaSourceTestEnv.java:224)
> Feb 07 00:50:05   at 
> org.apache.flink.connector.kafka.source.KafkaSourceITCase.setup(KafkaSourceITCase.java:67)
> Feb 07 00:50:05   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 07 00:50:05   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 07 00:50:05   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 07 00:50:05   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 07 00:50:05   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Feb 07 00:50:05   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 07 00:50:05   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Feb 07 00:50:05   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> Feb 07 00:50:05   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Feb 07 00:50:05   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Feb 07 00:50:05   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Feb 07 00:50:05   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Feb 07 00:50:05   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Feb 07 00:50:05   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Feb 07 00:50:05   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Feb 07 00:50:05   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Feb 07 00:50:05   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> Feb 07 00:50:05   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30816=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518=7426





[jira] [Closed] (FLINK-27883) KafkaSubscriberTest failed with NPE

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-27883.
--
Resolution: Cannot Reproduce

> KafkaSubscriberTest failed with NPE
> ---
>
> Key: FLINK-27883
> URL: https://issues.apache.org/jira/browse/FLINK-27883
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2022-06-02T01:42:45.0924799Z Jun 02 01:42:45 [ERROR] Tests run: 1, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 66.787 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest
> 2022-06-02T01:42:45.0926067Z Jun 02 01:42:45 [ERROR] 
> org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest
>   Time elapsed: 66.787 s  <<< FAILURE!
> 2022-06-02T01:42:45.0926867Z Jun 02 01:42:45 
> org.opentest4j.MultipleFailuresError: 
> 2022-06-02T01:42:45.0927608Z Jun 02 01:42:45 Multiple Failures (2 failures)
> 2022-06-02T01:42:45.0928626Z Jun 02 01:42:45  java.lang.AssertionError: 
> Create test topic : topic1 failed, 
> org.apache.kafka.common.errors.TimeoutException: The request timed out.
> 2022-06-02T01:42:45.0929717Z Jun 02 01:42:45  java.lang.NullPointerException: 
> 
> 2022-06-02T01:42:45.0930482Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
> 2022-06-02T01:42:45.0931579Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
> 2022-06-02T01:42:45.0932685Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.execution.RunListenerAdapter.testRunFinished(RunListenerAdapter.java:93)
> 2022-06-02T01:42:45.0933736Z Jun 02 01:42:45  at 
> org.junit.runner.notification.SynchronizedRunListener.testRunFinished(SynchronizedRunListener.java:42)
> 2022-06-02T01:42:45.0934600Z Jun 02 01:42:45  at 
> org.junit.runner.notification.RunNotifier$2.notifyListener(RunNotifier.java:103)
> 2022-06-02T01:42:45.0935437Z Jun 02 01:42:45  at 
> org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
> 2022-06-02T01:42:45.0936147Z Jun 02 01:42:45  at 
> org.junit.runner.notification.RunNotifier.fireTestRunFinished(RunNotifier.java:100)
> 2022-06-02T01:42:45.0936807Z Jun 02 01:42:45  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:138)
> 2022-06-02T01:42:45.0937370Z Jun 02 01:42:45  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-06-02T01:42:45.0938011Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-06-02T01:42:45.0938756Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> 2022-06-02T01:42:45.0939480Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> 2022-06-02T01:42:45.0940304Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
> 2022-06-02T01:42:45.0941136Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
> 2022-06-02T01:42:45.0942000Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
> 2022-06-02T01:42:45.0943056Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
> 2022-06-02T01:42:45.0944171Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
> 2022-06-02T01:42:45.0944945Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> 2022-06-02T01:42:45.0945756Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> 2022-06-02T01:42:45.0946607Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> 2022-06-02T01:42:45.0947618Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> 2022-06-02T01:42:45.0948525Z Jun 02 01:42:45  at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.lambda$execute$1(JUnitPlatformProvider.java:199)
> 2022-06-02T01:42:45.0949401Z Jun 02 01:42:45  at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> 2022-06-02T01:42:45.0950119Z Jun 02 01:42:45  at 
> 

[jira] [Closed] (FLINK-28545) FlinkKafkaProducerITCase.testRestoreToCheckpointAfterExceedingProducersPool failed with TimeoutException: Topic flink-kafka-producer-fail-before-notify not present in metadata after 60000 ms

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-28545.
--
Resolution: Cannot Reproduce

> FlinkKafkaProducerITCase.testRestoreToCheckpointAfterExceedingProducersPool  
> failed with TimeoutException: Topic flink-kafka-producer-fail-before-notify 
> not present in metadata after 60000 ms
> ---
>
> Key: FLINK-28545
> URL: https://issues.apache.org/jira/browse/FLINK-28545
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.1
>Reporter: Huang Xingbo
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2022-07-13T09:49:00.4699245Z Jul 13 09:49:00 [ERROR] 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testRestoreToCheckpointAfterExceedingProducersPool
>   Time elapsed: 243.968 s  <<< ERROR!
> 2022-07-13T09:49:00.4700438Z Jul 13 09:49:00 
> org.apache.kafka.common.errors.TimeoutException: Topic 
> flink-kafka-producer-fail-before-notify not present in metadata after 60000 
> ms.
> 2022-07-13T09:49:00.4702497Z Jul 13 09:49:00 
> 2022-07-13T09:49:00.8707199Z Jul 13 09:49:00 [INFO] 
> 2022-07-13T09:49:00.8708010Z Jul 13 09:49:00 [INFO] Results:
> 2022-07-13T09:49:00.8708577Z Jul 13 09:49:00 [INFO] 
> 2022-07-13T09:49:00.8709134Z Jul 13 09:49:00 [ERROR] Errors: 
> 2022-07-13T09:49:00.8711253Z Jul 13 09:49:00 [ERROR]   
> FlinkKafkaProducerITCase.testRestoreToCheckpointAfterExceedingProducersPool » 
> Timeout
> 2022-07-13T09:49:00.8712471Z Jul 13 09:49:00 [INFO] 
> 2022-07-13T09:49:00.8713163Z Jul 13 09:49:00 [ERROR] Tests run: 209, 
> Failures: 0, Errors: 1, Skipped: 1
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=38129=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c





[jira] [Closed] (FLINK-20928) KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete:189->pollUntil:270 » Timeout

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-20928.
--
Resolution: Cannot Reproduce

> KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete:189->pollUntil:270 
> » Timeout
> ---
>
> Key: FLINK-20928
> URL: https://issues.apache.org/jira/browse/FLINK-20928
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.14.0, 1.12.4, 1.15.0
>Reporter: Robert Metzger
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11861=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5
> {code}
> [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 93.992 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest
> [ERROR] 
> testOffsetCommitOnCheckpointComplete(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.086 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:270)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete(KafkaSourceReaderTest.java:189)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[jira] [Closed] (FLINK-25624) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-25624.
--
Resolution: Cannot Reproduce

> KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee blocked on azure pipeline
> --
>
> Key: FLINK-25624
> URL: https://issues.apache.org/jira/browse/FLINK-25624
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> "main" #1 prio=5 os_prio=0 tid=0x7fda8c00b000 nid=0x21b2 waiting on 
> condition [0x7fda92dd7000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x826165c0> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1989)
>   at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1951)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithAssertion(KafkaSinkITCase.java:335)
>   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee(KafkaSinkITCase.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29285=logs=c5612577-f1f7-5977-6ff6-7432788526f7=ffa8837a-b445-534e-cdf4-db364cf8235d=42106





[jira] [Closed] (FLINK-25449) KafkaSourceITCase.testRedundantParallelism failed on azure

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-25449.
--
Resolution: Cannot Reproduce

> KafkaSourceITCase.testRedundantParallelism failed on azure
> --
>
> Key: FLINK-25449
> URL: https://issues.apache.org/jira/browse/FLINK-25449
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.5
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> Dec 25 00:51:07 Caused by: java.lang.RuntimeException: One or more fetchers 
> have encountered exception
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:223)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:305)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:69)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:66)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:423)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:204)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:684)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.executeInvoke(StreamTask.java:639)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:650)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:623)
> Dec 25 00:51:07   at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779)
> Dec 25 00:51:07   at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
> Dec 25 00:51:07   at java.lang.Thread.run(Thread.java:748)
> Dec 25 00:51:07 Caused by: java.lang.RuntimeException: SplitFetcher thread 0 
> received unexpected exception while polling the records
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:148)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:103)
> Dec 25 00:51:07   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Dec 25 00:51:07   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Dec 25 00:51:07   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Dec 25 00:51:07   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Dec 25 00:51:07   ... 1 more
> Dec 25 00:51:07 Caused by: java.lang.IllegalStateException: Consumer is not 
> subscribed to any topics or assigned any partitions
> Dec 25 00:51:07   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1223)
> Dec 25 00:51:07   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:108)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:56)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:140)
> Dec 25 00:51:07   ... 6 more
>  {code}
>  
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28589=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=6612





[jira] [Commented] (FLINK-33277) Upgrading to actions/checkout@v4 requires GLIBC 2.25, 2.27, or 2.28 to be installed, apparently

2023-10-16 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775732#comment-17775732
 ] 

Matthias Pohl commented on FLINK-33277:
---

None that I know of. I haven't looked into it yet, but just created the issue 
to have this covered as a subtask for now.

> Upgrading to actions/checkout@v4 requires GLIBC 2.25, 2.27, or 2.28 to be 
> installed, apparently
> ---
>
> Key: FLINK-33277
> URL: https://issues.apache.org/jira/browse/FLINK-33277
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Priority: Major
>
> https://github.com/XComp/flink/actions/runs/6525835575/job/17718926296#step:5:64





[jira] [Closed] (FLINK-25451) KafkaEnumeratorTest.testDiscoverPartitionsPeriodically failed on azure

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-25451.
--
Resolution: Cannot Reproduce

> KafkaEnumeratorTest.testDiscoverPartitionsPeriodically failed on azure
> --
>
> Key: FLINK-25451
> URL: https://issues.apache.org/jira/browse/FLINK-25451
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> Dec 25 04:38:34 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 58.393 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.enumerator.KafkaEnumeratorTest
> Dec 25 04:38:34 [ERROR] 
> org.apache.flink.connector.kafka.source.enumerator.KafkaEnumeratorTest.testDiscoverPartitionsPeriodically
>   Time elapsed: 30.01 s  <<< ERROR!
> Dec 25 04:38:34 org.junit.runners.model.TestTimedOutException: test timed out 
> after 30000 milliseconds
> Dec 25 04:38:34   at java.lang.Object.wait(Native Method)
> Dec 25 04:38:34   at java.lang.Object.wait(Object.java:502)
> Dec 25 04:38:34   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
> Dec 25 04:38:34   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> Dec 25 04:38:34   at 
> org.apache.flink.connector.kafka.source.enumerator.KafkaEnumeratorTest.testDiscoverPartitionsPeriodically(KafkaEnumeratorTest.java:221)
> Dec 25 04:38:34   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 25 04:38:34   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 25 04:38:34   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 25 04:38:34   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 25 04:38:34   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 25 04:38:34   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 25 04:38:34   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 25 04:38:34   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 25 04:38:34   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Dec 25 04:38:34   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Dec 25 04:38:34   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Dec 25 04:38:34   at java.lang.Thread.run(Thread.java:748)
> Dec 25 04:38:34 
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28590=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=6549





[jira] [Closed] (FLINK-25456) FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-25456.
--
Resolution: Cannot Reproduce

> FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint
> ---
>
> Key: FLINK-25456
> URL: https://issues.apache.org/jira/browse/FLINK-25456
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.2, 1.15.0
>Reporter: Till Rohrmann
>Priority: Not a Priority
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> auto-deprioritized-minor, test-stability
>
> The test {{FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint}} 
> fails with
> {code}
> 2021-12-27T02:54:54.8464375Z Dec 27 02:54:54 [ERROR] Tests run: 15, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 285.279 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2021-12-27T02:54:54.8465354Z Dec 27 02:54:54 [ERROR] 
> testScaleDownBeforeFirstCheckpoint  Time elapsed: 85.514 s  <<< FAILURE!
> 2021-12-27T02:54:54.8468827Z Dec 27 02:54:54 java.lang.AssertionError: 
> Detected producer leak. Thread name: kafka-producer-network-thread | 
> producer-MockTask-002a002c-18
> 2021-12-27T02:54:54.8469779Z Dec 27 02:54:54  at 
> org.junit.Assert.fail(Assert.java:89)
> 2021-12-27T02:54:54.8470485Z Dec 27 02:54:54  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.checkProducerLeak(FlinkKafkaProducerITCase.java:847)
> 2021-12-27T02:54:54.8471842Z Dec 27 02:54:54  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint(FlinkKafkaProducerITCase.java:381)
> 2021-12-27T02:54:54.8472724Z Dec 27 02:54:54  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-12-27T02:54:54.8473509Z Dec 27 02:54:54  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-12-27T02:54:54.8474704Z Dec 27 02:54:54  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-12-27T02:54:54.8475523Z Dec 27 02:54:54  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2021-12-27T02:54:54.8476258Z Dec 27 02:54:54  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-12-27T02:54:54.8476949Z Dec 27 02:54:54  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-12-27T02:54:54.8477632Z Dec 27 02:54:54  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-12-27T02:54:54.8478451Z Dec 27 02:54:54  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-12-27T02:54:54.8479282Z Dec 27 02:54:54  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-12-27T02:54:54.8479976Z Dec 27 02:54:54  at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnFailureStatement.evaluate(RetryRule.java:135)
> 2021-12-27T02:54:54.8480696Z Dec 27 02:54:54  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-12-27T02:54:54.8481410Z Dec 27 02:54:54  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-12-27T02:54:54.8482009Z Dec 27 02:54:54  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-12-27T02:54:54.8482636Z Dec 27 02:54:54  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-12-27T02:54:54.8483267Z Dec 27 02:54:54  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-12-27T02:54:54.8483900Z Dec 27 02:54:54  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-12-27T02:54:54.8484574Z Dec 27 02:54:54  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-12-27T02:54:54.8485214Z Dec 27 02:54:54  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2021-12-27T02:54:54.8485838Z Dec 27 02:54:54  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2021-12-27T02:54:54.8486441Z Dec 27 02:54:54  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2021-12-27T02:54:54.8487037Z Dec 27 02:54:54  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2021-12-27T02:54:54.8487620Z Dec 27 02:54:54  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2021-12-27T02:54:54.8488391Z Dec 27 02:54:54  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-12-27T02:54:54.8489050Z Dec 27 02:54:54  at 
> 

[jira] [Closed] (FLINK-22194) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to commit timeout

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-22194.
--
Resolution: Cannot Reproduce

> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to 
> commit timeout
> --
>
> Key: FLINK-22194
> URL: https://issues.apache.org/jira/browse/FLINK-22194
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.14.0, 1.12.4, 1.15.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16308=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=e8fcc430-213e-5cce-59d4-6942acf09121=6535
> {code:java}
> [ERROR] 
> testCommitOffsetsWithoutAliveFetchers(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.123 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:285)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers(KafkaSourceReaderTest.java:129)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> {code}





[jira] [Closed] (FLINK-16634) The PartitionDiscoverer in FlinkKafkaConsumer should not use the user provided client.id.

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-16634.
--
  Assignee: (was: Fangliang Liu)
Resolution: Won't Fix

Given that FlinkKafkaConsumer is deprecated in favor of KafkaSource, marking this 
as Won't Fix. In case this is still a problem with KafkaSource, ping this ticket 
and we can re-open it.

> The PartitionDiscoverer in FlinkKafkaConsumer should not use the user 
> provided client.id.
> -
>
> Key: FLINK-16634
> URL: https://issues.apache.org/jira/browse/FLINK-16634
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-assigned
>
> The {{PartitionDiscoverer}} creates a {{KafkaConsumer}} using the client.id 
> from the user-provided properties. This may cause its MBean to collide with 
> the one registered by the fetching {{KafkaConsumer}}. The 
> {{PartitionDiscoverer}} should use a unique client.id instead, such as 
> "PartitionDiscoverer-RANDOM_LONG".
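A minimal sketch of the proposed fix, assuming a plain java.util.Properties copy is acceptable; the class name, helper name, and the "PartitionDiscoverer-" prefix are illustrative, not the actual Flink implementation:

{code:java}
import java.util.Properties;
import java.util.concurrent.ThreadLocalRandom;

public final class DiscovererClientIdSketch {

    // Copy the user-provided properties and override client.id with a unique
    // value, so the discovery consumer's JMX MBean cannot collide with the
    // MBean registered by the fetching consumer.
    static Properties withUniqueClientId(Properties userProps) {
        Properties props = new Properties();
        props.putAll(userProps);
        props.setProperty("client.id",
                "PartitionDiscoverer-" + ThreadLocalRandom.current().nextLong());
        return props;
    }

    private DiscovererClientIdSketch() {}
}
{code}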





[jira] [Closed] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-20777.
--
Resolution: Information Provided

Superseded by 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-288%3A+Enable+Dynamic+Partition+Discovery+by+Default+in+Kafka+Source

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> The default value of the property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass in this property explicitly. 
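Until the defaults are aligned, periodic discovery has to be enabled explicitly. A minimal sketch using the public KafkaSource builder; the broker address, topic, and group id are placeholders:

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class PartitionDiscoveryExample {
    public static void main(String[] args) {
        // Passing the property explicitly enables periodic partition discovery;
        // otherwise the builder falls back to -1 (discovery disabled) rather
        // than the documented 30 s default.
        KafkaSource<String> source =
                KafkaSource.<String>builder()
                        .setBootstrapServers("localhost:9092") // placeholder broker
                        .setTopics("topic1")                   // placeholder topic
                        .setGroupId("my-group")                // placeholder group id
                        .setStartingOffsets(OffsetsInitializer.earliest())
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        .setProperty("partition.discovery.interval.ms", "30000")
                        .build();
    }
}
{code}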





[jira] [Updated] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2023-10-16 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-20777:
---
Fix Version/s: (was: 1.12.8)

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> The default value of the property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass in this property explicitly. 





[jira] [Updated] (FLINK-33171) Consistent implicit type coercion support for equal and non-equal comparisons for codegen

2023-10-16 Thread Feng Jiajie (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Jiajie updated FLINK-33171:

Summary: Consistent implicit type coercion support for equal and non-equal 
comparisons for codegen  (was: Consistent implicit type coercion support for 
equal and non-equi comparison for codegen)

> Consistent implicit type coercion support for equal and non-equal comparisons 
> for codegen
> -
>
> Key: FLINK-33171
> URL: https://issues.apache.org/jira/browse/FLINK-33171
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0, 1.17.1
>Reporter: Feng Jiajie
>Assignee: Feng Jiajie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.2, 1.18.1
>
>
> When executing the following SQL:
> {code:sql}
> SELECT
> time1,
> time1 = '2023-09-30 18:22:42.123' AS eq1,
> NOT (time1 = '2023-09-30 18:22:42.123') AS notEq1
> FROM table1;
> {code}
> the result is as follows:
> {code:java}
> +----+-------------------------+--------+--------+
> | op |                   time1 |    eq1 | notEq1 |
> +----+-------------------------+--------+--------+
> | +I | 2023-09-30 18:22:42.123 |   TRUE |   TRUE |
> | +I | 2023-09-30 18:22:42.124 |  FALSE |   TRUE |
> +----+-------------------------+--------+--------+
> 2 rows in set
> {code}
> The "notEq1" in the first row should be FALSE.
> Here is the reproducing code:
> {code:java}
> import org.apache.flink.api.common.functions.RichMapFunction;
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.api.common.typeinfo.Types;
> import org.apache.flink.api.java.typeutils.RowTypeInfo;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.streaming.api.datastream.DataStreamSource;
> import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.api.Schema;
> import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
> import org.apache.flink.types.Row;
> public class TimePointNotEqualTest {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env =
> StreamExecutionEnvironment.getExecutionEnvironment(new 
> Configuration());
> env.setParallelism(1);
> DataStreamSource<Long> longDataStreamSource = env.fromSequence(0, 1);
> RowTypeInfo rowTypeInfo =
> new RowTypeInfo(new TypeInformation<?>[] {Types.LONG}, new 
> String[] {"time1"});
> SingleOutputStreamOperator<Row> map =
> longDataStreamSource.map(new RichMapFunction<Long, Row>() {
> @Override
> public Row map(Long value) {
> Row row = new Row(1);
> row.setField(0, 1696069362123L + value);
> return row;
> }
> }, rowTypeInfo);
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> Schema schema = Schema.newBuilder()
> .column("time1", 
> DataTypes.TIMESTAMP_LTZ(3).bridgedTo(Long.class))
> .build();
> tableEnv.createTemporaryView("table1", map, schema);
> tableEnv.sqlQuery("SELECT "
> + "time1," // 2023-09-30 18:22:42.123
> + "time1 = '2023-09-30 18:22:42.123' AS eq1," // expect TRUE
> + "NOT (time1 = '2023-09-30 18:22:42.123') AS notEq1 " // 
> expect FALSE but TRUE
> + "FROM table1").execute().print();
> }
> }
> {code}
> I would like to attempt to fix this issue. If possible, please assign the 
> issue to me. Thank you.
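For context, a hedged extension of the repro above, assuming {{<>}} goes through the same implicit string-to-timestamp coercion as {{=}}: with consistent codegen, the explicit {{<>}} column and the {{NOT (=)}} column must agree on every row.

{code:java}
// Appended to the repro's main(): with consistent coercion, notEq1 and neq2
// below should be identical, and both should be the negation of eq1.
tableEnv.sqlQuery("SELECT "
        + "time1 = '2023-09-30 18:22:42.123' AS eq1, "
        + "NOT (time1 = '2023-09-30 18:22:42.123') AS notEq1, "
        + "time1 <> '2023-09-30 18:22:42.123' AS neq2 "
        + "FROM table1").execute().print();
{code}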





[jira] [Updated] (FLINK-33171) Consistent implicit type coercion support for equal and non-equi comparison for codegen

2023-10-16 Thread Feng Jiajie (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Jiajie updated FLINK-33171:

Summary: Consistent implicit type coercion support for equal and non-equi 
comparison for codegen  (was: Table SQL support Not Equal for TimePoint type 
and TimeString)

> Consistent implicit type coercion support for equal and non-equi comparison 
> for codegen
> ---
>
> Key: FLINK-33171
> URL: https://issues.apache.org/jira/browse/FLINK-33171
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0, 1.17.1
>Reporter: Feng Jiajie
>Assignee: Feng Jiajie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.2, 1.18.1
>
>
> When executing the following SQL:
> {code:sql}
> SELECT
> time1,
> time1 = '2023-09-30 18:22:42.123' AS eq1,
> NOT (time1 = '2023-09-30 18:22:42.123') AS notEq1
> FROM table1;
> {code}
> the result is as follows:
> {code:java}
> +----+-------------------------+--------+--------+
> | op |                   time1 |    eq1 | notEq1 |
> +----+-------------------------+--------+--------+
> | +I | 2023-09-30 18:22:42.123 |   TRUE |   TRUE |
> | +I | 2023-09-30 18:22:42.124 |  FALSE |   TRUE |
> +----+-------------------------+--------+--------+
> 2 rows in set
> {code}
> The "notEq1" in the first row should be FALSE.
> Here is the reproducing code:
> {code:java}
> import org.apache.flink.api.common.functions.RichMapFunction;
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.api.common.typeinfo.Types;
> import org.apache.flink.api.java.typeutils.RowTypeInfo;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.streaming.api.datastream.DataStreamSource;
> import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.api.Schema;
> import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
> import org.apache.flink.types.Row;
> public class TimePointNotEqualTest {
> public static void main(String[] args) throws Exception {
> StreamExecutionEnvironment env =
> StreamExecutionEnvironment.getExecutionEnvironment(new 
> Configuration());
> env.setParallelism(1);
> DataStreamSource<Long> longDataStreamSource = env.fromSequence(0, 1);
> RowTypeInfo rowTypeInfo =
> new RowTypeInfo(new TypeInformation<?>[] {Types.LONG}, new 
> String[] {"time1"});
> SingleOutputStreamOperator<Row> map =
> longDataStreamSource.map(new RichMapFunction<Long, Row>() {
> @Override
> public Row map(Long value) {
> Row row = new Row(1);
> row.setField(0, 1696069362123L + value);
> return row;
> }
> }, rowTypeInfo);
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> Schema schema = Schema.newBuilder()
> .column("time1", 
> DataTypes.TIMESTAMP_LTZ(3).bridgedTo(Long.class))
> .build();
> tableEnv.createTemporaryView("table1", map, schema);
> tableEnv.sqlQuery("SELECT "
> + "time1," // 2023-09-30 18:22:42.123
> + "time1 = '2023-09-30 18:22:42.123' AS eq1," // expect TRUE
> + "NOT (time1 = '2023-09-30 18:22:42.123') AS notEq1 " // 
> expect FALSE but TRUE
> + "FROM table1").execute().print();
> }
> }
> {code}
> I would like to attempt to fix this issue. If possible, please assign the 
> issue to me. Thank you.
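A minimal workaround sketch (not from the ticket; it assumes the explicit `TO_TIMESTAMP_LTZ` conversion is acceptable here) that sidesteps the implicit string coercion by comparing timestamp values directly:
{code:sql}
-- Workaround sketch: build the TIMESTAMP_LTZ explicitly so that both the
-- equality and its negation compare timestamps instead of coerced strings.
SELECT
    time1,
    time1 = TO_TIMESTAMP_LTZ(1696069362123, 3) AS eq1,
    NOT (time1 = TO_TIMESTAMP_LTZ(1696069362123, 3)) AS notEq1
FROM table1;
{code}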



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33236) Deprecate the unused high-availability.zookeeper.path.running-registry option

2023-10-16 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-33236.
---
Resolution: Fixed

master: 935188f06a8a94b3a2c991a8aa6e48b2bfaeee70

> Deprecate the unused high-availability.zookeeper.path.running-registry option
> -
>
> Key: FLINK-33236
> URL: https://issues.apache.org/jira/browse/FLINK-33236
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Configuration
>Affects Versions: 1.18.0
>Reporter: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The running registry subcomponent of Flink HA has been removed in FLINK-25430 
> and the "high-availability.zookeeper.path.running-registry" option is of no 
> use after that. We should deprecate the option and regenerate the config doc 
> to avoid user confusion.
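A minimal sketch of how such a ConfigOption is typically marked deprecated (the field name, default value and description are illustrative, not the actual commit):
{code:java}
// Sketch only: keep the key parseable for existing configs, but flag it as
// deprecated so the regenerated config docs warn users that it has no effect.
@Deprecated
public static final ConfigOption<String> ZOOKEEPER_RUNNING_REGISTRY_PATH =
        ConfigOptions.key("high-availability.zookeeper.path.running-registry")
                .stringType()
                .defaultValue("/running_job_registry/")
                .withDescription(
                        "(Deprecated) The running registry was removed in FLINK-25430; setting this option has no effect.");
{code}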



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33236][config] Deprecate the unused high-availability.zookeeper.path.running-registry option [flink]

2023-10-16 Thread via GitHub


XComp merged PR #23506:
URL: https://github.com/apache/flink/pull/23506


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33284) core stage: HistoryServerStaticFileServerHandlerTest.testRespondWithFile(Path)

2023-10-16 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775706#comment-17775706
 ] 

Chesnay Schepler commented on FLINK-33284:
--

Same cause as FLINK-32283.

> core stage: HistoryServerStaticFileServerHandlerTest.testRespondWithFile(Path)
> --
>
> Key: FLINK-33284
> URL: https://issues.apache.org/jira/browse/FLINK-33284
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> https://github.com/XComp/flink/actions/runs/6525957588/job/17719339666#step:10:12209
> {code}
> Error: 20:06:13 20:06:13.081 [ERROR] 
> org.apache.flink.runtime.webmonitor.history.HistoryServerStaticFileServerHandlerTest.testRespondWithFile(Path)
>   Time elapsed: 1.981 s  <<< FAILURE!
> Oct 15 20:06:13 org.opentest4j.AssertionFailedError: 
> Oct 15 20:06:13 
> Oct 15 20:06:13 expected: 200
> Oct 15 20:06:13  but was: 404
> Oct 15 20:06:13   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> Oct 15 20:06:13   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Oct 15 20:06:13   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Oct 15 20:06:13   at 
> org.apache.flink.runtime.webmonitor.history.HistoryServerStaticFileServerHandlerTest.testRespondWithFile(HistoryServerStaticFileServerHandlerTest.java:70)
> Oct 15 20:06:13   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33283) core stage: WebFrontendBootstrapTest.testHandlersMustBeLoaded

2023-10-16 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775705#comment-17775705
 ] 

Chesnay Schepler commented on FLINK-33283:
--

Probably caused by the WebUI never being built.

> core stage: WebFrontendBootstrapTest.testHandlersMustBeLoaded
> -
>
> Key: FLINK-33283
> URL: https://issues.apache.org/jira/browse/FLINK-33283
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> [https://github.com/XComp/flink/actions/runs/6525957588/job/17719339666#step:10:12279]
> {code:java}
>  Error: 20:06:13 20:06:13.132 [ERROR] 
> org.apache.flink.runtime.webmonitor.utils.WebFrontendBootstrapTest.testHandlersMustBeLoaded
>   Time elapsed: 2.298 s  <<< FAILURE!
> 12279Oct 15 20:06:13 org.opentest4j.AssertionFailedError: 
> 12280Oct 15 20:06:13 
> 12281Oct 15 20:06:13 expected: 404
> 12282Oct 15 20:06:13  but was: 200
> 12283Oct 15 20:06:13  at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> 12284Oct 15 20:06:13  at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> 12285Oct 15 20:06:13  at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> 12286Oct 15 20:06:13  at 
> org.apache.flink.runtime.webmonitor.utils.WebFrontendBootstrapTest.testHandlersMustBeLoaded(WebFrontendBootstrapTest.java:89)
> 12287Oct 15 20:06:13  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [...]{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33277) Upgrading to actions/checkout@v4 requires GLIBC 2.25, 2.27, or 2.28 to be installed, apparently

2023-10-16 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775697#comment-17775697
 ] 

Chesnay Schepler commented on FLINK-33277:
--

Is there a strong benefit in using v4? Connector CI is using v3.

> Upgrading to actions/checkout@v4 requires GLIBC 2.25, 2.27, or 2.28 to be 
> installed, apparently
> ---
>
> Key: FLINK-33277
> URL: https://issues.apache.org/jira/browse/FLINK-33277
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Priority: Major
>
> https://github.com/XComp/flink/actions/runs/6525835575/job/17718926296#step:5:64



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360489801


##
docs/content/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 
+any changes made to sql-client.display.max-column-width will be automatically 
transferred to table.display.max-column-width. 
+Caution is advised when using the CLI, as it is not recommended to switch back 
and forth between these two options.
+
+ Introduce Flink Jdbc Driver For Sql Gateway
+# [FLINK-31496](https://issues.apache.org/jira/browse/FLINK-31496)
+Apache Flink now supports JDBC driver to access sql-gateway, you can use the 
driver in any cases that
+support standard JDBC extension to connect to Flink cluster.
+
+ Extend watermark-related features for SQL
+# [FLINK-31535](https://issues.apache.org/jira/browse/FLINK-31535)
+Flink now enables user config watermark emit strategy/watermark 
alignment/watermark idle-timeout
+in Flink sql job with dynamic table options and 'Options' hint.

Review Comment:
   ```suggestion
   in Flink sql job with dynamic table options and `OPTIONS` hint.
   ```
   Since most of the times in docs hints are mentioned in uppercase + added 
code mark
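   For reference, a hedged example of such a hint in use (the table name and option key are illustrative):
   ```sql
   -- Dynamic table option supplied through the OPTIONS hint.
   SELECT * FROM kafka_source /*+ OPTIONS('scan.watermark.idle-timeout'='1min') */;
   ```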



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360486457


##
docs/content/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 
+any changes made to sql-client.display.max-column-width will be automatically 
transferred to table.display.max-column-width. 
+Caution is advised when using the CLI, as it is not recommended to switch back 
and forth between these two options.

Review Comment:
   ```suggestion
   Introduction of the new ConfigOption `DISPLAY_MAX_COLUMN_WIDTH` 
(`table.display.max-column-width`) 
   in the `TableConfigOptions` class is now in place. 
   This option is utilized when displaying table results through the Table API 
and SQL Client. 
   As SQL Client relies on the Table API underneath, and both SQL Client and 
the Table API serve distinct 
   and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
   This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
   
   During the migration phase, while `sql-client.display.max-column-width` is 
deprecated, 
   any changes made to sql-client.display.max-column-width will be 
automatically transferred to `table.display.max-column-width`. 
   Caution is advised when using the CLI, as it is not recommended to switch 
back and forth between these two options.
   ```
   use SQL Client as in most of the docs + added code mark
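   For context, a hedged usage sketch of the unified option from SQL Client (the width value is illustrative):
   ```sql
   -- Prefer the new unified option over the deprecated sql-client.* one.
   SET 'table.display.max-column-width' = '40';
   ```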



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360488280


##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 

Review Comment:
   ```suggestion
   As sqlClient relies on the Table API underneath, and both SQL Client and the 
Table API serve distinct 
   ```



##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 

Review Comment:
   ```suggestion
   As SQL Client relies on the Table API underneath, and both SQL Client and 
the Table API serve distinct 
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360487410


##
docs/content/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 
+any changes made to sql-client.display.max-column-width will be automatically 
transferred to table.display.max-column-width. 
+Caution is advised when using the CLI, as it is not recommended to switch back 
and forth between these two options.
+
+ Introduce Flink Jdbc Driver For Sql Gateway
+# [FLINK-31496](https://issues.apache.org/jira/browse/FLINK-31496)
+Apache Flink now supports JDBC driver to access sql-gateway, you can use the 
driver in any cases that
+support standard JDBC extension to connect to Flink cluster.
+
+ Extend watermark-related features for SQL
+# [FLINK-31535](https://issues.apache.org/jira/browse/FLINK-31535)
+Flink now enables user config watermark emit strategy/watermark 
alignment/watermark idle-timeout
+in Flink sql job with dynamic table options and 'Options' hint.
+
+ Support configuring CatalogStore in Table API
+# [FLINK-32431](https://issues.apache.org/jira/browse/FLINK-32431)
+Support lazy initialization of catalog and persistence of catalog 
configuration.
+
+ Deprecate ManagedTable related APIs
+# [FLINK-32656](https://issues.apache.org/jira/browse/FLINK-32656)
+ManagedTable related APIs are deprecated and will be removed in a future major 
release.
+
+### Connectors & Libraries
+
+ SplitReader doesn't extend AutoCloseable but implements close
+# [FLINK-31015](https://issues.apache.org/jira/browse/FLINK-31015)
+SplitReader interface now extends AutoCloseable instead of providing its own 
method signature.
+
+ JSON format supports projection push down
+# [FLINK-32610](https://issues.apache.org/jira/browse/FLINK-32610)
+The JSON format introduced JsonParser as a new default way to deserialize JSON 
data. 
+JsonParser is a Jackson JSON streaming API to read JSON data which is much 
faster 
+and consumes less memory compared to the previous JsonNode approach. 
+This should be a compatible change, if you encounter any issues after 
upgrading, 
+you can fallback to the previous JsonNode approach by setting 
`json.decode.json-parser.enabled` to `false`. 
+
+
+
+### Runtime & Coordination
+
+ Unifying the Implementation of SlotManager
+# [FLINK-31439](https://issues.apache.org/jira/browse/FLINK-31439)
+Fine-grained resource management are now enabled by default. You can use it by 
specifying the resource requirement. 
+More details can be found at 
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/finegrained_resource/#usage.
+
+ Watermark aggregation performance is poor when watermark alignment is 
enabled and parallelism is high
+# [FLINK-32420](https://issues.apache.org/jira/browse/FLINK-32420)
+This performance improvement would be good to mention in the release blog 
post. 
+
+As proven by the micro benchmarks (screenshots attached in the ticket), with 
5000 subtasks, 
+the time to calculate the watermark alignment on the JobManager by a factor of 
76x (7664%). 
+Previously such large jobs were actually at large risk of overloading 
JobManager, now that's far less likely to happen.
+
+ Replace Akka by Pekko
+# [FLINK-32468](https://issues.apache.org/jira/browse/32468)
+Flink's RPC framework is now based on Apache Pekko instead of Akka. Any Akka 
dependencies were removed.
+
+ Introduce Runtime Filter for Flink Batch Jobs
+# 

Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360486457


##
docs/content/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 
+any changes made to sql-client.display.max-column-width will be automatically 
transferred to table.display.max-column-width. 
+Caution is advised when using the CLI, as it is not recommended to switch back 
and forth between these two options.

Review Comment:
   ```suggestion
   Introduction of the new ConfigOption `DISPLAY_MAX_COLUMN_WIDTH` 
(`table.display.max-column-width`) 
   in the `TableConfigOptions` class is now in place. 
   This option is utilized when displaying table results through the Table API 
and SQL Client. 
   As sqlClient relies on the Table API underneath, and both SQL Client and the 
Table API serve distinct 
   and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
   This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
   
   During the migration phase, while `sql-client.display.max-column-width` is 
deprecated, 
   any changes made to sql-client.display.max-column-width will be 
automatically transferred to `table.display.max-column-width`. 
   Caution is advised when using the CLI, as it is not recommended to switch 
back and forth between these two options.
   ```
   use SQL Client as in most of the docs + added code mark



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360484701


##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 
+any changes made to sql-client.display.max-column-width will be automatically 
transferred to table.display.max-column-width. 
+Caution is advised when using the CLI, as it is not recommended to switch back 
and forth between these two options.
+
+ Introduce Flink Jdbc Driver For Sql Gateway
+# [FLINK-31496](https://issues.apache.org/jira/browse/FLINK-31496)
+Apache Flink now supports JDBC driver to access sql-gateway, you can use the 
driver in any cases that
+support standard JDBC extension to connect to Flink cluster.
+
+ Extend watermark-related features for SQL
+# [FLINK-31535](https://issues.apache.org/jira/browse/FLINK-31535)
+Flink now enables user config watermark emit strategy/watermark 
alignment/watermark idle-timeout
+in Flink sql job with dynamic table options and 'Options' hint.
+
+ Support configuring CatalogStore in Table API
+# [FLINK-32431](https://issues.apache.org/jira/browse/FLINK-32431)
+Support lazy initialization of catalog and persistence of catalog 
configuration.
+
+ Deprecate ManagedTable related APIs
+# [FLINK-32656](https://issues.apache.org/jira/browse/FLINK-32656)
+ManagedTable related APIs are deprecated and will be removed in a future major 
release.
+
+### Connectors & Libraries
+
+ SplitReader doesn't extend AutoCloseable but implements close
+# [FLINK-31015](https://issues.apache.org/jira/browse/FLINK-31015)
+SplitReader interface now extends AutoCloseable instead of providing its own 
method signature.
+
+ JSON format supports projection push down
+# [FLINK-32610](https://issues.apache.org/jira/browse/FLINK-32610)
+The JSON format introduced JsonParser as a new default way to deserialize JSON 
data. 
+JsonParser is a Jackson JSON streaming API to read JSON data which is much 
faster 
+and consumes less memory compared to the previous JsonNode approach. 
+This should be a compatible change, if you encounter any issues after 
upgrading, 
+you can fallback to the previous JsonNode approach by setting 
`json.decode.json-parser.enabled` to `false`. 
+
+
+
+### Runtime & Coordination
+
+ Unifying the Implementation of SlotManager
+# [FLINK-31439](https://issues.apache.org/jira/browse/FLINK-31439)
+Fine-grained resource management are now enabled by default. You can use it by 
specifying the resource requirement. 
+More details can be found at 
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/finegrained_resource/#usage.
+
+ Watermark aggregation performance is poor when watermark alignment is 
enabled and parallelism is high
+# [FLINK-32420](https://issues.apache.org/jira/browse/FLINK-32420)
+This performance improvement would be good to mention in the release blog 
post. 
+
+As proven by the micro benchmarks (screenshots attached in the ticket), with 
5000 subtasks, 
+the time to calculate the watermark alignment on the JobManager by a factor of 
76x (7664%). 
+Previously such large jobs were actually at large risk of overloading 
JobManager, now that's far less likely to happen.
+
+ Replace Akka by Pekko
+# [FLINK-32468](https://issues.apache.org/jira/browse/32468)
+Flink's RPC framework is now based on Apache Pekko instead of Akka. Any Akka 
dependencies were removed.
+
+ Introduce Runtime Filter for Flink Batch Jobs
+# 

Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360483461


##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 
+any changes made to sql-client.display.max-column-width will be automatically 
transferred to table.display.max-column-width. 
+Caution is advised when using the CLI, as it is not recommended to switch back 
and forth between these two options.
+
+ Introduce Flink Jdbc Driver For Sql Gateway
+# [FLINK-31496](https://issues.apache.org/jira/browse/FLINK-31496)
+Apache Flink now supports JDBC driver to access sql-gateway, you can use the 
driver in any cases that
+support standard JDBC extension to connect to Flink cluster.
+
+ Extend watermark-related features for SQL
+# [FLINK-31535](https://issues.apache.org/jira/browse/FLINK-31535)
+Flink now enables user config watermark emit strategy/watermark 
alignment/watermark idle-timeout
+in Flink sql job with dynamic table options and 'Options' hint.

Review Comment:
   ```suggestion
   in Flink sql job with dynamic table options and `OPTIONS` hint.
   ```
   usually hints are referenced in doc in uppercase + added code marker



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360482517


##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 
+any changes made to sql-client.display.max-column-width will be automatically 
transferred to table.display.max-column-width. 

Review Comment:
   ```suggestion
   any changes made to `sql-client.display.max-column-width` will be 
automatically transferred to `table.display.max-column-width`. 
   ```
   just add code marker



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33116) CliClientTest.testCancelExecutionInteractiveMode fails with NPE on AZP

2023-10-16 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-33116:
-

Assignee: Jiabao Sun

> CliClientTest.testCancelExecutionInteractiveMode fails with NPE on AZP
> --
>
> Key: FLINK-33116
> URL: https://issues.apache.org/jira/browse/FLINK-33116
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Jiabao Sun
>Priority: Critical
>  Labels: pull-request-available, stale-critical, test-stability
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png, screenshot-5.png
>
>
> This build 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53309=logs=ce3801ad-3bd5-5f06-d165-34d37e757d90=5e4d9387-1dcc-5885-a901-90469b7e6d2f=12264]
> fails as
> {noformat}
> Sep 18 02:26:15 02:26:15.743 [ERROR] 
> org.apache.flink.table.client.cli.CliClientTest.testCancelExecutionInteractiveMode
>   Time elapsed: 0.1 s  <<< ERROR!
> Sep 18 02:26:15 java.lang.NullPointerException
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClient.closeTerminal(CliClient.java:284)
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClient.close(CliClient.java:108)
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClientTest.testCancelExecutionInteractiveMode(CliClientTest.java:314)
> Sep 18 02:26:15   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) {noformat}
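A hedged sketch of a null guard that would avoid this NPE (the `terminal` field name and logger are assumed, not taken from the actual fix):
{code:java}
// Sketch only: tolerate close() being invoked before the terminal was opened.
private void closeTerminal() {
    if (terminal == null) {
        return;
    }
    try {
        terminal.close();
    } catch (IOException e) {
        LOG.warn("Failed to close the terminal.", e);
    } finally {
        terminal = null;
    }
}
{code}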



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33116) CliClientTest.testCancelExecutionInteractiveMode fails with NPE on AZP

2023-10-16 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-33116:
-

Assignee: (was: Matthias Pohl)

> CliClientTest.testCancelExecutionInteractiveMode fails with NPE on AZP
> --
>
> Key: FLINK-33116
> URL: https://issues.apache.org/jira/browse/FLINK-33116
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: pull-request-available, stale-critical, test-stability
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png, screenshot-5.png
>
>
> This build 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53309=logs=ce3801ad-3bd5-5f06-d165-34d37e757d90=5e4d9387-1dcc-5885-a901-90469b7e6d2f=12264]
> fails as
> {noformat}
> Sep 18 02:26:15 02:26:15.743 [ERROR] 
> org.apache.flink.table.client.cli.CliClientTest.testCancelExecutionInteractiveMode
>   Time elapsed: 0.1 s  <<< ERROR!
> Sep 18 02:26:15 java.lang.NullPointerException
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClient.closeTerminal(CliClient.java:284)
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClient.close(CliClient.java:108)
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClientTest.testCancelExecutionInteractiveMode(CliClientTest.java:314)
> Sep 18 02:26:15   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33116) CliClientTest.testCancelExecutionInteractiveMode fails with NPE on AZP

2023-10-16 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-33116:
-

Assignee: Matthias Pohl

> CliClientTest.testCancelExecutionInteractiveMode fails with NPE on AZP
> --
>
> Key: FLINK-33116
> URL: https://issues.apache.org/jira/browse/FLINK-33116
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: pull-request-available, stale-critical, test-stability
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png, screenshot-5.png
>
>
> This build 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53309=logs=ce3801ad-3bd5-5f06-d165-34d37e757d90=5e4d9387-1dcc-5885-a901-90469b7e6d2f=12264]
> fails as
> {noformat}
> Sep 18 02:26:15 02:26:15.743 [ERROR] 
> org.apache.flink.table.client.cli.CliClientTest.testCancelExecutionInteractiveMode
>   Time elapsed: 0.1 s  <<< ERROR!
> Sep 18 02:26:15 java.lang.NullPointerException
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClient.closeTerminal(CliClient.java:284)
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClient.close(CliClient.java:108)
> Sep 18 02:26:15   at 
> org.apache.flink.table.client.cli.CliClientTest.testCancelExecutionInteractiveMode(CliClientTest.java:314)
> Sep 18 02:26:15   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360482132


##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 
+As sqlClient relies on the Table API underneath, and both sqlClient and the 
Table API serve distinct 
+and isolated scenarios, it is a rational choice to maintain a centralized 
configuration. 
+This approach also simplifies matters for users, as they only need to manage 
one configOption for display control.
+
+During the migration phase, while sql-client.display.max-column-width is 
deprecated, 

Review Comment:
   ```suggestion
   During the migration phase, while `sql-client.display.max-column-width` is 
deprecated, 
   ```
   just add code marker



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360481680


##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 

Review Comment:
   ```suggestion
   Introduction of the new ConfigOption `DISPLAY_MAX_COLUMN_WIDTH` 
(`table.display.max-column-width`) 
   ```
   just add code marker



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33274][release] Add release note for version 1.18 [flink]

2023-10-16 Thread via GitHub


snuyanzin commented on code in PR #23527:
URL: https://github.com/apache/flink/pull/23527#discussion_r1360481156


##
docs/content.zh/release-notes/flink-1.18.md:
##
@@ -0,0 +1,152 @@
+---
+title: "Release Notes - Flink 1.18"
+---
+
+
+# Release notes - Flink 1.18
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.17 and Flink 1.18. Please read these notes 
carefully if you are
+planning to upgrade your Flink version to 1.18.
+
+
+### Build System
+
+ Support Java 17 (LTS)
+
+# [FLINK-15736](https://issues.apache.org/jira/browse/FLINK-15736)
+Apache Flink was made ready to compile and run with Java 17 (LTS). This 
feature is still in beta mode. 
+Issues should be reported in Flink's bug tracker.
+
+
+### Table API & SQL
+
+ Unified the max display column width for SqlClient and Table APi in both 
Streaming and Batch execMode
+
+# [FLINK-30025](https://issues.apache.org/jira/browse/FLINK-30025)
+Introduction of the new ConfigOption DISPLAY_MAX_COLUMN_WIDTH 
(table.display.max-column-width) 
+in the TableConfigOptions class is now in place. 
+This option is utilized when displaying table results through the Table API 
and sqlClient. 

Review Comment:
   ```suggestion
   This option is utilized when displaying table results through the Table API 
and SQL Client. 
   ```
   I noticed in docs that most of the time it is either `SQL Client` or `sql 
client`, probably would make sense to follow same principle 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-16 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1360479261


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java:
##
@@ -124,23 +158,70 @@ private void handleDeploymentFailure(ExecutionVertex 
executionVertex, JobExcepti
 
 @Override
 public void onNewResourcesAvailable() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
 }
 
 @Override
 public void onNewResourceRequirements() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
+}
+
+/** Force rescaling as long as the target parallelism is different from 
the current one. */
+private void forceRescale() {
+if (context.shouldRescale(getExecutionGraph(), true)) {
+getLogger()
+.info(
+"Added resources are still there after {} 
time({}), force a rescale.",
+
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX.key(),
+scalingIntervalMax);
+context.goToRestarting(
+getExecutionGraph(),
+getExecutionGraphHandler(),
+getOperatorCoordinatorHandler(),
+Duration.ofMillis(0L),
+getFailures());
+}
 }
 
+/**
+ * Rescale the job if {@link Context#shouldRescale} is true. Otherwise, 
force a rescale using
+ * {@link Executing#forceRescale()} after {@link
+ * JobManagerOptions#SCHEDULER_SCALING_INTERVAL_MAX}.
+ */
 private void maybeRescale() {
-if (context.shouldRescale(getExecutionGraph())) {
-getLogger().info("Can change the parallelism of job. Restarting 
job.");
+rescaleScheduled = false;
+if (context.shouldRescale(getExecutionGraph(), false)) {
+getLogger().info("Can change the parallelism of the job. 
Restarting the job.");
 context.goToRestarting(
 getExecutionGraph(),
 getExecutionGraphHandler(),
 getOperatorCoordinatorHandler(),
 Duration.ofMillis(0L),
 getFailures());
+} else if (scalingIntervalMax != null) {
+getLogger()
+.info(
+"The longer the pipeline runs, the more the 
(small) resource gain is worth the restarting time. "
++ "Last resource added does not meet {}, 
force a rescale after {} time({}) if the resource is still there.",
+JobManagerOptions.MIN_PARALLELISM_INCREASE,
+
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX.key(),
+scalingIntervalMax);
+// schedule a force rescale in 
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX time
+context.runIfState(this, this::forceRescale, scalingIntervalMax);
+}

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-16 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1360474345


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java:
##
@@ -124,23 +158,70 @@ private void handleDeploymentFailure(ExecutionVertex 
executionVertex, JobExcepti
 
 @Override
 public void onNewResourcesAvailable() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
 }
 
 @Override
 public void onNewResourceRequirements() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
+}
+
+/** Force rescaling as long as the target parallelism is different from 
the current one. */
+private void forceRescale() {
+if (context.shouldRescale(getExecutionGraph(), true)) {
+getLogger()
+.info(
+"Added resources are still there after {} 
time({}), force a rescale.",
+
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX.key(),
+scalingIntervalMax);
+context.goToRestarting(
+getExecutionGraph(),
+getExecutionGraphHandler(),
+getOperatorCoordinatorHandler(),
+Duration.ofMillis(0L),
+getFailures());
+}
 }
 
+/**
+ * Rescale the job if {@link Context#shouldRescale} is true. Otherwise, 
force a rescale using
+ * {@link Executing#forceRescale()} after {@link
+ * JobManagerOptions#SCHEDULER_SCALING_INTERVAL_MAX}.
+ */
 private void maybeRescale() {
-if (context.shouldRescale(getExecutionGraph())) {
-getLogger().info("Can change the parallelism of job. Restarting 
job.");
+rescaleScheduled = false;
+if (context.shouldRescale(getExecutionGraph(), false)) {
+getLogger().info("Can change the parallelism of the job. 
Restarting the job.");
 context.goToRestarting(
 getExecutionGraph(),
 getExecutionGraphHandler(),
 getOperatorCoordinatorHandler(),
 Duration.ofMillis(0L),
 getFailures());
+} else if (scalingIntervalMax != null) {
+getLogger()
+.info(
+"The longer the pipeline runs, the more the 
(small) resource gain is worth the restarting time. "
++ "Last resource added does not meet {}, 
force a rescale after {} time({}) if the resource is still there.",
+JobManagerOptions.MIN_PARALLELISM_INCREASE,
+
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX.key(),
+scalingIntervalMax);
+// schedule a force rescale in 
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX time
+context.runIfState(this, this::forceRescale, scalingIntervalMax);
+}

Review Comment:
   As you mentioned, we discussed a lot the `scalingIntervalMax` semantics and 
these semantics have changed during these discussions. I have not updated the 
FLIP to reflect these changes.
   That being said, I agree that if `timeSinceLastRescale() > scalingIntervalMax` 
we can run an immediate force rescale. I'll update the FLIP as well.
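   A hedged sketch of that agreed behavior (assuming `timeSinceLastRescale()` returns a `Duration`):
   ```java
   // Sketch only: rescale right away if the max scaling interval has already
   // elapsed; otherwise keep scheduling the forced rescale as in the diff above.
   if (timeSinceLastRescale().compareTo(scalingIntervalMax) > 0) {
       forceRescale();
   } else {
       context.runIfState(this, this::forceRescale, scalingIntervalMax);
   }
   ```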
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


