[jira] [Updated] (FLINK-33862) Flink Unit Test Failures on 1.18.0

2023-12-15 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated FLINK-33862:
--
Affects Version/s: 1.19.0

> Flink Unit Test Failures on 1.18.0
> --
>
> Key: FLINK-33862
> URL: https://issues.apache.org/jira/browse/FLINK-33862
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> Flink Unit Test Failures on 1.18.0. More than 100 unit test cases are failing
> due to the common issues below.
> *Issue 1*
> {code:java}
> ./mvnw -DfailIfNoTests=false -Dmaven.test.failure.ignore=true -Dtest=ExecutionPlanAfterExecutionTest test
> [INFO] Running org.apache.flink.client.program.ExecutionPlanAfterExecutionTest
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>   at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
>   at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
>   at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
>   at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>   at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>   at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1267)
>   at org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$1(ClassLoadingUtils.java:93)
>   at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>   at org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
>   at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>   at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>   at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
>   at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
>   at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
>   at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
>   at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
>   at org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$DirectExecutionContext.execute(ScalaFutureUtils.java:65)
>   at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
>   at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
>   at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
>   at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
>   at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
>   at org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:34)
>   at org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:33)
>   at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
>   at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
>   at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
>   at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73)
>   at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110)
>   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)

Re: [PR] [FLINK-33863] Fix restoring compressed operator state [flink]

2023-12-15 Thread via GitHub


flinkbot commented on PR #23938:
URL: https://github.com/apache/flink/pull/23938#issuecomment-1858739524

   
   ## CI report:
   
   * 052dab59db8f519765c51228f87277bd85b83a45 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33863) Compressed Operator state restore failed

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33863:
---
Labels: pull-request-available  (was: )

> Compressed Operator state restore failed
> 
>
> Key: FLINK-33863
> URL: https://issues.apache.org/jira/browse/FLINK-33863
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Ruibin Xing
>Priority: Major
>  Labels: pull-request-available
>
> We encountered an issue when using Flink 1.18.0. Our job enables Snapshot 
> Compression and uses multiple Operator States in one operator. When recovering 
> Operator State from a Savepoint, the following error occurred: 
> "org.xerial.snappy.SnappyFramedInputStream: encountered EOF while reading 
> stream header."
> After researching, I believe the error is due to Flink 1.18.0's support for 
> Snapshot Compression on Operator State (see 
> https://issues.apache.org/jira/browse/FLINK-30113 ). When writing a 
> Savepoint, SnappyFramedOutputStream adds a header to the beginning of the 
> data; when recovering Operator State from a Savepoint, 
> SnappyFramedInputStream verifies the header at the beginning of the data.
> Currently, when recovering Operator State with Snapshot Compression enabled, 
> the logic is as follows. For each OperatorStateHandle:
> 1. Verify that the Snappy header is at the current Savepoint stream offset.
> 2. Seek to the state's start offset.
> 3. Read the state's data and finally seek to the state's end offset.
> (See: 
> https://github.com/apache/flink/blob/ef2b626d67147797e992ec3b338bafdb4e5ab1c7/flink-runtime/src/main/java/org/apache/flink/runtime/state/OperatorStateRestoreOperation.java#L172
>  )
> Furthermore, when there are multiple Operator States, they are not sorted 
> by offset. Therefore, if the Operator States are out of order and the one 
> with the final offset is recovered first, the Savepoint stream seeks to the 
> end, resulting in an EOF error.
> I propose a solution: sort the OperatorStateHandles by offset and then recover 
> the Operator States in order. After testing, this approach resolves the issue.
> I will submit a PR. This is my first time contributing code, so any help is 
> really appreciated.
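A minimal sketch of the failure mode, assuming only the snappy-java classes named in the quoted error (this is illustrative, not Flink's restore code): constructing a SnappyFramedInputStream eagerly reads and verifies the stream header, so doing this on a stream that has already been consumed, or seeked to its end, fails with exactly the quoted EOF error.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.xerial.snappy.SnappyFramedInputStream;

public class SnappyHeaderEofSketch {
    public static void main(String[] args) {
        // An empty stream stands in for a savepoint stream that was already
        // seeked past its data, as described in the issue above.
        byte[] exhausted = new byte[0];
        try {
            new SnappyFramedInputStream(new ByteArrayInputStream(exhausted));
        } catch (IOException e) {
            // Expected to match the quoted failure, e.g.
            // "encountered EOF while reading stream header"
            System.out.println(e.getMessage());
        }
    }
}
{code}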



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33863] Fix restoring compressed operator state [flink]

2023-12-15 Thread via GitHub


ruibinx opened a new pull request, #23938:
URL: https://github.com/apache/flink/pull/23938

   ## What is the purpose of the change
   
   This PR fixes an issue with restoring multiple operator states when 
snapshot compression is enabled. See 
https://issues.apache.org/jira/browse/FLINK-30113, which introduced snapshot 
compression for operator state.
   
   
   ## Brief change log
   
   - Sort the operator states by offset before restoring.
   
   
   ## Verifying this change
   Still working on tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):  no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: yes
 - The S3 file system connector: no
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33863) Compressed Operator state restore failed

2023-12-15 Thread Ruibin Xing (Jira)
Ruibin Xing created FLINK-33863:
---

 Summary: Compressed Operator state restore failed
 Key: FLINK-33863
 URL: https://issues.apache.org/jira/browse/FLINK-33863
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.18.0
Reporter: Ruibin Xing


We encountered an issue when using Flink 1.18.0. Our job enables Snapshot 
Compression and uses multiple Operator States in one operator. When recovering 
Operator State from a Savepoint, the following error occurred: 
"org.xerial.snappy.SnappyFramedInputStream: encountered EOF while reading 
stream header."

After researching, I believe the error is due to Flink 1.18.0's support for 
Snapshot Compression on Operator State (see 
https://issues.apache.org/jira/browse/FLINK-30113 ). When writing a Savepoint, 
SnappyFramedOutputStream adds a header to the beginning of the data; when 
recovering Operator State from a Savepoint, SnappyFramedInputStream verifies 
the header at the beginning of the data.

Currently, when recovering Operator State with Snapshot Compression enabled, 
the logic is as follows. For each OperatorStateHandle:
1. Verify that the Snappy header is at the current Savepoint stream offset.
2. Seek to the state's start offset.
3. Read the state's data and finally seek to the state's end offset.
(See: 
https://github.com/apache/flink/blob/ef2b626d67147797e992ec3b338bafdb4e5ab1c7/flink-runtime/src/main/java/org/apache/flink/runtime/state/OperatorStateRestoreOperation.java#L172
 )

Furthermore, when there are multiple Operator States, they are not sorted by 
offset. Therefore, if the Operator States are out of order and the one with 
the final offset is recovered first, the Savepoint stream seeks to the end, 
resulting in an EOF error.

I propose a solution: sort the OperatorStateHandles by offset and then recover 
the Operator States in order. After testing, this approach resolves the issue.

I will submit a PR. This is my first time contributing code, so any help is 
really appreciated.
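A hedged sketch of the proposed fix, with simplified types (the real change belongs in OperatorStateRestoreOperation and would use OperatorStateHandle's offset metadata): restore the states in ascending offset order so the single compressed Savepoint stream is only ever read forward.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

class SortedRestoreSketch {
    /**
     * Orders (stateName -> offsets) entries by their first offset so the
     * Savepoint stream is consumed front to back. Without this, restoring
     * the entry with the largest offset first seeks the stream to its end,
     * and the next Snappy header check hits EOF.
     */
    static List<Map.Entry<String, long[]>> sortByOffset(Map<String, long[]> nameToOffsets) {
        List<Map.Entry<String, long[]>> entries = new ArrayList<>(nameToOffsets.entrySet());
        entries.sort(Comparator.comparingLong((Map.Entry<String, long[]> e) -> e.getValue()[0]));
        return entries;
    }
}
{code}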



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33862) Flink Unit Test Failures on 1.18.0

2023-12-15 Thread Prabhu Joseph (Jira)
Prabhu Joseph created FLINK-33862:
-

 Summary: Flink Unit Test Failures on 1.18.0
 Key: FLINK-33862
 URL: https://issues.apache.org/jira/browse/FLINK-33862
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: Prabhu Joseph


Flink Unit Test Failures on 1.18.0. More than 100 unit test cases are failing 
due to the common issues below.

*Issue 1*
{code:java}
./mvnw -DfailIfNoTests=false -Dmaven.test.failure.ignore=true -Dtest=ExecutionPlanAfterExecutionTest test

[INFO] Running org.apache.flink.client.program.ExecutionPlanAfterExecutionTest
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
  at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
  at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
  at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
  at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
  at org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
  at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
  at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
  at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
  at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1267)
  at org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$1(ClassLoadingUtils.java:93)
  at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
  at org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
  at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
  at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
  at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
  at org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
  at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
  at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
  at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
  at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
  at org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$DirectExecutionContext.execute(ScalaFutureUtils.java:65)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
  at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
  at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
  at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
  at org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:34)
  at org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:33)
  at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
  at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
  at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
  at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73)
  at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
  at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110)
  at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59)
  at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$Pe

[jira] [Updated] (FLINK-33565) The concurrentExceptions doesn't work

2023-12-15 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-33565:

Fix Version/s: 1.19.0

> The concurrentExceptions doesn't work
> -
>
> Key: FLINK-33565
> URL: https://issues.apache.org/jira/browse/FLINK-33565
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0, 1.17.1
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
> Fix For: 1.19.0
>
>
> First of all, thanks to [~mapohl] for helping double-check in advance that 
> this was indeed a bug.
> Displaying the exception history in the WebUI has been supported since 
> FLINK-6042.
> h1. What's the concurrentExceptions?
> When an execution fails due to an exception, the other executions in the same 
> region also restart, and the first exception becomes the rootException. If 
> other restarted executions report exceptions at around the same time, we want 
> to collect these exceptions and display them to the user as 
> concurrentExceptions.
> h2. What's this bug?
> The concurrentExceptions list is always empty in production, even when other 
> executions report exceptions at very close times.
> h1. Why doesn't it work?
> If a job has an all-to-all shuffle, it has only one region, and this region 
> has a lot of executions. If one execution throws an exception:
>  * The JobMaster marks the state of this execution as FAILED.
>  * The rest of the executions of this region are marked CANCELING.
>  ** This call stack can be found at FLIP-364 
> [part-4.2.3|https://cwiki.apache.org/confluence/display/FLINK/FLIP-364%3A+Improve+the+restart-strategy#FLIP364:Improvetherestartstrategy-4.2.3Detailedcodeforfull-failover]
>  
> When these executions throw exceptions as well, the JobMaster transitions 
> their state from CANCELING to CANCELED instead of FAILED. CANCELED executions 
> never run the FAILED logic, so their exceptions are ignored (the illustrative 
> sketch after this description shows the effect).
> Note: all reports are handled on the JobMaster RPC thread, which is 
> single-threaded, so they are processed serially. Only one execution is marked 
> FAILED; the rest are marked CANCELED later.
> h1. How to fix it?
> After discussing offline with [~mapohl], we first need to discuss with the 
> community whether to keep concurrentExceptions at all.
>  * If no, we can remove the related logic directly.
>  * If yes, we can discuss how to fix it afterwards.
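The following minimal Java sketch illustrates the state transition described above; the names are hypothetical and this is not Flink's actual Execution/JobMaster code, only the shape of the bug:

{code:java}
import java.util.ArrayList;
import java.util.List;

class ConcurrentExceptionsSketch {
    enum State { RUNNING, FAILED, CANCELING, CANCELED }

    private State state = State.RUNNING;
    private final List<Throwable> exceptionHistory = new ArrayList<>();

    /** Failure reports arrive serially on the single JobMaster RPC thread. */
    void reportFailure(Throwable t) {
        if (state == State.CANCELING) {
            // Regional failover already marked this execution CANCELING, so a
            // late failure report only moves it to CANCELED; the exception is
            // dropped and never reaches the concurrentExceptions list.
            state = State.CANCELED;
            return;
        }
        state = State.FAILED;
        exceptionHistory.add(t); // only the first (root) failure is recorded
    }
}
{code}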



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33859] Support OpenSearch v2 [flink-connector-opensearch]

2023-12-15 Thread via GitHub


snuyanzin commented on PR #38:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/38#issuecomment-1858607663

   I'm curious whether a multi-release jar supports the case where the same 
Flink cluster needs to use both the OpenSearch v1 and OpenSearch v2 
connectors, with both built with JDK 11, for instance?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33611] [flink-protobuf] Support Large Protobuf Schemas [flink]

2023-12-15 Thread via GitHub


sharath1709 commented on PR #23937:
URL: https://github.com/apache/flink/pull/23937#issuecomment-1858595168

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


snuyanzin commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1428571302


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) extends StreamingTestBase
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);
 state match {

Review Comment:
   Suddenly.. :see_no_evil: so far I thought it was 
`baseCheckpointPath.toFile.deleteOnExit();`. Thanks, and sorry about that!
   Now changed.
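For reference, a Java rendering of the pattern in the diff above, under the assumption that the goal is a unique, not-yet-existing checkpoint path per test class so concurrent test runs don't collide during cleanup (see FLINK-33820):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class CheckpointPathSetup {
    /**
     * Creates a unique temp directory, then deletes it while still empty so
     * that only the unique path remains; the state backend can later create
     * the directory itself without racing other concurrently running tests.
     */
    static Path uniqueCheckpointPath(Class<?> testClass) throws IOException {
        Path base = Files.createTempDirectory(testClass.getCanonicalName());
        Files.deleteIfExists(base);
        return base;
    }
}
```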



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Make connector being tested against latest OpenSearch 1.x and 2.x [flink-connector-opensearch]

2023-12-15 Thread via GitHub


snuyanzin closed pull request #36: Make connector being tested against latest 
OpenSearch 1.x and 2.x
URL: https://github.com/apache/flink-connector-opensearch/pull/36


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33825) Create a new version in JIRA

2023-12-15 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797333#comment-17797333
 ] 

Jing Ge commented on FLINK-33825:
-

thanks!

> Create a new version in JIRA
> 
>
> Key: FLINK-33825
> URL: https://issues.apache.org/jira/browse/FLINK-33825
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.18.1
>Reporter: Jing Ge
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.18.1
>
>
> When contributors resolve an issue in JIRA, they are tagging it with a 
> release that will contain their changes. With the release currently underway, 
> new issues should be resolved against a subsequent future release. Therefore, 
> you should create a release item for this subsequent release, as follows:
>  # In JIRA, navigate to the [Flink > Administration > 
> Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
>  # Add a new release: choose the next minor version number compared to the 
> one currently underway, select today’s date as the Start Date, and choose Add.
> (Note: Only PMC members have access to the project administration. If you do 
> not have access, ask on the mailing list for assistance.)
>  
> 
> h3. Expectations
>  * The new version should be listed in the dropdown menu of {{fixVersion}} or 
> {{affectedVersion}} under "unreleased versions" when creating a new Jira 
> issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)



[jira] [Commented] (FLINK-33779) Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)

2023-12-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797332#comment-17797332
 ] 

Sergey Nuyanzin commented on FLINK-33779:
-

Merged to master as 
[ef2b626d67147797e992ec3b338bafdb4e5ab1c7|https://github.com/apache/flink/commit/ef2b626d67147797e992ec3b338bafdb4e5ab1c7]

> Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)
> -
>
> Key: FLINK-33779
> URL: https://issues.apache.org/jira/browse/FLINK-33779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33779) Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33779.
-
Resolution: Fixed

> Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)
> -
>
> Key: FLINK-33779
> URL: https://issues.apache.org/jira/browse/FLINK-33779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33779) Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33779.
---

> Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)
> -
>
> Key: FLINK-33779
> URL: https://issues.apache.org/jira/browse/FLINK-33779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33779) Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-33779:
---

Assignee: Jacky Lau

> Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)
> -
>
> Key: FLINK-33779
> URL: https://issues.apache.org/jira/browse/FLINK-33779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33779][table] Cleanup usage of deprecated BaseExpressions#cast [flink]

2023-12-15 Thread via GitHub


snuyanzin merged PR #23895:
URL: https://github.com/apache/flink/pull/23895


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-33787) Java 17 support for jdbc connector

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33787.
-
Fix Version/s: jdbc-3.2.0
   Resolution: Fixed

> Java 17 support for jdbc connector
> --
>
> Key: FLINK-33787
> URL: https://issues.apache.org/jira/browse/FLINK-33787
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: jdbc-3.1.1
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33787) Java 17 support for jdbc connector

2023-12-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797330#comment-17797330
 ] 

Sergey Nuyanzin commented on FLINK-33787:
-

Merged as 
[f8de82b4c52a688c5bd36c4c4bd3012ff4081eb8|https://github.com/apache/flink-connector-jdbc/commit/f8de82b4c52a688c5bd36c4c4bd3012ff4081eb8]

> Java 17 support for jdbc connector
> --
>
> Key: FLINK-33787
> URL: https://issues.apache.org/jira/browse/FLINK-33787
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: jdbc-3.1.1
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Bump org.apache.commons:commons-compress from 1.23.0 to 1.24.0 [flink-connector-jdbc]

2023-12-15 Thread via GitHub


dependabot[bot] opened a new pull request, #84:
URL: https://github.com/apache/flink-connector-jdbc/pull/84

   Bumps org.apache.commons:commons-compress from 1.23.0 to 1.24.0.
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.23.0&new-version=1.24.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-jdbc/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33787][jdbc] Java 17 support for jdbc connector [flink-connector-jdbc]

2023-12-15 Thread via GitHub


snuyanzin merged PR #82:
URL: https://github.com/apache/flink-connector-jdbc/pull/82


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33611] [flink-protobuf] Support Large Protobuf Schemas [flink]

2023-12-15 Thread via GitHub


flinkbot commented on PR #23937:
URL: https://github.com/apache/flink/pull/23937#issuecomment-1858559404

   
   ## CI report:
   
   * e474d8114a2328c5d529d8e1084d685906ee502e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33611) Support Large Protobuf Schemas

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33611:
---
Labels: pull-request-available  (was: )

> Support Large Protobuf Schemas
> --
>
> Key: FLINK-33611
> URL: https://issues.apache.org/jira/browse/FLINK-33611
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Sai Sharath Dandi
>Assignee: Sai Sharath Dandi
>Priority: Major
>  Labels: pull-request-available
>
> h3. Background
> Flink serializes and deserializes protobuf-format data by calling the decode 
> or encode method in GeneratedProtoToRow_XXX.java, generated by codegen, to 
> parse byte[] data into Protobuf Java objects. FLINK-32650 introduced the 
> ability to split the generated code to improve performance for large 
> Protobuf schemas. However, this is still not sufficient for some larger 
> protobuf schemas, as the generated code exceeds the Java constant pool size 
> [limit|https://en.wikipedia.org/wiki/Java_class_file#The_constant_pool] 
> and we see errors like "Too many constants" when trying to compile the 
> generated code.
> *Solution*
> Since the code-splitting functionality is already in place, the main proposal 
> is to reuse variable names across different split method scopes. This will 
> greatly reduce the constant pool size. A further optimization is to split the 
> last code segment only when its size exceeds the split threshold. Currently, 
> the last segment of the generated code is always split, which can lead to too 
> many split methods and thus exceed the constant pool size limit.
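To make the naming proposal concrete, here is a hypothetical illustration (not Flink's actual codegen output; names and shapes are assumptions): with globally unique variable names, every split method contributes new identifiers to the class's constant pool, whereas reusing the same local names per method scope keeps the pool small.

{code:java}
public class SplitMethodNamingSketch {
    // Before: globally unique names; each one can add constant-pool entries.
    static void decodeSplit0Unique(byte[] in, Object[] out) {
        int field0_0 = in[0];
        out[0] = field0_0;
    }

    static void decodeSplit1Unique(byte[] in, Object[] out) {
        int field1_0 = in[1]; // a new identifier for every split method
        out[1] = field1_0;
    }

    // After: the same local name is reused in each method scope.
    static void decodeSplit0Reused(byte[] in, Object[] out) {
        int tmp0 = in[0];
        out[0] = tmp0;
    }

    static void decodeSplit1Reused(byte[] in, Object[] out) {
        int tmp0 = in[1]; // same name, different scope: no extra entry
        out[1] = tmp0;
    }
}
{code}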



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33611] [flink-protobuf] Support Large Protobuf Schemas [flink]

2023-12-15 Thread via GitHub


sharath1709 opened a new pull request, #23937:
URL: https://github.com/apache/flink/pull/23937

   
   
   ## What is the purpose of the change
   
   This change is made to support large Protobuf schemas
   
   
   ## Brief change log
   
   
   - [FLINK-33611] [flink-protobuf] Reuse variable names across different 
split method scopes in serializer
   - [FLINK-33611] [flink-protobuf] Split last segment only when size 
exceeds split threshold limit in serializer
   - [FLINK-33611] [flink-protobuf] Split last segment only when size 
exceeds split threshold limit in deserializer
   - [FLINK-33611] [flink-protobuf] Reuse variable names across different 
split method scopes in deserializer
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): No
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: No
 - The serializers: Yes
 - The runtime per-record code paths (performance sensitive): No
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: No
 - The S3 file system connector: No
   
   ## Documentation
   
 - Does this pull request introduce a new feature? Yes
 - If yes, how is the feature documented? Not Applicable as the feature is 
fully transparent to users
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33704][BP 1.18][Filesytems] Update GCS filesystems to latest available versions [flink]

2023-12-15 Thread via GitHub


snuyanzin commented on PR #23935:
URL: https://github.com/apache/flink/pull/23935#issuecomment-1858552817

   >Unchanged backport of https://github.com/apache/flink/pull/2383
   
   Is this really the correct link?
   For me it shows something from 2016...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33611) Support Large Protobuf Schemas

2023-12-15 Thread Sai Sharath Dandi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Sharath Dandi updated FLINK-33611:
--
Summary: Support Large Protobuf Schemas  (was: Add the ability to reuse 
variable names across different split method scopes)

> Support Large Protobuf Schemas
> --
>
> Key: FLINK-33611
> URL: https://issues.apache.org/jira/browse/FLINK-33611
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Sai Sharath Dandi
>Assignee: Sai Sharath Dandi
>Priority: Major
>
> h3. Background
> Flink serializes and deserializes protobuf-format data by calling the decode 
> or encode method in GeneratedProtoToRow_XXX.java, generated by codegen, to 
> parse byte[] data into Protobuf Java objects. FLINK-32650 introduced the 
> ability to split the generated code to improve performance for large 
> Protobuf schemas. However, this is still not sufficient for some larger 
> protobuf schemas, as the generated code exceeds the Java constant pool size 
> [limit|https://en.wikipedia.org/wiki/Java_class_file#The_constant_pool] 
> and we see errors like "Too many constants" when trying to compile the 
> generated code.
> *Solution*
> Since the code-splitting functionality is already in place, the main proposal 
> is to reuse variable names across different split method scopes. This will 
> greatly reduce the constant pool size. A further optimization is to split the 
> last code segment only when its size exceeds the split threshold. Currently, 
> the last segment of the generated code is always split, which can lead to too 
> many split methods and thus exceed the constant pool size limit.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32949][core]collect tm port binding with TaskManagerOptions [flink]

2023-12-15 Thread via GitHub


JingGe commented on PR #23870:
URL: https://github.com/apache/flink/pull/23870#issuecomment-1858528411

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-33858:
---

Assignee: (was: Jing Ge)

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


flinkbot commented on PR #23936:
URL: https://github.com/apache/flink/pull/23936#issuecomment-1858509241

   
   ## CI report:
   
   * 48235d00b884cb7fbd7704556b8f8014c4350e0a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498415


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window table function. */
-class WindowTableFunctionJsonPlanTest extends TableTestBase {
-
-private StreamTableTestUtil util;
-private TableEnvironment tEnv;
-
-@BeforeEach
-void setup() {
-util = streamTestUtil(TableConfig.getDefault());
-tEnv = util.getTableEnv();
-
-String srcTable1Ddl =
-"CREATE TABLE MyTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable1Ddl);
-
-String srcTable2Ddl =
-"CREATE TABLE MyTable2 (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable2Ddl);
-}
-
-@Test
-void testIndividualWindowTVF() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testIndividualWindowTVFProcessingTime() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(proctime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testFollowedByWindowJoin() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3) NOT NULL,\n"
-+ " window_end TIMESTAMP(3) NOT

Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498126


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window table function. */
-class WindowTableFunctionJsonPlanTest extends TableTestBase {
-
-private StreamTableTestUtil util;
-private TableEnvironment tEnv;
-
-@BeforeEach
-void setup() {
-util = streamTestUtil(TableConfig.getDefault());
-tEnv = util.getTableEnv();
-
-String srcTable1Ddl =
-"CREATE TABLE MyTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable1Ddl);
-
-String srcTable2Ddl =
-"CREATE TABLE MyTable2 (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable2Ddl);
-}
-
-@Test
-void testIndividualWindowTVF() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testIndividualWindowTVFProcessingTime() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(proctime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testFollowedByWindowJoin() {

Review Comment:
   This test is covered as part of the WindowJoin restore tests - 
https://github.com/apache/flink/pull/23918



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498672


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window table function. */
-class WindowTableFunctionJsonPlanTest extends TableTestBase {
-
-private StreamTableTestUtil util;
-private TableEnvironment tEnv;
-
-@BeforeEach
-void setup() {
-util = streamTestUtil(TableConfig.getDefault());
-tEnv = util.getTableEnv();
-
-String srcTable1Ddl =
-"CREATE TABLE MyTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable1Ddl);
-
-String srcTable2Ddl =
-"CREATE TABLE MyTable2 (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable2Ddl);
-}
-
-@Test
-void testIndividualWindowTVF() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testIndividualWindowTVFProcessingTime() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(proctime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testFollowedByWindowJoin() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3) NOT NULL,\n"
-+ " window_end TIMESTAMP(3) NOT

Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498672


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
(quoted diff of the deleted WindowTableFunctionJsonPlanTest.java omitted; identical to the quote reproduced with the review comment on discussion r1428498126 below. The archived message is truncated before this comment's text.)

Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498415


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
(quoted diff of the deleted WindowTableFunctionJsonPlanTest.java omitted; identical to the quote reproduced with the review comment on discussion r1428498126 below. The archived message is truncated before this comment's text.)

[jira] [Updated] (FLINK-33860) Implement restore tests for WindowTableFunction node

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33860:
---
Labels: pull-request-available  (was: )

> Implement restore tests for WindowTableFunction node
> 
>
> Key: FLINK-33860
> URL: https://issues.apache.org/jira/browse/FLINK-33860
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 opened a new pull request, #23936:
URL: https://github.com/apache/flink/pull/23936

   
   
   ## What is the purpose of the change
   
   *Add restore tests for WindowTableFunction node*
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - Added restore tests for WindowTableFunction node which verifies the 
generated compiled plan with the saved compiled plan
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
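
   For context, a minimal sketch of the kind of source definition these test programs use. The builder calls mirror the `WindowDeduplicateTestPrograms` excerpt quoted later in this digest; the table name, schema, and rows are illustrative and not taken from this PR:
   
   ```java
   import org.apache.flink.table.test.program.SourceTestStep;
   import org.apache.flink.types.Row;
   
   class WindowTableFunctionRestoreSketch {
       // Rows emitted before the savepoint is taken; the restore test then
       // replays the saved compiled plan and feeds the after-restore rows,
       // verifying the freshly generated plan against the saved one.
       static final SourceTestStep SOURCE =
               SourceTestStep.newBuilder("window_t") // hypothetical table name
                       .addSchema(
                               "ts STRING",
                               "a INT",
                               "`rowtime` AS TO_TIMESTAMP(`ts`)",
                               "WATERMARK for `rowtime` AS `rowtime` - INTERVAL '1' SECOND")
                       .producedBeforeRestore(Row.of("2020-04-15 08:00:05", 1))
                       .producedAfterRestore(Row.of("2020-04-15 08:00:21", 2))
                       .build();
   }
   ```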
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498126


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window table function. */
-class WindowTableFunctionJsonPlanTest extends TableTestBase {
-
-private StreamTableTestUtil util;
-private TableEnvironment tEnv;
-
-@BeforeEach
-void setup() {
-util = streamTestUtil(TableConfig.getDefault());
-tEnv = util.getTableEnv();
-
-String srcTable1Ddl =
-"CREATE TABLE MyTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable1Ddl);
-
-String srcTable2Ddl =
-"CREATE TABLE MyTable2 (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable2Ddl);
-}
-
-@Test
-void testIndividualWindowTVF() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testIndividualWindowTVFProcessingTime() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(proctime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testFollowedByWindowJoin() {

Review Comment:
   This test is covered as part of the WindowJoin restore tests



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub a

[jira] [Created] (FLINK-33861) Implement restore tests for WindowRank node

2023-12-15 Thread Bonnie Varghese (Jira)
Bonnie Varghese created FLINK-33861:
---

 Summary: Implement restore tests for WindowRank node
 Key: FLINK-33861
 URL: https://issues.apache.org/jira/browse/FLINK-33861
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Bonnie Varghese
Assignee: Bonnie Varghese






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33860) Implement restore tests for WindowTableFunction node

2023-12-15 Thread Bonnie Varghese (Jira)
Bonnie Varghese created FLINK-33860:
---

 Summary: Implement restore tests for WindowTableFunction node
 Key: FLINK-33860
 URL: https://issues.apache.org/jira/browse/FLINK-33860
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Bonnie Varghese
Assignee: Bonnie Varghese






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33859) Support OpenSearch v2

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33859:
---
Labels: pull-request-available  (was: )

> Support OpenSearch v2
> -
>
> Key: FLINK-33859
> URL: https://issues.apache.org/jira/browse/FLINK-33859
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.2.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> The main issue is that in OpenSearch v2 there were several breaking changes 
> like 
> [https://github.com/opensearch-project/OpenSearch/pull/9082]
> [https://github.com/opensearch-project/OpenSearch/pull/5902]
> which made the current connector version fail while communicating with v2
>  
> Also it would make sense to add integration and e2e tests to test against v2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33859] Support OpenSearch v2 [flink-connector-opensearch]

2023-12-15 Thread via GitHub


snuyanzin opened a new pull request, #38:
URL: https://github.com/apache/flink-connector-opensearch/pull/38

   The PR adds support for OpenSearch v2.
   
   Since OpenSearch v2 introduced breaking changes, there is no way to have one jar working for both v1 and v2. For that reason the connector is now split in the same way as the Elasticsearch connectors: one jar for v1, another for v2.
   
   The connector name for v1 is the same as before, `opensearch`; for v2 it is `opensearch-2`.
   
   Since v2 is Java 11 based, there is a `java11` Maven profile so that the OpenSearch v2 connector is only built on Java 11+. There are some attempts on the OpenSearch side to improve this situation; if they succeed, building OpenSearch v2 with Java 8 could easily be added by removing that profile.
   
   The PR also bumps the Flink dependency to 1.18.0. The reason is incompatible ArchUnit changes, which make the code pass the ArchUnit tests either only for 1.17, or only for 1.18 and 1.19.
   
   It also adds support for Java 17.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33859) Support OpenSearch v2

2023-12-15 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33859:
---

 Summary: Support OpenSearch v2
 Key: FLINK-33859
 URL: https://issues.apache.org/jira/browse/FLINK-33859
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Opensearch
Affects Versions: opensearch-1.2.0
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


The main issue is that in OpenSearch v2 there were several breaking changes 
like 
[https://github.com/opensearch-project/OpenSearch/pull/9082]
[https://github.com/opensearch-project/OpenSearch/pull/5902]

which made current connector version failing while communicating with v2

 

Also it would make sense to add integration and e2e tests to test against v2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32212) Job restarting indefinitely after an IllegalStateException from BlobLibraryCacheManager

2023-12-15 Thread Ricky Saltzer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797288#comment-17797288
 ] 

Ricky Saltzer commented on FLINK-32212:
---

Hitting this same error after moving our jobs from being manually deployed in 
K8s to using ArgoCD. However, our failure is a bit different, as it's not 
failing to deploy, but endlessly restarting at a random time after running 
successfully (e.g. 24 hours later).

> Job restarting indefinitely after an IllegalStateException from 
> BlobLibraryCacheManager
> ---
>
> Key: FLINK-32212
> URL: https://issues.apache.org/jira/browse/FLINK-32212
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.16.1
> Environment: Apache Flink Kubernetes Operator 1.4
>Reporter: Matheus Felisberto
>Priority: Major
>
> After running for a few hours the job starts to throw IllegalStateException 
> and I can't figure out why. To restore the job, I need to manually delete the 
> FlinkDeployment to be recreated and redeploy everything.
> The jar is built into the Docker image, hence it is defined according to 
> the Operator's documentation:
> {code:java}
> // jarURI: local:///opt/flink/usrlib/my-job.jar {code}
> I've tried to move it into /opt/flink/lib/my-job.jar but it didn't work 
> either. 
>  
> {code:java}
> // Source: my-topic (1/2)#30587 
> (b82d2c7f9696449a2d9f4dc298c0a008_bc764cd8ddf7a0cff126f51c16239658_0_30587) 
> switched from DEPLOYING to FAILED with failure cause: 
> java.lang.IllegalStateException: The library registration references a 
> different set of library BLOBs than previous registrations for this job:
> old:[p-5d91888083d38a3ff0b6c350f05a3013632137c6-7237ecbb12b0b021934b0c81aef78396]
> new:[p-5d91888083d38a3ff0b6c350f05a3013632137c6-943737c6790a3ec6870cecd652b956c2]
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$ResolvedClassLoader.verifyClassLoader(BlobLibraryCacheManager.java:419)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$ResolvedClassLoader.access$500(BlobLibraryCacheManager.java:359)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$LibraryCacheEntry.getOrResolveClassLoader(BlobLibraryCacheManager.java:235)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$LibraryCacheEntry.access$1100(BlobLibraryCacheManager.java:202)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$DefaultClassLoaderLease.getOrResolveClassLoader(BlobLibraryCacheManager.java:336)
>     at 
> org.apache.flink.runtime.taskmanager.Task.createUserCodeClassloader(Task.java:1024)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:612)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>     at java.base/java.lang.Thread.run(Unknown Source) {code}
> If there is any other information that can help to identify the problem, 
> please let me know.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33787][jdbc] Java 17 support for jdbc connector [flink-connector-jdbc]

2023-12-15 Thread via GitHub


snuyanzin commented on PR #82:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/82#issuecomment-1858347970

   looks like gha fails to download flink, will retry a bit later


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33787][jdbc] Java 17 support for jdbc connector [flink-connector-jdbc]

2023-12-15 Thread via GitHub


snuyanzin commented on code in PR #82:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/82#discussion_r1428365706


##
.github/workflows/weekly.yml:
##
@@ -45,6 +45,14 @@ jobs:
   flink: 1.18.0,
   branch: v3.1
 }]
+jdk: [ 8, 11 ]
+include:
+  - flink: 1.18-SNAPSHOT
+branch: main
+jdk: 17

Review Comment:
   Seems I didn't get it at first, sorry.
   
   Yes, that's possible; the only difference is that it should be a string, since 
sequences in nested fields are not allowed.
   Moreover, strings render the CI job view in a nicer way.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33818] Implement restore tests for WindowDeduplicate node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23923:
URL: https://github.com/apache/flink/pull/23923#discussion_r1428333474


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowDeduplicateTestPrograms.java:
##
@@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+
+/** {@link TableTestProgram} definitions for testing {@link 
StreamExecWindowDeduplicate}. */
+public class WindowDeduplicateTestPrograms {
+
+static final Row[] BEFORE_DATA = {
+Row.of("2020-04-15 08:00:05", new BigDecimal(4.00), "A", "supplier1"),
+Row.of("2020-04-15 08:00:06", new BigDecimal(4.00), "A", "supplier2"),
+Row.of("2020-04-15 08:00:07", new BigDecimal(2.00), "G", "supplier1"),
+Row.of("2020-04-15 08:00:08", new BigDecimal(2.00), "A", "supplier3"),
+Row.of("2020-04-15 08:00:09", new BigDecimal(5.00), "D", "supplier4"),
+Row.of("2020-04-15 08:00:11", new BigDecimal(2.00), "B", "supplier3"),
+Row.of("2020-04-15 08:00:13", new BigDecimal(1.00), "E", "supplier1"),
+Row.of("2020-04-15 08:00:15", new BigDecimal(3.00), "B", "supplier2"),
+Row.of("2020-04-15 08:00:17", new BigDecimal(6.00), "D", "supplier5")
+};
+
+static final Row[] AFTER_DATA = {
+Row.of("2020-04-15 08:00:21", new BigDecimal(2.00), "B", "supplier7"),
+Row.of("2020-04-15 08:00:23", new BigDecimal(1.00), "A", "supplier4"),
+Row.of("2020-04-15 08:00:25", new BigDecimal(3.00), "C", "supplier3"),
+Row.of("2020-04-15 08:00:28", new BigDecimal(6.00), "A", "supplier8")
+};
+
+static final SourceTestStep SOURCE =
+SourceTestStep.newBuilder("bid_t")
+.addSchema(
+"ts STRING",
+"price DECIMAL(10,2)",
+"item STRING",
+"supplier_id STRING",
+"`bid_time` AS TO_TIMESTAMP(`ts`)",
+"`proc_time` AS PROCTIME()",
+"WATERMARK for `bid_time` AS `bid_time` - INTERVAL 
'1' SECOND")
+.producedBeforeRestore(BEFORE_DATA)
+.producedAfterRestore(AFTER_DATA)
+.build();
+
+static final String[] SINK_SCHEMA = {
+"bid_time TIMESTAMP(3)",
+"price DECIMAL(10,2)",
+"item STRING",
+"supplier_id STRING",
+"window_start TIMESTAMP(3)",
+"window_end TIMESTAMP(3)",
+"row_num BIGINT"
+};
+
+static final String TUMBLE_TVF =
+"TABLE(TUMBLE(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '10' 
SECOND))";
+
+static final String HOP_TVF =
+"TABLE(HOP(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '5' SECOND, 
INTERVAL '10' SECOND))";
+
+static final String CUMULATIVE_TVF =
+"TABLE(CUMULATE(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '5' 
SECOND, INTERVAL '10' SECOND))";
+
+static final String ONE_ROW = "row_num <= 1";
+
+static final String N_ROWS = "row_num < 3";

Review Comment:
   Yes, my bad. Will move this to the WindowRank PR.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-30593][autoscaler] Improve restart time tracking [flink-kubernetes-operator]

2023-12-15 Thread via GitHub


afedulov opened a new pull request, #735:
URL: https://github.com/apache/flink-kubernetes-operator/pull/735

   This PR contains the following improvements to the restart tracking logic:
   * Adds more debug logs
   * Stores restart Duration directly instead of the endTime Instant
   * Fixes a bug that makes restart duration tracking dependent on whether 
metrics are considered fully collected


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33787][jdbc] Java 17 support for jdbc connector [flink-connector-jdbc]

2023-12-15 Thread via GitHub


eskabetxe commented on code in PR #82:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/82#discussion_r1428247514


##
.github/workflows/weekly.yml:
##
@@ -45,6 +45,14 @@ jobs:
   flink: 1.18.0,
   branch: v3.1
 }]
+jdk: [ 8, 11 ]
+include:
+  - flink: 1.18-SNAPSHOT
+branch: main
+jdk: 17

Review Comment:
ok..
   I was thinking something like this:
   
   `matrix:
   flink_branches: [{
 flink: 1.17.1,
 branch: v3.1
   }, {
 flink: 1.18.0,
 branch: v3.1,
 jdk: [ 8, 11, 17 ]
   }]
   jdk: [ 8, 11 ]`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


Jiabao-Sun commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1428219337


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) 
extends StreamingTestB
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup 
when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = 
Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);
 state match {

Review Comment:
   Sorry if I'm wrong, please correct me.
   I still don't quite understand why we need to delete the folder we just 
created immediately.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


Jiabao-Sun commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1428209269


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) 
extends StreamingTestB
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup 
when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = 
Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);

Review Comment:
   It's ok to use `Files.createTempDirectory(getClass.getCanonicalName)`.
   `TempDirUtils.newFolder(tempFolder)` may still cause problems; that was my fault 
in PR 33641.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


snuyanzin commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1428204897


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) 
extends StreamingTestB
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup 
when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = 
Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);
 state match {

Review Comment:
   Since `baseCheckpointPath` is used only in this method, I do not see any 
reason to have it as a field, so I just moved it completely into the method.
   
   Since there is `deleteIfExists`, I think there is no need for 
`FileUtils.deleteDirectory(baseCheckpointPath)` anymore, so it could be removed.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


snuyanzin commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1428199378


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) 
extends StreamingTestB
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup 
when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = 
Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);

Review Comment:
   >baseCheckpointPath = Files.createTempDirectory("junit")
   
   I didn't get why we can't use `baseCheckpointPath = 
Files.createTempDirectory(getClass.getCanonicalName)`?
   Having the class name in the prefix could simplify finding the root cause in 
case of issues, IMHO.
   
   The idea is to use `Files.createTempDirectory` and `deleteIfExists` to be 
sure that the deletion happens at the end and does not intersect with 
checkpoint removal.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


Jiabao-Sun commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1428188735


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) 
extends StreamingTestB
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup 
when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = 
Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);

Review Comment:
   ```suggestion
   baseCheckpointPath = Files.createTempDirectory("junit")
   ```
   
   
   I noticed this in the previously failed stack trace:
   ```
   Suppressed: java.nio.file.NoSuchFileException: 
/tmp/junit8233404746490819295/junit2880192188533757139/71dc52714210ccdbd137bbcffa7955b6/chk-3
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at 
sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at 
java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.next(FileTreeWalker.java:372)
at java.nio.file.Files.walkFileTree(Files.java:2706)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.deleteAllFilesAndDirectories(TempDirectory.java:329)
at 
org.junit.jupiter.engine.extension.TempDirectory$CloseablePath.close(TempDirectory.java:310)
... 96 more
Suppressed: java.nio.file.NoSuchFileException: 
/tmp/junit8233404746490819295/junit2880192188533757139/71dc52714210ccdbd137bbcffa7955b6/chk-3
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at 
org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.resetPermissionsAndTryToDeleteAgain(TempDirectory.java:382)
at 
org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.visitFileFailed(TempDirectory.java:342)
at 
org.junit.jupiter.engine.extension.TempDirectory$CloseablePath$1.visitFileFailed(TempDirectory.java:329)
at java.nio.file.Files.walkFileTree(Files.java:2672)
... 99 more
   ```
   
   The stack trace gives some clues. The failure of JUnit's `TempDir` cleanup 
may be caused by the attempt to delete it concurrently with the 
`CheckpointCoordinator` cleaning up completed checkpoints.
   
   
https://github.com/apache/flink/blob/45f966e8c3c5e903b3843391874f7d2478122d8c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1387-L1390
   
   We can use `baseCheckpointPath = Files.createTempDirectory("junit")` to 
avoid relying on JUnit to clean up the temp folder. This will be safe for 
upgrading JUnit to version 5.10.1.
   
   By the way, the root cause of the earlier deletion failure is likely the 
automatic cleanup of completed checkpoints by the 
`CheckpointCoordinator`. I believe we can solve it soon. 🤔️
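
   For reference, a minimal sketch of the pattern being discussed, in plain Java 
for illustration (the test base under discussion is Scala, and the class and 
method names here are illustrative):
   
   ```java
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;
   
   class CheckpointDirSketch {
       Path createBaseCheckpointPath() throws IOException {
           // Create the directory outside JUnit's @TempDir management, so that
           // JUnit's recursive cleanup cannot race with the CheckpointCoordinator
           // deleting completed checkpoint subfolders.
           Path baseCheckpointPath =
                   Files.createTempDirectory(getClass().getCanonicalName());
           // Immediately delete the (still empty) directory again, as the PR
           // does; see the open question earlier in this thread about why.
           Files.deleteIfExists(baseCheckpointPath);
           return baseCheckpointPath;
       }
   }
   ```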



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [ FLINK-33603][FileSystems] shade guava in gs-fs filesystem [flink]

2023-12-15 Thread via GitHub


singhravidutt commented on PR #23489:
URL: https://github.com/apache/flink/pull/23489#issuecomment-1858146657

   The licensing issue is fixed in the latest commit, but I still feel we should 
exclude the Guava-specific jars coming from `flink-fs-hadoop-shaded`. @JingGe 
@MartijnVisser 
   
   https://github.com/apache/flink/pull/23489#discussion_r1428174024


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [ FLINK-33603][FileSystems] shade guava in gs-fs filesystem [flink]

2023-12-15 Thread via GitHub


singhravidutt commented on code in PR #23489:
URL: https://github.com/apache/flink/pull/23489#discussion_r1428174024


##
flink-filesystems/flink-gs-fs-hadoop/pom.xml:
##
@@ -188,6 +212,29 @@ under the License.
shade

(shade-plugin configuration XML referencing org.apache.flink:flink-fs-hadoop-shaded; the element markup was stripped by the mail archive)


Review Comment:
   ```
   [WARNING] flink-fs-hadoop-shaded-1.19-SNAPSHOT.jar, guava-32.1.2-jre.jar 
define 1837 overlapping classes: 
   [WARNING]   - com.google.common.annotations.Beta
   [WARNING]   - com.google.common.annotations.GwtCompatible
   [WARNING]   - com.google.common.annotations.GwtIncompatible
   [WARNING]   - com.google.common.annotations.VisibleForTesting
   [WARNING]   - com.google.common.base.Absent
   [WARNING]   - com.google.common.base.AbstractIterator
   [WARNING]   - com.google.common.base.AbstractIterator$1
   [WARNING]   - com.google.common.base.AbstractIterator$State
   [WARNING]   - com.google.common.base.Ascii
   [WARNING]   - com.google.common.base.CaseFormat
   [WARNING]   - 1827 more...
   ```
   I see this while building the package. My interpretation of it is that 
`flink-fs-hadoop-shaded` is a shaded jar AND it does not relocate the Guava 
classes, so the shaded jar contains Guava's classes. Hence just excluding Guava 
as a transitive dependency from the `flink-fs-hadoop-shaded` module is not enough.
   
   `flink-gs-fs-hadoop` will contain two implementations of the Guava classes, i.e. 
`com.google.common.*`: one coming from `flink-fs-hadoop-shaded`, which will be 
from Guava version `v27.1`, and the other from Guava `v32.1.2`. As 
`buildOrThrow` is not available in `v27.1`, this causes a runtime failure.
   
   Hence we have to either repackage every dependency of 
`flink-fs-hadoop-shaded` and then add it as a dependency, or exclude the jars 
manually.
   
   What are your thoughts on that?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797237#comment-17797237
 ] 

Jing Ge edited comment on FLINK-33858 at 12/15/23 4:19 PM:
---

I don't have the access rights and need to reach out to the right one, WIP...


was (Author: jingge):
I don't have the access rightd and need to reach out to the right one, WIP...

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Jing Ge
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797237#comment-17797237
 ] 

Jing Ge edited comment on FLINK-33858 at 12/15/23 4:12 PM:
---

I don't have the access rightd and need to reach out the right one, WIP...


was (Author: jingge):
I don't have the access right and need to reach out the right one, WIP...

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Jing Ge
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797237#comment-17797237
 ] 

Jing Ge edited comment on FLINK-33858 at 12/15/23 4:12 PM:
---

I don't have the access rightd and need to reach out to the right one, WIP...


was (Author: jingge):
I don't have the access rightd and need to reach out the right one, WIP...

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Jing Ge
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797237#comment-17797237
 ] 

Jing Ge commented on FLINK-33858:
-

I don't have the access right and need to reach out the right one, WIP...

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-33858:
---

Assignee: Jing Ge

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Assignee: Jing Ge
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33704][BP 1.18][Filesytems] Update GCS filesystems to latest available versions [flink]

2023-12-15 Thread via GitHub


flinkbot commented on PR #23935:
URL: https://github.com/apache/flink/pull/23935#issuecomment-1858069389

   
   ## CI report:
   
   * a6d11e1c44779d0f2a0f5f1901323999283410a7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] DO_NOT_MERGE_YET[FLINK-33268][rest] Skip unknown fields in REST response deserialization [flink]

2023-12-15 Thread via GitHub


gaborgsomogyi commented on code in PR #23930:
URL: https://github.com/apache/flink/pull/23930#discussion_r1428109606


##
flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java:
##
@@ -632,12 +633,16 @@ private static  
CompletableFuture parseResponse(
 CompletableFuture responseFuture = new CompletableFuture<>();
 final JsonParser jsonParser = 
objectMapper.treeAsTokens(rawResponse.json);
 try {
+if (responseType.getRawClass().equals(EmptyResponseBody.class)
+&& !rawResponse.getJson().isEmpty()) {
+throw new IOException("Expecting empty response but presumably 
error arrived");
+}

Review Comment:
   This has been added since the approval, to resolve the unit test issues 
[here](https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55522&view=results).
 I think we need to revisit the approach.
   
   In short, without this change error parsing was triggered by the failed response 
parsing. With this change we're not blowing up when `EmptyResponseBody` is 
expected, because the object mapper can read the empty content successfully. 
Because of this I've added the edge case here.
   
   It's important to say that when the response contains fields and any of them are 
missing, the parsing throws an exception, so we're fine.
   
   The approach is negotiable because there are multiple options. I've tried 
many, but this one makes the most sense to me.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-33704][BP 1.18][Filesytems] Update GCS filesystems to latest available versions [flink]

2023-12-15 Thread via GitHub


MartijnVisser opened a new pull request, #23935:
URL: https://github.com/apache/flink/pull/23935

   Unchanged backport of https://github.com/apache/flink/pull/2383
   
   Verified locally with `mvn clean package -DskipTests -Pfast 
-Pskip-webui-build -U -T4`, changing `flink-conf.yaml` to include:
   
   ```
   state.backend.type: rocksdb
   state.checkpoints.dir: gs://mvisser-flink-gcs-test/flink-snapshots
   state.backend.incremental: true
   ```
   
   And then run a job on this cluster


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-33541) RAND_INTEGER can't be existed in a IF statement

2023-12-15 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li resolved FLINK-33541.

Fix Version/s: 1.19.0
   1.18.1
   1.17.3
 Assignee: xuyang
   Resolution: Fixed

Fixed via:
master(1.19.0): 45f966e8c3c5e903b3843391874f7d2478122d8c
1.18.1: d60818150005661006a71e4155fc605d7543362b
1.17.3: 58f8162613d9f615e60fb0c9e23692d25469d6f0

Thanks [~xuyangzhong] for the fix and thanks [~dianer17] for reporting the 
issue.

> RAND_INTEGER  can't be existed in a IF statement
> 
>
> Key: FLINK-33541
> URL: https://issues.apache.org/jira/browse/FLINK-33541
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.17.0, 1.18.0
>Reporter: Guojun Li
>Assignee: xuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1, 1.17.3
>
> Attachments: image-2023-11-24-13-31-21-209.png
>
>
> The minimal reproduction steps:
> Flink SQL> select if(1=1, rand_integer(100), 0);
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.Exception: Unsupported operand types: IF(boolean, INT, INT NOT NULL)
>  
> But we do not see the exception reported in 1.14; not sure in which version 
> this bug was introduced.
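
For reference, a sketch of running the minimal repro via the Java Table API (the class name and setup are illustrative):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class RandIntegerIfSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Before the fix this failed to plan with:
        //   Unsupported operand types: IF(boolean, INT, INT NOT NULL)
        // The fix derives RAND_INTEGER's nullability from its arguments
        // (FLINK-33541), so both IF branches get matching types.
        tEnv.executeSql("SELECT IF(1 = 1, RAND_INTEGER(100), 0)").print();
    }
}
{code}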



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33541][table-planner] function RAND and RAND_INTEGER should return type nullable if the arguments are nullable [flink]

2023-12-15 Thread via GitHub


libenchao closed pull request #23779: [FLINK-33541][table-planner] function 
RAND and RAND_INTEGER should return type nullable if the arguments are nullable
URL: https://github.com/apache/flink/pull/23779


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33853) [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module

2023-12-15 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797224#comment-17797224
 ] 

Rui Fan commented on FLINK-33853:
-

Merged to master 1.19 via 1136ed50311a18c0b5773ae982330cc2936eba3d

> [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes 
> of runtime module
> --
>
> Key: FLINK-33853
> URL: https://issues.apache.org/jira/browse/FLINK-33853
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: RocMarshal
>Assignee: RocMarshal
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33853) [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module

2023-12-15 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-33853.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes 
> of runtime module
> --
>
> Key: FLINK-33853
> URL: https://issues.apache.org/jira/browse/FLINK-33853
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: RocMarshal
>Assignee: RocMarshal
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33853][runtime][JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module [flink]

2023-12-15 Thread via GitHub


1996fanrui merged PR #23932:
URL: https://github.com/apache/flink/pull/23932


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33853) [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module

2023-12-15 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-33853:
---

Assignee: RocMarshal

> [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes 
> of runtime module
> --
>
> Key: FLINK-33853
> URL: https://issues.apache.org/jira/browse/FLINK-33853
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: RocMarshal
>Assignee: RocMarshal
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797222#comment-17797222
 ] 

Jing Ge commented on FLINK-33858:
-

Thanks for the heads-up, checking...

 

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33857) Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic

2023-12-15 Thread Peter Schulz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schulz updated FLINK-33857:
-
Description: 
There's [a bug|https://github.com/elastic/elasticsearch/issues/103406] in 
elasticsearch that may lead to underestimated bulk request sizes. We are hit by 
this bug, and therefore I would like to propose a simple improvement to work 
around it: 

{{ElasticsearchWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush, inside the emitter.

I created a [pull 
request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
Feedback is highly welcome. :)

  was:
There's [a bug|https://github.com/elastic/elasticsearch/issues/103406] in 
elasticsearch that may lead to underestimated bulk request sizes. We are hit by 
this bug, and therefore I would like to propose a simple improvement to work 
around it: 

{{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush, inside the emitter.

I created a [pull 
request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
Feedback is highly welcome. :)


> Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic
> -
>
> Key: FLINK-33857
> URL: https://issues.apache.org/jira/browse/FLINK-33857
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: elasticsearch-3.0.1
>Reporter: Peter Schulz
>Priority: Major
>  Labels: pull-request-available
>
> There's [a bug|https://github.com/elastic/elasticsearch/issues/103406] in 
> elasticsearch that may lead to underestimated bulk request sizes. We are hit 
> by this bug, and therefore I would like to propose a simple improvement to 
> work around it: 
> {{ElasticsearchWriter}} uses {{RequestIndexer}} as a facade for ES' 
> {{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. 
> Under the hood, the {{BulkProcessor}} takes care of flushing bulk requests to 
> the server when any of the configurable limits is hit (there's another bug: 
> flushing will happen _after_ the limit has been exceeded).
> (+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
> {{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
> when to flush, inside the emitter.
> I created a [pull 
> request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
> Feedback is highly welcome. :)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
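
A minimal sketch of how an emitter could use the proposed 
{{RequestIndexer.flush()}}, assuming the API from the linked pull request lands 
as described. The emitter class, index name, and byte-counting heuristic below 
are illustrative assumptions, not part of the proposal:

{code:java}
import org.apache.flink.api.connector.sink2.SinkWriter;
import org.apache.flink.connector.elasticsearch.sink.ElasticsearchEmitter;
import org.apache.flink.connector.elasticsearch.sink.RequestIndexer;

import org.elasticsearch.action.index.IndexRequest;

class SizeAwareEmitter implements ElasticsearchEmitter<String> {

    // Hypothetical client-side threshold, kept below the server-side limit.
    private static final long MAX_ESTIMATED_BYTES = 4L * 1024 * 1024;

    private long estimatedBytes;

    @Override
    public void emit(String element, SinkWriter.Context context, RequestIndexer indexer) {
        indexer.add(new IndexRequest("my-index").source("payload", element));
        estimatedBytes += element.length(); // crude estimate; exact sizing is the hard part

        if (estimatedBytes >= MAX_ESTIMATED_BYTES) {
            // Proposed API: force the underlying BulkProcessor to flush now,
            // instead of relying on its own (underestimating) size trigger.
            indexer.flush();
            estimatedBytes = 0;
        }
    }
}
{code}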


Re: [PR] test request [flink]

2023-12-15 Thread via GitHub


singhravidutt commented on PR #23933:
URL: https://github.com/apache/flink/pull/23933#issuecomment-1858012528

   @flinkbot run azure
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33858:

Description: 
AlibabaCI003-agent01
AlibabaCI003-agent03
AlibabaCI003-agent05
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797208#comment-17797208
 ] 

Sergey Nuyanzin commented on FLINK-33858:
-

[~jingge] could you please have a look, since you are one of those who has 
sufficient permissions to look at it?

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33764] Track Heap usage and GC pressure to avoid unnecessary scaling [flink-kubernetes-operator]

2023-12-15 Thread via GitHub


mxm commented on PR #726:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/726#issuecomment-1857974906

   Thanks for the ping. Looks good!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33858:
---

 Summary: CI fails with No space left on device
 Key: FLINK-33858
 URL: https://issues.apache.org/jira/browse/FLINK-33858
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: Sergey Nuyanzin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33688][client] Reuse connections in RestClient to save connection establish time [flink]

2023-12-15 Thread via GitHub


flinkbot commented on PR #23934:
URL: https://github.com/apache/flink/pull/23934#issuecomment-1857948911

   
   ## CI report:
   
   * 877cfe2e3147c893829fd944ed5eba157c2f029a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33688) Reuse Channels in RestClient to save connection establish time

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33688:
---
Labels: pull-request-available  (was: )

> Reuse Channels in RestClient to save connection establish time
> --
>
> Key: FLINK-33688
> URL: https://issues.apache.org/jira/browse/FLINK-33688
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: xiangyu feng
>Priority: Major
>  Labels: pull-request-available
>
> RestClient can reuse connections to the Dispatcher when submitting HTTP 
> requests to a long-running Flink cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33688][client] Reuse connections in RestClient to save connection establish time [flink]

2023-12-15 Thread via GitHub


xiangyuf opened a new pull request, #23934:
URL: https://github.com/apache/flink/pull/23934

   ## What is the purpose of the change
   
   Reuse connections in RestClient to save connection establish time.
   
   ## Brief change log
   
 - Add connection reuse options
 - Reuse channels in RestClient
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - Unit Test
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
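
Independent of the actual RestClient change in this PR, the underlying pattern 
is to cache the (possibly still pending) connection future per endpoint and 
evict it on failure. A hedged sketch with entirely hypothetical names:

{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class ConnectionCache<C> {

    interface Connector<T> {
        CompletableFuture<T> connect(String host, int port);
    }

    private final Map<String, CompletableFuture<C>> pool = new ConcurrentHashMap<>();
    private final Connector<C> connector;

    ConnectionCache(Connector<C> connector) {
        this.connector = connector;
    }

    /** Returns a shared per-endpoint connection, establishing it at most once. */
    CompletableFuture<C> get(String host, int port) {
        String key = host + ":" + port;
        CompletableFuture<C> future =
                pool.computeIfAbsent(key, k -> connector.connect(host, port));
        // Evict failed attempts so the next request triggers a fresh connect.
        future.whenComplete(
                (connection, error) -> {
                    if (error != null) {
                        pool.remove(key, future);
                    }
                });
        return future;
    }
}
{code}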



[jira] [Resolved] (FLINK-33704) Update GCS filesystems to latest available versions

2023-12-15 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser resolved FLINK-33704.

Resolution: Fixed

> Update GCS filesystems to latest available versions
> ---
>
> Key: FLINK-33704
> URL: https://issues.apache.org/jira/browse/FLINK-33704
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, FileSystems
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Update GS SDK from 2.15.0 to 2.29.1 and GS Hadoop Connector from 2.2.15 to 
> 2.2.18



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [ FLINK-33603][FileSystems] shade guava in gs-fs filesystem [flink]

2023-12-15 Thread via GitHub


MartijnVisser commented on PR #23489:
URL: https://github.com/apache/flink/pull/23489#issuecomment-1857914683

   There are two possibilities to solve this:
   
   1. Backport https://issues.apache.org/jira/browse/FLINK-33704 to 
`release-1.18` - This resolves the issue for 1.18
   2. Merge the commit in my updated PR 
https://github.com/apache/flink/pull/23920/commits/7c609007959cfdc2d3e6bd9634c1576b437a611a
 that excludes Guava from `flink-fs-hadoop-shaded` and includes it, together 
with the needed `com.google.guava:failureaccess` for `flink-gs-fs-hadoop`. 
   
   Both of these approaches make checkpointing work again; I've validated that 
locally. In `master` (and therefore in the upcoming Flink 1.19.0 release) this 
problem does not appear.
   
   @JingGe @singhravidutt  @snuyanzin Any preference? I'm leaning towards 1, 
since that would be the same solution for both 1.18 and 1.19, with option 2 
being a specific fix for 1.18 only. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33857) Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic

2023-12-15 Thread Peter Schulz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schulz updated FLINK-33857:
-
Description: 
There's a bug in elasticsearch that may lead to underestimated bulk request 
sizes. We are hit by this bug, and therefore I would like to propose a simple 
improvement to work around it: 

{{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush, inside the emitter.

I created a [pull 
request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
Feedback is highly welcome. :)

  was:
There's a bug in elasticsearch that may lead to underestimated bulk request 
sizes. We are hit by this bug, and therefore I would like to propose a simple 
improvement to work around it: 

{{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush inside the emitter.

I created a [pull 
request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
Feedback is highly welcome. :)


> Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic
> -
>
> Key: FLINK-33857
> URL: https://issues.apache.org/jira/browse/FLINK-33857
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: elasticsearch-3.0.1
>Reporter: Peter Schulz
>Priority: Major
>  Labels: pull-request-available
>
> There's a bug in elasticsearch that may lead to underestimated bulk request 
> sizes. We are hit by this bug, and therefore I would like to propose a simple 
> improvement to work around it: 
> {{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
> {{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. 
> Under the hood, the {{BulkProcessor}} takes care of flushing bulk requests to 
> the server when any of the configurable limits is hit (there's another bug: 
> flushing will happen _after_ the limit has been exceeded).
> (+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
> {{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
> when to flush, inside the emitter.
> I created a [pull 
> request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
> Feedback is highly welcome. :)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33857) Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic

2023-12-15 Thread Peter Schulz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schulz updated FLINK-33857:
-
Description: 
There's [a bug|https://github.com/elastic/elasticsearch/issues/103406] in 
elasticsearch that may lead to underestimated bulk request sizes. We are hit by 
this bug, and therefore I would like to propose a simple improvement to work 
around it: 

{{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush, inside the emitter.

I created a [pull 
request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
Feedback is highly welcome. :)

  was:
There's a bug in elasticsearch that may lead to underestimated bulk request 
sizes. We are hit by this bug, and therefore I would like to propose a simple 
improvement to work around it: 

{{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush, inside the emitter.

I created a [pull 
request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
Feedback is highly welcome. :)


> Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic
> -
>
> Key: FLINK-33857
> URL: https://issues.apache.org/jira/browse/FLINK-33857
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: elasticsearch-3.0.1
>Reporter: Peter Schulz
>Priority: Major
>  Labels: pull-request-available
>
> There's [a bug|https://github.com/elastic/elasticsearch/issues/103406] in 
> elasticsearch that may lead to underestimated bulk request sizes. We are hit 
> by this bug, and therefore I would like to propose a simple improvement to 
> work around it: 
> {{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
> {{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. 
> Under the hood, the {{BulkProcessor}} takes care of flushing bulk requests to 
> the server when any of the configurable limits is hit (there's another bug: 
> flushing will happen _after_ the limit has been exceeded).
> (+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
> {{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
> when to flush, inside the emitter.
> I created a [pull 
> request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
> Feedback is highly welcome. :)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33857) Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic

2023-12-15 Thread Peter Schulz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schulz updated FLINK-33857:
-
Description: 
There's a bug in elasticsearch that may lead to underestimated bulk request 
sizes. We are hit by this bug, and therefore I would like to propose a simple 
improvement to work around it: 

{{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush inside the emitter.

I created a [pull 
request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
Feedback is highly welcome. :)

  was:
There's a bug in elasticsearch that may lead to underestimated bulk request 
sizes. We are hit by this bug, and therefore I would like to propose a simple 
improvement to work around it: 

{{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush inside the emitter.


> Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic
> -
>
> Key: FLINK-33857
> URL: https://issues.apache.org/jira/browse/FLINK-33857
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: elasticsearch-3.0.1
>Reporter: Peter Schulz
>Priority: Major
>  Labels: pull-request-available
>
> There's a bug in elasticsearch that may lead to underestimated bulk request 
> sizes. We are hit by this bug, and therefore I would like to propose a simple 
> improvement to work around it: 
> {{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
> {{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. 
> Under the hood, the {{BulkProcessor}} takes care of flushing bulk requests to 
> the server when any of the configurable limits is hit (there's another bug: 
> flushing will happen _after_ the limit has been exceeded).
> (+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
> {{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
> when to flush inside the emitter.
> I created a [pull 
> request|https://github.com/apache/flink-connector-elasticsearch/pull/85]. 
> Feedback is highly welcome. :)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33857) Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33857:
---
Labels: pull-request-available  (was: )

> Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic
> -
>
> Key: FLINK-33857
> URL: https://issues.apache.org/jira/browse/FLINK-33857
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: elasticsearch-3.0.1
>Reporter: Peter Schulz
>Priority: Major
>  Labels: pull-request-available
>
> There's a bug in elasticsearch that may lead to underestimated bulk request 
> sizes. We are hit by this bug, and therefore I would like to propose a simple 
> improvement to work around it: 
> {{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
> {{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. 
> Under the hood, the {{BulkProcessor}} takes care of flushing bulk requests to 
> the server when any of the configurable limits is hit (there's another bug: 
> flushing will happen _after_ the limit has been exceeded).
> (+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
> {{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
> when to flush inside the emitter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33857) Expose BulkProcessor.flush via RequestIndexer to allow custom flush logic

2023-12-15 Thread Peter Schulz (Jira)
Peter Schulz created FLINK-33857:


 Summary: Expose BulkProcessor.flush via RequestIndexer to allow 
custom flush logic
 Key: FLINK-33857
 URL: https://issues.apache.org/jira/browse/FLINK-33857
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Affects Versions: elasticsearch-3.0.1
Reporter: Peter Schulz


There's a bug in elasticsearch that may lead to underestimated bulk request 
sizes. We are hit by this bug, and therefore I would like to propose a simple 
improvement to work around it: 

{{ElasticsearchSinkWriter}} uses {{RequestIndexer}} as a facade for ES' 
{{BulkProcessor}}. As of now, only the {{add(…)}} methods are delegated. Under 
the hood, the {{BulkProcessor}} takes care of flushing bulk requests to the 
server when any of the configurable limits is hit (there's another bug: 
flushing will happen _after_ the limit has been exceeded).

(+) *Proposal:* Expose {{BulkProcessor.flush()}} via 
{{RequestIndexer.flush()}}. This way we can easily implement logic to decide 
when to flush inside the emitter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33764] Track Heap usage and GC pressure to avoid unnecessary scaling [flink-kubernetes-operator]

2023-12-15 Thread via GitHub


1996fanrui commented on PR #726:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/726#issuecomment-1857868649

   Thanks @gyfora for the ping. 
   
   I don't have any comments, and I have approved.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33764] Track Heap usage and GC pressure to avoid unnecessary scaling [flink-kubernetes-operator]

2023-12-15 Thread via GitHub


gyfora commented on PR #726:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/726#issuecomment-1857864814

   Do you have any further comments / concerns @1996fanrui @mxm ?
   
   The thresholds have been adjusted so that, by default, we never block, and 
we plan to iterate and refine the behaviour based on these metrics in upcoming 
improvements to make it a bit more graceful :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33818] Implement restore tests for WindowDeduplicate node [flink]

2023-12-15 Thread via GitHub


dawidwys commented on code in PR #23923:
URL: https://github.com/apache/flink/pull/23923#discussion_r1427958337


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowDeduplicateTestPrograms.java:
##
@@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+
+/** {@link TableTestProgram} definitions for testing {@link 
StreamExecWindowDeduplicate}. */
+public class WindowDeduplicateTestPrograms {
+
+static final Row[] BEFORE_DATA = {
+Row.of("2020-04-15 08:00:05", new BigDecimal(4.00), "A", "supplier1"),
+Row.of("2020-04-15 08:00:06", new BigDecimal(4.00), "A", "supplier2"),
+Row.of("2020-04-15 08:00:07", new BigDecimal(2.00), "G", "supplier1"),
+Row.of("2020-04-15 08:00:08", new BigDecimal(2.00), "A", "supplier3"),
+Row.of("2020-04-15 08:00:09", new BigDecimal(5.00), "D", "supplier4"),
+Row.of("2020-04-15 08:00:11", new BigDecimal(2.00), "B", "supplier3"),
+Row.of("2020-04-15 08:00:13", new BigDecimal(1.00), "E", "supplier1"),
+Row.of("2020-04-15 08:00:15", new BigDecimal(3.00), "B", "supplier2"),
+Row.of("2020-04-15 08:00:17", new BigDecimal(6.00), "D", "supplier5")
+};
+
+static final Row[] AFTER_DATA = {
+Row.of("2020-04-15 08:00:21", new BigDecimal(2.00), "B", "supplier7"),
+Row.of("2020-04-15 08:00:23", new BigDecimal(1.00), "A", "supplier4"),
+Row.of("2020-04-15 08:00:25", new BigDecimal(3.00), "C", "supplier3"),
+Row.of("2020-04-15 08:00:28", new BigDecimal(6.00), "A", "supplier8")
+};
+
+static final SourceTestStep SOURCE =
+SourceTestStep.newBuilder("bid_t")
+.addSchema(
+"ts STRING",
+"price DECIMAL(10,2)",
+"item STRING",
+"supplier_id STRING",
+"`bid_time` AS TO_TIMESTAMP(`ts`)",
+"`proc_time` AS PROCTIME()",
+"WATERMARK for `bid_time` AS `bid_time` - INTERVAL 
'1' SECOND")
+.producedBeforeRestore(BEFORE_DATA)
+.producedAfterRestore(AFTER_DATA)
+.build();
+
+static final String[] SINK_SCHEMA = {
+"bid_time TIMESTAMP(3)",
+"price DECIMAL(10,2)",
+"item STRING",
+"supplier_id STRING",
+"window_start TIMESTAMP(3)",
+"window_end TIMESTAMP(3)",
+"row_num BIGINT"
+};
+
+static final String TUMBLE_TVF =
+"TABLE(TUMBLE(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '10' 
SECOND))";
+
+static final String HOP_TVF =
+"TABLE(HOP(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '5' SECOND, 
INTERVAL '10' SECOND))";
+
+static final String CUMULATIVE_TVF =
+"TABLE(CUMULATE(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '5' 
SECOND, INTERVAL '10' SECOND))";
+
+static final String ONE_ROW = "row_num <= 1";
+
+static final String N_ROWS = "row_num < 3";

Review Comment:
   Is this still converted to `WindowDeduplicate`? I think it tests 
`WindowRank`, doesn't it? The JSON plans seem to confirm this, as I cannot find 
the `WindowDeduplicate` node there.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
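
For context on the distinction raised in this review: the planner only 
translates a windowed {{ROW_NUMBER()}} filter into a window-deduplicate node 
when the predicate keeps a single row; keeping N > 1 rows yields a window 
Top-N ({{WindowRank}}) plan instead. A sketch in the style of the quoted test 
program, with the SQL shape assumed rather than copied:

{code:java}
// Translates to StreamExecWindowDeduplicate: at most one row per window key.
static final String DEDUPLICATE_QUERY =
        "SELECT * FROM ("
                + "  SELECT *, ROW_NUMBER() OVER ("
                + "    PARTITION BY window_start, window_end"
                + "    ORDER BY bid_time DESC) AS row_num"
                + "  FROM TABLE(TUMBLE(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '10' SECOND))"
                + ") WHERE row_num <= 1";

// Keeps up to two rows per window key, so the planner produces a window Top-N
// (StreamExecWindowRank) node rather than a WindowDeduplicate node.
static final String RANK_QUERY = DEDUPLICATE_QUERY.replace("row_num <= 1", "row_num < 3");
{code}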



Re: [PR] [hotfix] Set version to 1.1.0 and not 1.0.1 [flink-connector-shared-utils]

2023-12-15 Thread via GitHub


echauchot merged PR #31:
URL: https://github.com/apache/flink-connector-shared-utils/pull/31


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Set version to 1.1.0 and not 1.0.1 [flink-connector-shared-utils]

2023-12-15 Thread via GitHub


echauchot commented on PR #31:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/31#issuecomment-1857840750

   @snuyanzin regarding [our version 
question](https://github.com/apache/flink-connector-shared-utils/pull/28#issuecomment-1845318521),
 let's not bother @zentol. I found part of the answer in the Flink [release 
process](https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release),
 in the "Create a new version in JIRA" section: the process is to increment the 
minor version in the Jira version tag, and only PMC members can do so. So I 
guess we should use the minor version and not the patch version, as the PMC 
did. Merging this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


Jiabao-Sun commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1427932231


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) 
extends StreamingTestB
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup 
when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = 
Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);
 state match {

Review Comment:
   Thanks @snuyanzin for this hard work.
   
   But I am a little confused: why is `baseCheckpointPath` now defined as a 
local variable? Does the object field `baseCheckpointPath: File = _` no longer 
have any effect?
   
   I suspect that there are still some problems with reusing the `tempFolder` 
via `TempDirUtils.newFolder(tempFolder)` instead of directly using 
`Files.createTempDirectory("junit")`.
   I think the cleanup of `tempFolder` may still fail when 
`FileUtils.deleteDirectory(baseCheckpointPath)` meets a 
`DirectoryNotEmptyException`, because it is possible that some threads are 
still writing to the folder.
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
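
The race described above (cleanup failing with {{DirectoryNotEmptyException}} 
while a checkpoint thread is still writing) can be mitigated by retrying the 
recursive delete. A rough sketch of that idea, illustrative only and not the 
fix adopted in the PR:

{code:java}
import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

final class TempDirCleanup {

    /** Recursively deletes {@code dir}, retrying if a late writer races the cleanup. */
    static void deleteWithRetries(Path dir, int maxAttempts)
            throws IOException, InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                try (Stream<Path> paths = Files.walk(dir)) {
                    // Depth-first order: delete files before their parent directories.
                    List<Path> deepestFirst =
                            paths.sorted(Comparator.reverseOrder()).collect(Collectors.toList());
                    for (Path path : deepestFirst) {
                        Files.deleteIfExists(path);
                    }
                }
                return;
            } catch (DirectoryNotEmptyException e) {
                if (attempt >= maxAttempts) {
                    throw e;
                }
                Thread.sleep(100L * attempt); // back off; a writer may still be active
            }
        }
    }
}
{code}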



Re: [PR] test request [flink]

2023-12-15 Thread via GitHub


singhravidutt commented on PR #23933:
URL: https://github.com/apache/flink/pull/23933#issuecomment-185775

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33853][runtime][JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module [flink]

2023-12-15 Thread via GitHub


RocMarshal commented on code in PR #23932:
URL: https://github.com/apache/flink/pull/23932#discussion_r1427882696


##
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/DeclarativeSlotPoolBridgeResourceDeclarationTest.java:
##
@@ -141,40 +137,38 @@ public void 
testRequirementsDecreasedOnAllocationTimeout() throws Exception {
 .get();
 
 // waiting for the timeout
-assertThat(
-allocationFuture,
-
FlinkMatchers.futureWillCompleteExceptionally(Duration.ofMinutes(1)));
-
+
assertThatFuture(allocationFuture).failsWithin(Duration.ofMinutes(1));
+Thread.sleep(5L);

Review Comment:
   Nice catch! 
   Sorry about that; maybe it's a stray line from the cherry-pick.
   I'll remove it and check the other corresponding PRs.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
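
For reference, the migrated assertion style needs no sleep at all: the 
assertion itself waits up to the given timeout. A small self-contained 
illustration using plain AssertJ (Flink's {{FlinkAssertions.assertThatFuture}} 
offers the same idiom; the exception and names here are made up):

{code:java}
import static org.assertj.core.api.Assertions.assertThat;

import java.time.Duration;
import java.util.concurrent.CompletableFuture;

class FutureAssertionExample {

    void allocationTimeoutFails() {
        CompletableFuture<Void> allocationFuture = new CompletableFuture<>();
        allocationFuture.completeExceptionally(new IllegalStateException("allocation timed out"));

        // Replaces Hamcrest-style FlinkMatchers.futureWillCompleteExceptionally(...):
        // failsWithin blocks until completion or timeout, so no Thread.sleep is needed.
        assertThat(allocationFuture)
                .failsWithin(Duration.ofMinutes(1))
                .withThrowableOfType(IllegalStateException.class);
    }
}
{code}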



Re: [PR] [FLINK-33795] Add a new config to forbid autoscale execution in the configured excluded periods [flink-kubernetes-operator]

2023-12-15 Thread via GitHub


mxm commented on code in PR #728:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/728#discussion_r1426756716


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/utils/AutoScalerUtils.java:
##
@@ -94,4 +99,106 @@ public static boolean excludeVerticesFromScaling(
 conf.set(AutoScalerOptions.VERTEX_EXCLUDE_IDS, new 
ArrayList<>(excludedIds));
 return anyAdded;
 }
+
+/** Quartz doesn't have the invertTimeRange flag so rewrite this method. */
+static boolean isTimeIncluded(CronCalendar cron, long timeInMillis) {
+if (cron.getBaseCalendar() != null
+&& !cron.getBaseCalendar().isTimeIncluded(timeInMillis)) {
+return false;
+} else {
+return cron.getCronExpression().isSatisfiedBy(new 
Date(timeInMillis));
+}
+}
+
+static Optional<DailyCalendar> interpretAsDaily(String subExpression) {
+String[] splits = subExpression.split("-");
+if (splits.length != 2) {
+return Optional.empty();
+}
+try {
+DailyCalendar daily = new DailyCalendar(splits[0], splits[1]);
+daily.setInvertTimeRange(true);
+return Optional.of(daily);
+} catch (Exception e) {
+return Optional.empty();
+}
+}
+
+static Optional<CronCalendar> interpretAsCron(String subExpression) {
+try {
+return Optional.of(new CronCalendar(subExpression));
+} catch (Exception e) {
+return Optional.empty();

Review Comment:
   I think we should log this exception. Otherwise it is going to be hard to 
figure out what is wrong with the cron string. The same applies to all other 
methods in this class that have this pattern.



##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/utils/AutoScalerUtils.java:
##
@@ -94,4 +99,108 @@ public static boolean excludeVerticesFromScaling(
 conf.set(AutoScalerOptions.VERTEX_EXCLUDE_IDS, new 
ArrayList<>(excludedIds));
 return anyAdded;
 }
+
+/** Quartz doesn't have the invertTimeRange flag so rewrite this method. */
+static boolean isTimeIncluded(CronCalendar cron, long timeInMillis) {

Review Comment:
   Can we move these methods to a new class, e.g. `DateUtils` or `CronUtils`?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
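
A sketch of the logging suggested in the first comment above; the class name, 
logger setup, and message wording are assumptions, and only the catch block 
differs from the quoted diff:

{code:java}
import java.util.Optional;

import org.quartz.impl.calendar.CronCalendar;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class CronUtils {

    private static final Logger LOG = LoggerFactory.getLogger(CronUtils.class);

    static Optional<CronCalendar> interpretAsCron(String subExpression) {
        try {
            return Optional.of(new CronCalendar(subExpression));
        } catch (Exception e) {
            // Surface why the excluded-periods expression could not be parsed,
            // instead of silently returning an empty Optional.
            LOG.warn("Could not interpret '{}' as a cron expression", subExpression, e);
            return Optional.empty();
        }
    }
}
{code}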


