Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


Jiabao-Sun commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1428219337


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) extends StreamingTestB
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);
 state match {

Review Comment:
   Sorry if I'm wrong, please correct me.
   I still don't quite understand why we need to immediately delete the folder 
we just created.
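   
   For context, a runnable sketch of the pattern being questioned (the intent is 
an assumption on my side: create-then-delete yields a unique path that does not 
yet exist, which the state backend can later create itself):
   
   import java.nio.file.Files;
   import java.nio.file.Path;
   
   public class TempPathSketch {
       public static void main(String[] args) throws Exception {
           // Create a uniquely named directory, then immediately remove it:
           // the Path now names a location that is unique but absent.
           Path base = Files.createTempDirectory("checkpoints");
           Files.deleteIfExists(base);
           System.out.println(Files.exists(base)); // prints: false
       }
   }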



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33787][jdbc] Java 17 support for jdbc connector [flink-connector-jdbc]

2023-12-15 Thread via GitHub


eskabetxe commented on code in PR #82:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/82#discussion_r1428247514


##
.github/workflows/weekly.yml:
##
@@ -45,6 +45,14 @@ jobs:
   flink: 1.18.0,
   branch: v3.1
 }]
+jdk: [ 8, 11 ]
+include:
+  - flink: 1.18-SNAPSHOT
+branch: main
+jdk: 17

Review Comment:
   OK, I was thinking of something like this:
   
   matrix:
     flink_branches: [{
       flink: 1.17.1,
       branch: v3.1
     }, {
       flink: 1.18.0,
       branch: v3.1,
       jdk: [ 8, 11, 17 ]
     }]
     jdk: [ 8, 11 ]



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-30593][autoscaler] Improve restart time tracking [flink-kubernetes-operator]

2023-12-15 Thread via GitHub


afedulov opened a new pull request, #735:
URL: https://github.com/apache/flink-kubernetes-operator/pull/735

   This PR contains the following improvements to the restart tracking logic:
   * Adds more debug logs
   * Stores restart Duration directly instead of the endTime Instant
   * Fixes a bug that makes restart duration tracking dependent on whether 
metrics are considered fully collected


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33818] Implement restore tests for WindowDeduplicate node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23923:
URL: https://github.com/apache/flink/pull/23923#discussion_r1428333474


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowDeduplicateTestPrograms.java:
##
@@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.stream;
+
+import org.apache.flink.table.test.program.SinkTestStep;
+import org.apache.flink.table.test.program.SourceTestStep;
+import org.apache.flink.table.test.program.TableTestProgram;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+
+/** {@link TableTestProgram} definitions for testing {@link StreamExecWindowDeduplicate}. */
+public class WindowDeduplicateTestPrograms {
+
+static final Row[] BEFORE_DATA = {
+Row.of("2020-04-15 08:00:05", new BigDecimal(4.00), "A", "supplier1"),
+Row.of("2020-04-15 08:00:06", new BigDecimal(4.00), "A", "supplier2"),
+Row.of("2020-04-15 08:00:07", new BigDecimal(2.00), "G", "supplier1"),
+Row.of("2020-04-15 08:00:08", new BigDecimal(2.00), "A", "supplier3"),
+Row.of("2020-04-15 08:00:09", new BigDecimal(5.00), "D", "supplier4"),
+Row.of("2020-04-15 08:00:11", new BigDecimal(2.00), "B", "supplier3"),
+Row.of("2020-04-15 08:00:13", new BigDecimal(1.00), "E", "supplier1"),
+Row.of("2020-04-15 08:00:15", new BigDecimal(3.00), "B", "supplier2"),
+Row.of("2020-04-15 08:00:17", new BigDecimal(6.00), "D", "supplier5")
+};
+
+static final Row[] AFTER_DATA = {
+Row.of("2020-04-15 08:00:21", new BigDecimal(2.00), "B", "supplier7"),
+Row.of("2020-04-15 08:00:23", new BigDecimal(1.00), "A", "supplier4"),
+Row.of("2020-04-15 08:00:25", new BigDecimal(3.00), "C", "supplier3"),
+Row.of("2020-04-15 08:00:28", new BigDecimal(6.00), "A", "supplier8")
+};
+
+static final SourceTestStep SOURCE =
+SourceTestStep.newBuilder("bid_t")
+.addSchema(
+"ts STRING",
+"price DECIMAL(10,2)",
+"item STRING",
+"supplier_id STRING",
+"`bid_time` AS TO_TIMESTAMP(`ts`)",
+"`proc_time` AS PROCTIME()",
+"WATERMARK for `bid_time` AS `bid_time` - INTERVAL 
'1' SECOND")
+.producedBeforeRestore(BEFORE_DATA)
+.producedAfterRestore(AFTER_DATA)
+.build();
+
+static final String[] SINK_SCHEMA = {
+"bid_time TIMESTAMP(3)",
+"price DECIMAL(10,2)",
+"item STRING",
+"supplier_id STRING",
+"window_start TIMESTAMP(3)",
+"window_end TIMESTAMP(3)",
+"row_num BIGINT"
+};
+
+static final String TUMBLE_TVF =
+"TABLE(TUMBLE(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '10' 
SECOND))";
+
+static final String HOP_TVF =
+"TABLE(HOP(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '5' SECOND, 
INTERVAL '10' SECOND))";
+
+static final String CUMULATIVE_TVF =
+"TABLE(CUMULATE(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '5' 
SECOND, INTERVAL '10' SECOND))";
+
+static final String ONE_ROW = "row_num <= 1";
+
+static final String N_ROWS = "row_num < 3";

Review Comment:
   Yes, my bad. Will move this to the WindowRank PR.
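   
   For readers following along, a sketch of how the fragments above plausibly 
combine into the window-deduplication SQL under test (illustrative only, not 
code from the PR; the two constants are copied in so the snippet is 
self-contained):
   
   public class WindowDedupQuerySketch {
       static final String TUMBLE_TVF =
               "TABLE(TUMBLE(TABLE bid_t, DESCRIPTOR(bid_time), INTERVAL '10' SECOND))";
       static final String ONE_ROW = "row_num <= 1";
   
       public static void main(String[] args) {
           // Window deduplication keeps the first row per (window, item) pair.
           String query =
                   "SELECT * FROM ("
                           + " SELECT *, ROW_NUMBER() OVER ("
                           + " PARTITION BY window_start, window_end, item"
                           + " ORDER BY bid_time ASC) AS row_num"
                           + " FROM " + TUMBLE_TVF
                           + ") WHERE " + ONE_ROW;
           System.out.println(query);
       }
   }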



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33787][jdbc] Java 17 support for jdbc connector [flink-connector-jdbc]

2023-12-15 Thread via GitHub


snuyanzin commented on code in PR #82:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/82#discussion_r1428365706


##
.github/workflows/weekly.yml:
##
@@ -45,6 +45,14 @@ jobs:
   flink: 1.18.0,
   branch: v3.1
 }]
+jdk: [ 8, 11 ]
+include:
+  - flink: 1.18-SNAPSHOT
+branch: main
+jdk: 17

Review Comment:
   Seems I didn't get it at first, sorry.
   
   Yes, that's possible. The only difference is that it has to be a string, 
since YAML sequences are not allowed in nested matrix fields.
   Moreover, strings make the CI job view render in a nicer way.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33787][jdbc] Java 17 support for jdbc connector [flink-connector-jdbc]

2023-12-15 Thread via GitHub


snuyanzin commented on PR #82:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/82#issuecomment-1858347970

   Looks like GHA failed to download Flink; will retry a bit later.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32212) Job restarting indefinitely after an IllegalStateException from BlobLibraryCacheManager

2023-12-15 Thread Ricky Saltzer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797288#comment-17797288
 ] 

Ricky Saltzer commented on FLINK-32212:
---

Hitting this same error after moving our jobs from being manually deployed in 
K8s to using ArgoCD. However, our failure is a bit different: it's not failing 
to deploy, but endlessly restarting at a random time after running successfully 
(e.g. 24 hours later).

> Job restarting indefinitely after an IllegalStateException from 
> BlobLibraryCacheManager
> ---
>
> Key: FLINK-32212
> URL: https://issues.apache.org/jira/browse/FLINK-32212
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.16.1
> Environment: Apache Flink Kubernetes Operator 1.4
>Reporter: Matheus Felisberto
>Priority: Major
>
> After running for a few hours the job starts to throw IllegalStateException 
> and I can't figure out why. To restore the job, I need to manually delete the 
> FlinkDeployment to be recreated and redeploy everything.
> The jar is built-in into the docker image, hence is defined accordingly with 
> the Operator's documentation:
> {code:java}
> // jarURI: local:///opt/flink/usrlib/my-job.jar {code}
> I've tried to move it into /opt/flink/lib/my-job.jar but it didn't work 
> either. 
>  
> {code:java}
> // Source: my-topic (1/2)#30587 
> (b82d2c7f9696449a2d9f4dc298c0a008_bc764cd8ddf7a0cff126f51c16239658_0_30587) 
> switched from DEPLOYING to FAILED with failure cause: 
> java.lang.IllegalStateException: The library registration references a 
> different set of library BLOBs than previous registrations for this job:
> old:[p-5d91888083d38a3ff0b6c350f05a3013632137c6-7237ecbb12b0b021934b0c81aef78396]
> new:[p-5d91888083d38a3ff0b6c350f05a3013632137c6-943737c6790a3ec6870cecd652b956c2]
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$ResolvedClassLoader.verifyClassLoader(BlobLibraryCacheManager.java:419)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$ResolvedClassLoader.access$500(BlobLibraryCacheManager.java:359)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$LibraryCacheEntry.getOrResolveClassLoader(BlobLibraryCacheManager.java:235)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$LibraryCacheEntry.access$1100(BlobLibraryCacheManager.java:202)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$DefaultClassLoaderLease.getOrResolveClassLoader(BlobLibraryCacheManager.java:336)
>     at 
> org.apache.flink.runtime.taskmanager.Task.createUserCodeClassloader(Task.java:1024)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:612)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>     at java.base/java.lang.Thread.run(Unknown Source) {code}
> If there is any other information that can help to identify the problem, 
> please let me know.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33859) Support OpenSearch v2

2023-12-15 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33859:
---

 Summary: Support OpenSearch v2
 Key: FLINK-33859
 URL: https://issues.apache.org/jira/browse/FLINK-33859
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Opensearch
Affects Versions: opensearch-1.2.0
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


The main issue is that OpenSearch v2 introduced several breaking changes, like
[https://github.com/opensearch-project/OpenSearch/pull/9082]
[https://github.com/opensearch-project/OpenSearch/pull/5902]

which make the current connector version fail while communicating with v2.

Also, it would make sense to add integration and e2e tests against v2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33859] Support OpenSearch v2 [flink-connector-opensearch]

2023-12-15 Thread via GitHub


snuyanzin opened a new pull request, #38:
URL: https://github.com/apache/flink-connector-opensearch/pull/38

   The PR adds support for OpenSearch v2.
   
   Since OpenSearch v2 introduced breaking changes, there is no way to have one 
jar that works for both v1 and v2.
   For that reason the connector is now split the same way as for Elasticsearch: 
one jar for v1, another for v2.
   
   The connector name for v1 is the same as before, `opensearch`; for v2 it is 
`opensearch-2`.
   
   Since v2 is Java 11 based, there is a java11 Maven profile that makes the v2 
connector build only on Java 11+. There are some attempts on the OpenSearch side 
to improve this situation; if they succeed, building with Java 8 for OpenSearch 
v2 could easily be enabled by removing that profile.
   
   The PR also bumps the Flink dependency to 1.18.0. The reason is incompatible 
ArchUnit changes, which make the code pass the ArchUnit tests either only for 
1.17 or only for 1.18/1.19.
   
   It also adds support for Java 17.
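   
   For illustration only, a minimal sketch of how the two connector identifiers 
might be used from SQL (hypothetical table name and option values; it assumes 
the v2 connector jar is on the classpath and that the `hosts`/`index` options 
follow the existing opensearch connector):
   
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class OpensearchConnectorSketch {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           // 'opensearch-2' selects the v2 jar described above; 'opensearch'
           // keeps selecting the v1 jar.
           tEnv.executeSql(
                   "CREATE TABLE os_sink ("
                           + " user_id STRING,"
                           + " message STRING"
                           + ") WITH ("
                           + " 'connector' = 'opensearch-2',"
                           + " 'hosts' = 'http://localhost:9200',"
                           + " 'index' = 'users'"
                           + ")");
       }
   }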
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33859) Support OpenSearch v2

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33859:
---
Labels: pull-request-available  (was: )

> Support OpenSearch v2
> -
>
> Key: FLINK-33859
> URL: https://issues.apache.org/jira/browse/FLINK-33859
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.2.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> The main issue is that in OpenSearch v2 there were several breaking changes 
> like 
> [https://github.com/opensearch-project/OpenSearch/pull/9082]
> [https://github.com/opensearch-project/OpenSearch/pull/5902]
> which made current connector version failing while communicating with v2
>  
> Also it would make sense to add integration and e2e tests to test against v2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33860) Implement restore tests for WindowTableFunction node

2023-12-15 Thread Bonnie Varghese (Jira)
Bonnie Varghese created FLINK-33860:
---

 Summary: Implement restore tests for WindowTableFunction node
 Key: FLINK-33860
 URL: https://issues.apache.org/jira/browse/FLINK-33860
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Bonnie Varghese
Assignee: Bonnie Varghese






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33861) Implement restore tests for WindowRank node

2023-12-15 Thread Bonnie Varghese (Jira)
Bonnie Varghese created FLINK-33861:
---

 Summary: Implement restore tests for WindowRank node
 Key: FLINK-33861
 URL: https://issues.apache.org/jira/browse/FLINK-33861
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Bonnie Varghese
Assignee: Bonnie Varghese






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498126


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window table function. */
-class WindowTableFunctionJsonPlanTest extends TableTestBase {
-
-private StreamTableTestUtil util;
-private TableEnvironment tEnv;
-
-@BeforeEach
-void setup() {
-util = streamTestUtil(TableConfig.getDefault());
-tEnv = util.getTableEnv();
-
-String srcTable1Ddl =
-"CREATE TABLE MyTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable1Ddl);
-
-String srcTable2Ddl =
-"CREATE TABLE MyTable2 (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable2Ddl);
-}
-
-@Test
-void testIndividualWindowTVF() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testIndividualWindowTVFProcessingTime() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(proctime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testFollowedByWindowJoin() {

Review Comment:
   This test is covered as part of the WindowJoin restore tests - 
https://github.com/apache/flink/pull/23918



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 opened a new pull request, #23936:
URL: https://github.com/apache/flink/pull/23936

   
   
   ## What is the purpose of the change
   
   *Add restore tests for WindowTableFunction node*
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - Added restore tests for WindowTableFunction node which verifies the 
generated compiled plan with the saved compiled plan
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33860) Implement restore tests for WindowTableFunction node

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33860:
---
Labels: pull-request-available  (was: )

> Implement restore tests for WindowTableFunction node
> 
>
> Key: FLINK-33860
> URL: https://issues.apache.org/jira/browse/FLINK-33860
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498415


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window table function. */
-class WindowTableFunctionJsonPlanTest extends TableTestBase {
-
-private StreamTableTestUtil util;
-private TableEnvironment tEnv;
-
-@BeforeEach
-void setup() {
-util = streamTestUtil(TableConfig.getDefault());
-tEnv = util.getTableEnv();
-
-String srcTable1Ddl =
-"CREATE TABLE MyTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable1Ddl);
-
-String srcTable2Ddl =
-"CREATE TABLE MyTable2 (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable2Ddl);
-}
-
-@Test
-void testIndividualWindowTVF() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testIndividualWindowTVFProcessingTime() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(proctime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testFollowedByWindowJoin() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3) NOT NULL,\n"
-+ " window_end TIMESTAMP(3) NOT

Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


bvarghese1 commented on code in PR #23936:
URL: https://github.com/apache/flink/pull/23936#discussion_r1428498672


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowTableFunctionJsonPlanTest.java:
##
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window table function. */
-class WindowTableFunctionJsonPlanTest extends TableTestBase {
-
-private StreamTableTestUtil util;
-private TableEnvironment tEnv;
-
-@BeforeEach
-void setup() {
-util = streamTestUtil(TableConfig.getDefault());
-tEnv = util.getTableEnv();
-
-String srcTable1Ddl =
-"CREATE TABLE MyTable (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable1Ddl);
-
-String srcTable2Ddl =
-"CREATE TABLE MyTable2 (\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR,\n"
-+ " `rowtime` AS TO_TIMESTAMP(c),\n"
-+ " proctime as PROCTIME(),\n"
-+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL 
'1' SECOND\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(srcTable2Ddl);
-}
-
-@Test
-void testIndividualWindowTVF() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(rowtime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testIndividualWindowTVFProcessingTime() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3),\n"
-+ " window_end TIMESTAMP(3),\n"
-+ " a INT,\n"
-+ " b BIGINT,\n"
-+ " c VARCHAR\n"
-+ ") WITH (\n"
-+ " 'connector' = 'values')\n";
-tEnv.executeSql(sinkTableDdl);
-util.verifyJsonPlan(
-"insert into MySink select\n"
-+ "  window_start,\n"
-+ "  window_end,\n"
-+ "  a,\n"
-+ "  b,\n"
-+ "  c\n"
-+ "FROM TABLE(TUMBLE(TABLE MyTable, 
DESCRIPTOR(proctime), INTERVAL '15' MINUTE))");
-}
-
-@Test
-void testFollowedByWindowJoin() {
-String sinkTableDdl =
-"CREATE TABLE MySink (\n"
-+ " window_start TIMESTAMP(3) NOT NULL,\n"
-+ " window_end TIMESTAMP(3) NOT

Re: [PR] [FLINK-33860] Implement restore tests for WindowTableFunction node [flink]

2023-12-15 Thread via GitHub


flinkbot commented on PR #23936:
URL: https://github.com/apache/flink/pull/23936#issuecomment-1858509241

   
   ## CI report:
   
   * 48235d00b884cb7fbd7704556b8f8014c4350e0a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33858) CI fails with No space left on device

2023-12-15 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-33858:
---

Assignee: (was: Jing Ge)

> CI fails with No space left on device
> -
>
> Key: FLINK-33858
> URL: https://issues.apache.org/jira/browse/FLINK-33858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Reporter: Sergey Nuyanzin
>Priority: Blocker
>
> AlibabaCI003-agent01
> AlibabaCI003-agent03
> AlibabaCI003-agent05
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=9765]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32949][core]collect tm port binding with TaskManagerOptions [flink]

2023-12-15 Thread via GitHub


JingGe commented on PR #23870:
URL: https://github.com/apache/flink/pull/23870#issuecomment-1858528411

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33611) Support Large Protobuf Schemas

2023-12-15 Thread Sai Sharath Dandi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Sharath Dandi updated FLINK-33611:
--
Summary: Support Large Protobuf Schemas  (was: Add the ability to reuse 
variable names across different split method scopes)

> Support Large Protobuf Schemas
> --
>
> Key: FLINK-33611
> URL: https://issues.apache.org/jira/browse/FLINK-33611
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Sai Sharath Dandi
>Assignee: Sai Sharath Dandi
>Priority: Major
>
> h3. Background
> Flink serializes and deserializes protobuf format data by calling the decode 
> or encode method in GeneratedProtoToRow_XXX.java generated by codegen to 
> parse byte[] data into Protobuf Java objects. FLINK-32650 has introduced the 
> ability to split the generated code to improve the performance for large 
> Protobuf schemas. However, this is still not sufficient to support some 
> larger protobuf schemas as the generated code exceeds the java constant pool 
> size [limit|https://en.wikipedia.org/wiki/Java_class_file#The_constant_pool] 
> and we can see errors like "Too many constants" when trying to compile the 
> generated code. 
> *Solution*
> Since we already have the split code functionality already introduced, the 
> main proposal here is to now reuse the variable names across different split 
> method scopes. This will greatly reduce the constant pool size. One more 
> optimization is to only split the last code segment also only when the size 
> exceeds split threshold limit. Currently, the last segment of the generated 
> code is always being split which can lead to too many split methods and thus 
> exceed the constant pool size limit
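
A minimal sketch of the variable-name reuse described above (illustrative only, 
not the actual codegen output): each distinct identifier becomes a UTF8 entry in 
the class constant pool, so letting every split method reuse the same local 
names in its own scope keeps the pool small no matter how many splits exist.

{code:java}
public class SplitMethodNamingSketch {
    // Before: names were globally unique (var1000, var1001, ...), each adding
    // a new constant-pool entry. After: the same names recur per method scope,
    // so "val0"/"val1" are stored only once per class.
    private static void decodeSplit0(Object[] row) {
        long val0 = 1L;      // reused name, fresh scope
        String val1 = "a";   // reused name, fresh scope
        row[0] = val0;
        row[1] = val1;
    }

    private static void decodeSplit1(Object[] row) {
        long val0 = 2L;      // same identifiers as in decodeSplit0
        String val1 = "b";
        row[2] = val0;
        row[3] = val1;
    }

    public static void main(String[] args) {
        Object[] row = new Object[4];
        decodeSplit0(row);
        decodeSplit1(row);
        System.out.println(java.util.Arrays.toString(row));
    }
}
{code}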



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33704][BP 1.18][Filesytems] Update GCS filesystems to latest available versions [flink]

2023-12-15 Thread via GitHub


snuyanzin commented on PR #23935:
URL: https://github.com/apache/flink/pull/23935#issuecomment-1858552817

   >Unchanged backport of https://github.com/apache/flink/pull/2383
   
   Is this really the correct link?
   For me it shows something from 2016...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-33611] [flink-protobuf] Support Large Protobuf Schemas [flink]

2023-12-15 Thread via GitHub


sharath1709 opened a new pull request, #23937:
URL: https://github.com/apache/flink/pull/23937

   
   
   ## What is the purpose of the change
   
   This change is made to support large Protobuf schemas
   
   
   ## Brief change log
   
   
   - [FLINK-33611] [flink-protobuf] Reuse variable names across different 
split method scopes in serializer
   - [FLINK-33611] [flink-protobuf] Split last segment only when size 
exceeds split threshold limit in serializer
   - [FLINK-33611] [flink-protobuf] Split last segment only when size 
exceeds split threshold limit in deserializer
   - [FLINK-33611] [flink-protobuf] Reuse variable names across different 
split method scopes in deserializer
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): No
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: No
 - The serializers: Yes
 - The runtime per-record code paths (performance sensitive): No
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: No
 - The S3 file system connector: No
   
   ## Documentation
   
 - Does this pull request introduce a new feature? Yes
 - If yes, how is the feature documented? Not Applicable as the feature is 
fully transparent to users
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33611) Support Large Protobuf Schemas

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33611:
---
Labels: pull-request-available  (was: )

> Support Large Protobuf Schemas
> --
>
> Key: FLINK-33611
> URL: https://issues.apache.org/jira/browse/FLINK-33611
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Sai Sharath Dandi
>Assignee: Sai Sharath Dandi
>Priority: Major
>  Labels: pull-request-available
>
> h3. Background
> Flink serializes and deserializes protobuf format data by calling the decode 
> or encode method in GeneratedProtoToRow_XXX.java generated by codegen to 
> parse byte[] data into Protobuf Java objects. FLINK-32650 has introduced the 
> ability to split the generated code to improve the performance for large 
> Protobuf schemas. However, this is still not sufficient to support some 
> larger protobuf schemas as the generated code exceeds the java constant pool 
> size [limit|https://en.wikipedia.org/wiki/Java_class_file#The_constant_pool] 
> and we can see errors like "Too many constants" when trying to compile the 
> generated code. 
> *Solution*
> Since we already have the split code functionality already introduced, the 
> main proposal here is to now reuse the variable names across different split 
> method scopes. This will greatly reduce the constant pool size. One more 
> optimization is to only split the last code segment also only when the size 
> exceeds split threshold limit. Currently, the last segment of the generated 
> code is always being split which can lead to too many split methods and thus 
> exceed the constant pool size limit



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33611] [flink-protobuf] Support Large Protobuf Schemas [flink]

2023-12-15 Thread via GitHub


flinkbot commented on PR #23937:
URL: https://github.com/apache/flink/pull/23937#issuecomment-1858559404

   
   ## CI report:
   
   * e474d8114a2328c5d529d8e1084d685906ee502e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33787][jdbc] Java 17 support for jdbc connector [flink-connector-jdbc]

2023-12-15 Thread via GitHub


snuyanzin merged PR #82:
URL: https://github.com/apache/flink-connector-jdbc/pull/82


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Bump org.apache.commons:commons-compress from 1.23.0 to 1.24.0 [flink-connector-jdbc]

2023-12-15 Thread via GitHub


dependabot[bot] opened a new pull request, #84:
URL: https://github.com/apache/flink-connector-jdbc/pull/84

   Bumps org.apache.commons:commons-compress from 1.23.0 to 1.24.0.
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.apache.commons:commons-compress&package-manager=maven&previous-version=1.23.0&new-version=1.24.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-jdbc/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33787) Java 17 support for jdbc connector

2023-12-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797330#comment-17797330
 ] 

Sergey Nuyanzin commented on FLINK-33787:
-

Merged as 
[f8de82b4c52a688c5bd36c4c4bd3012ff4081eb8|https://github.com/apache/flink-connector-jdbc/commit/f8de82b4c52a688c5bd36c4c4bd3012ff4081eb8]

> Java 17 support for jdbc connector
> --
>
> Key: FLINK-33787
> URL: https://issues.apache.org/jira/browse/FLINK-33787
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: jdbc-3.1.1
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33787) Java 17 support for jdbc connector

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33787.
-
Fix Version/s: jdbc-3.2.0
   Resolution: Fixed

> Java 17 support for jdbc connector
> --
>
> Key: FLINK-33787
> URL: https://issues.apache.org/jira/browse/FLINK-33787
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: jdbc-3.1.1
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33779][table] Cleanup usage of deprecated BaseExpressions#cast [flink]

2023-12-15 Thread via GitHub


snuyanzin merged PR #23895:
URL: https://github.com/apache/flink/pull/23895


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-33779) Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin reassigned FLINK-33779:
---

Assignee: Jacky Lau

> Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)
> -
>
> Key: FLINK-33779
> URL: https://issues.apache.org/jira/browse/FLINK-33779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33779) Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33779.
---

> Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)
> -
>
> Key: FLINK-33779
> URL: https://issues.apache.org/jira/browse/FLINK-33779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33779) Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)

2023-12-15 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33779.
-
Resolution: Fixed

> Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)
> -
>
> Key: FLINK-33779
> URL: https://issues.apache.org/jira/browse/FLINK-33779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33779) Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)

2023-12-15 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797332#comment-17797332
 ] 

Sergey Nuyanzin commented on FLINK-33779:
-

Merged to master as 
[ef2b626d67147797e992ec3b338bafdb4e5ab1c7|https://github.com/apache/flink/commit/ef2b626d67147797e992ec3b338bafdb4e5ab1c7]

> Cleanup usage of deprecated BaseExpressions#cast(TypeInformation)
> -
>
> Key: FLINK-33779
> URL: https://issues.apache.org/jira/browse/FLINK-33779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33825) Create a new version in JIRA

2023-12-15 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17797333#comment-17797333
 ] 

Jing Ge commented on FLINK-33825:
-

thanks!

> Create a new version in JIRA
> 
>
> Key: FLINK-33825
> URL: https://issues.apache.org/jira/browse/FLINK-33825
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.18.1
>Reporter: Jing Ge
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.18.1
>
>
> When contributors resolve an issue in JIRA, they are tagging it with a 
> release that will contain their changes. With the release currently underway, 
> new issues should be resolved against a subsequent future release. Therefore, 
> you should create a release item for this subsequent release, as follows:
>  # In JIRA, navigate to the [Flink > Administration > 
> Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
>  # Add a new release: choose the next minor version number compared to the 
> one currently underway, select today’s date as the Start Date, and choose Add.
> (Note: Only PMC members have access to the project administration. If you do 
> not have access, ask on the mailing list for assistance.)
>  
> 
> h3. Expectations
>  * The new version should be listed in the dropdown menu of {{fixVersion}} or 
> {{affectedVersion}} under "unreleased versions" when creating a new Jira 
> issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Make connector being tested against latest OpenSearch 1.x and 2.x [flink-connector-opensearch]

2023-12-15 Thread via GitHub


snuyanzin closed pull request #36: Make connector being tested against latest 
OpenSearch 1.x and 2.x
URL: https://github.com/apache/flink-connector-opensearch/pull/36


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33807] Update JUnit to 5.10.1 [flink]

2023-12-15 Thread via GitHub


snuyanzin commented on code in PR #23917:
URL: https://github.com/apache/flink/pull/23917#discussion_r1428571302


##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/utils/StreamingWithStateTestBase.scala:
##
@@ -68,8 +69,8 @@ class StreamingWithStateTestBase(state: StateBackendMode) 
extends StreamingTestB
 super.before()
 // set state backend
 
-// subfolder are managed here because the tests could fail during cleanup 
when concurrently executed (see FLINK-33820)
-baseCheckpointPath = TempDirUtils.newFolder(tempFolder)
+val baseCheckpointPath = 
Files.createTempDirectory(getClass.getCanonicalName)
+Files.deleteIfExists(baseCheckpointPath);
 state match {

Review Comment:
   Oops.. :see_no_evil: so far I thought it behaved like 
`baseCheckpointPath.toFile.deleteOnExit();`. Thanks, and sorry about that!
   Now changed it.
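   For context, a minimal sketch (not part of the PR; `demoDir` and `exitDir` 
are hypothetical names) contrasting the two cleanup behaviors discussed above:
   ```java
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;

   public class TempDirCleanupDemo {
       public static void main(String[] args) throws IOException {
           // Files.deleteIfExists removes the (still empty) directory right
           // away, so only the unique path name is kept for later use.
           Path demoDir = Files.createTempDirectory("demo");
           Files.deleteIfExists(demoDir);

           // deleteOnExit, by contrast, only schedules deletion for JVM
           // shutdown; the directory keeps existing while the JVM runs.
           Path exitDir = Files.createTempDirectory("demo-exit");
           exitDir.toFile().deleteOnExit();
       }
   }
   ```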



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33611] [flink-protobuf] Support Large Protobuf Schemas [flink]

2023-12-15 Thread via GitHub


sharath1709 commented on PR #23937:
URL: https://github.com/apache/flink/pull/23937#issuecomment-1858595168

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33859] Support OpenSearch v2 [flink-connector-opensearch]

2023-12-15 Thread via GitHub


snuyanzin commented on PR #38:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/38#issuecomment-1858607663

   I'm curious whether a multi-release jar supports the case where the same 
Flink cluster needs to use both the OpenSearch v1 and OpenSearch v2 
connectors, and both are built with JDK 11, for instance?
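   For reference, a small probe (hypothetical class and entry names, not part 
of the connector) showing how the JDK resolves entries in a multi-release jar 
at runtime:
   ```java
   import java.io.File;
   import java.io.IOException;
   import java.util.jar.JarEntry;
   import java.util.jar.JarFile;

   public class MrJarProbe {
       public static void main(String[] args) throws IOException {
           // Opening the jar with the current runtime version makes entries
           // under META-INF/versions/<N>/ shadow the base entries.
           try (JarFile jar = new JarFile(new File(args[0]), true,
                   JarFile.OPEN_READ, Runtime.version())) {
               System.out.println("multi-release: " + jar.isMultiRelease());
               JarEntry e = jar.getJarEntry("org/example/Some.class"); // hypothetical
               if (e != null) {
                   // For an MR jar this may print a META-INF/versions/... path.
                   System.out.println("resolved: " + e.getRealName());
               }
           }
       }
   }
   ```
   Note that versioned-entry selection happens per jar, so it is orthogonal to 
whether two connectors' classes can coexist on the same classpath.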


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33565) The concurrentExceptions doesn't work

2023-12-15 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-33565:

Fix Version/s: 1.19.0

> The concurrentExceptions doesn't work
> -
>
> Key: FLINK-33565
> URL: https://issues.apache.org/jira/browse/FLINK-33565
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0, 1.17.1
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
> Fix For: 1.19.0
>
>
> First of all, thanks to [~mapohl] for helping double-check in advance that 
> this was indeed a bug.
> Displaying the exception history in the WebUI is supported in FLINK-6042.
> h1. What are concurrentExceptions?
> When an execution fails due to an exception, other executions in the same 
> region will also restart, and the first exception is the rootException. If 
> other restarted executions also report exceptions at this time, we want to 
> collect these exceptions and display them to the user as concurrentExceptions.
> h2. What's this bug?
> The concurrentExceptions list is always empty in production, even if other 
> executions report exceptions at very close times.
> h1. Why doesn't it work?
> If a job has an all-to-all shuffle, it has only one region, and this region 
> has a lot of executions. If one execution throws an exception:
>  * The JobMaster will mark the state as FAILED for this execution.
>  * The rest of the executions of this region will be marked CANCELING.
>  ** This call stack can be found at FLIP-364 
> [part-4.2.3|https://cwiki.apache.org/confluence/display/FLINK/FLIP-364%3A+Improve+the+restart-strategy#FLIP364:Improvetherestartstrategy-4.2.3Detailedcodeforfull-failover]
>  
> When these executions throw exceptions as well, the JobMaster will move 
> their state from CANCELING to CANCELED instead of FAILED.
> CANCELED executions don't go through the FAILED logic, so their exceptions 
> are ignored.
> Note: all reports are processed inside the JobMaster RPC thread, which is a 
> single thread, so they are handled serially. Only one execution is marked 
> FAILED, and the rest of the executions are marked CANCELED later.
> h1. How to fix it?
> After discussing offline with [~mapohl], we first need to discuss with the 
> community whether we should keep the concurrentExceptions:
>  * If not, we can remove the related logic directly.
>  * If yes, we can discuss how to fix it later.
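
A minimal sketch (simplified classes, not Flink's actual code) of the failure 
handling described above; only the FAILED transition records an exception, so 
peers that were already CANCELING lose theirs:
{code:java}
import java.util.ArrayList;
import java.util.List;

public class ConcurrentExceptionsSketch {
    enum State { RUNNING, CANCELING, FAILED, CANCELED }

    static final List<Throwable> exceptionHistory = new ArrayList<>();

    static State onTaskFailure(State current, Throwable t) {
        if (current == State.CANCELING) {
            // A peer failed first and this execution was already being
            // canceled: it becomes CANCELED and its exception is dropped.
            return State.CANCELED;
        }
        exceptionHistory.add(t); // only the FAILED path keeps the exception
        return State.FAILED;
    }

    public static void main(String[] args) {
        // First failure in the region: recorded as the root exception.
        State first = onTaskFailure(State.RUNNING, new RuntimeException("root"));
        // A concurrent failure arrives while the peer is CANCELING: dropped.
        State peer = onTaskFailure(State.CANCELING, new RuntimeException("peer"));
        System.out.println(first + " " + peer + ", recorded=" + exceptionHistory.size());
    }
}
{code}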



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33862) Flink Unit Test Failures on 1.18.0

2023-12-15 Thread Prabhu Joseph (Jira)
Prabhu Joseph created FLINK-33862:
-

 Summary: Flink Unit Test Failures on 1.18.0
 Key: FLINK-33862
 URL: https://issues.apache.org/jira/browse/FLINK-33862
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: Prabhu Joseph


Flink Unit Test Failures on 1.18.0. There are 100+ unit test cases failing due 
to the common issues below.

*Issue 1*
{code:java}
./mvnw -DfailIfNoTests=false -Dmaven.test.failure.ignore=true 
-Dtest=ExecutionPlanAfterExecutionTest test

[INFO] Running org.apache.flink.client.program.ExecutionPlanAfterExecutionTest
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at 
org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at 
org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1267)
at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$1(ClassLoadingUtils.java:93)
at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at 
org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at 
java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at 
org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at 
org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$DirectExecutionContext.execute(ScalaFutureUtils.java:65)
at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
at 
scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
at 
scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
at 
org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:34)
at 
org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:33)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at 
org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73)
at 
org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
at 
org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110)
at 
org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59)
at 
org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$Pe

[jira] [Created] (FLINK-33863) Compressed Operator state restore failed

2023-12-15 Thread Ruibin Xing (Jira)
Ruibin Xing created FLINK-33863:
---

 Summary: Compressed Operator state restore failed
 Key: FLINK-33863
 URL: https://issues.apache.org/jira/browse/FLINK-33863
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.18.0
Reporter: Ruibin Xing


We encountered an issue when using Flink 1.18.0. Our job enabled Snapshot 
Compression and used multiple Operator States in an operator. When recovering 
Operator State from a Savepoint, the following error occurred: 
"org.xerial.snappy.SnappyFramedInputStream: encountered EOF while reading 
stream header."

After researching, I believe the error is due to Flink 1.18.0's support for 
Snapshot Compression on Operator State (see 
https://issues.apache.org/jira/browse/FLINK-30113 ). When writing a Savepoint, 
SnappyFramedOutputStream adds a header to the beginning of the data. When 
recovering Operator State from a Savepoint, SnappyFramedInputStream verifies 
the header at the beginning of the data.

Currently, when recovering Operator State with Snapshot Compression enabled, 
the logic is as follows.
For each OperatorStateHandle:
1. Verify that the data at the current Savepoint stream's offset is the 
Snappy header.
2. Seek to the state's start offset.
3. Read the state's data and finally seek to the state's end offset.
(See: 
https://github.com/apache/flink/blob/ef2b626d67147797e992ec3b338bafdb4e5ab1c7/flink-runtime/src/main/java/org/apache/flink/runtime/state/OperatorStateRestoreOperation.java#L172
 )

Furthermore, when there are multiple Operator States, they are not sorted 
according to their offsets. Therefore, if the Operator States are out of order 
and the state with the last offset is recovered first, the Savepoint stream is 
seeked to the end, resulting in an EOF error.

I propose a solution: sort the OperatorStateHandles by offset and then recover 
the Operator States in order. After testing, this approach resolves the issue.

I will submit a PR. This is my first time contributing code, so any help is 
really appreciated.
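
A minimal sketch of the proposed ordering (hypothetical types, not the actual 
PR code); restoring in ascending start offset keeps the compressed stream 
reading strictly forward:
{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortStatesByOffset {
    // Hypothetical stand-in for the per-state metadata kept in a handle.
    static class StateMeta {
        final String name;
        final long[] offsets; // offsets of this state's partitions in the stream

        StateMeta(String name, long... offsets) {
            this.name = name;
            this.offsets = offsets;
        }

        long firstOffset() {
            return offsets.length == 0 ? Long.MAX_VALUE : offsets[0];
        }
    }

    // The fix: sort ascending by first offset, so the stream is never seeked
    // to a later state's end before an earlier state has been read.
    static List<StateMeta> sortedForRestore(List<StateMeta> states) {
        List<StateMeta> sorted = new ArrayList<>(states);
        sorted.sort(Comparator.comparingLong(StateMeta::firstOffset));
        return sorted;
    }

    public static void main(String[] args) {
        List<StateMeta> metas = new ArrayList<>();
        metas.add(new StateMeta("state-b", 4096L));
        metas.add(new StateMeta("state-a", 128L));
        sortedForRestore(metas).forEach(m -> System.out.println(m.name));
    }
}
{code}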



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33863] Fix restoring compressed operator state [flink]

2023-12-15 Thread via GitHub


ruibinx opened a new pull request, #23938:
URL: https://github.com/apache/flink/pull/23938

   ## What is the purpose of the change
   
   This PR fixes an issue related to restoring multiple operator states when 
snapshot compression is enabled. see: 
https://issues.apache.org/jira/browse/FLINK-30113
   
   
   ## Brief change log
   
   - sort the operator states by offsets before restoring.
   
   
   ## Verifying this change
   Still working on tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):  no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: yes
 - The S3 file system connector: no
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33863) Compressed Operator state restore failed

2023-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33863:
---
Labels: pull-request-available  (was: )

> Compressed Operator state restore failed
> 
>
> Key: FLINK-33863
> URL: https://issues.apache.org/jira/browse/FLINK-33863
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Ruibin Xing
>Priority: Major
>  Labels: pull-request-available
>
> We encountered an issue when using Flink 1.18.0. Our job enabled Snapshot 
> Compression and used multiple Operator States in an operator. When recovering 
> Operator State from a Savepoint, the following error occurred: 
> "org.xerial.snappy.SnappyFramedInputStream: encountered EOF while reading 
> stream header."
> After researching, I believe the error is due to Flink 1.18.0's support for 
> Snapshot Compression on Operator State (see 
> https://issues.apache.org/jira/browse/FLINK-30113 ). When writing a 
> Savepoint, SnappyFramedOutputStream adds a header to the beginning of the 
> data. When recovering Operator State from a Savepoint, 
> SnappyFramedInputStream verifies the header at the beginning of the data.
> Currently, when recovering Operator State with Snapshot Compression enabled, 
> the logic is as follows.
> For each OperatorStateHandle:
> 1. Verify that the data at the current Savepoint stream's offset is the 
> Snappy header.
> 2. Seek to the state's start offset.
> 3. Read the state's data and finally seek to the state's end offset.
> (See: 
> https://github.com/apache/flink/blob/ef2b626d67147797e992ec3b338bafdb4e5ab1c7/flink-runtime/src/main/java/org/apache/flink/runtime/state/OperatorStateRestoreOperation.java#L172
>  )
> Furthermore, when there are multiple Operator States, they are not sorted 
> according to their offsets. Therefore, if the Operator States are out of 
> order and the state with the last offset is recovered first, the Savepoint 
> stream is seeked to the end, resulting in an EOF error.
> I propose a solution: sort the OperatorStateHandles by offset and then 
> recover the Operator States in order. After testing, this approach resolves 
> the issue.
> I will submit a PR. This is my first time contributing code, so any help is 
> really appreciated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33863] Fix restoring compressed operator state [flink]

2023-12-15 Thread via GitHub


flinkbot commented on PR #23938:
URL: https://github.com/apache/flink/pull/23938#issuecomment-1858739524

   
   ## CI report:
   
   * 052dab59db8f519765c51228f87277bd85b83a45 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33862) Flink Unit Test Failures on 1.18.0

2023-12-15 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated FLINK-33862:
--
Affects Version/s: 1.19.0

> Flink Unit Test Failures on 1.18.0
> --
>
> Key: FLINK-33862
> URL: https://issues.apache.org/jira/browse/FLINK-33862
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> Flink Unit Test Failures on 1.18.0. There are 100+ unit test cases failing 
> due to the common issues below.
> *Issue 1*
> {code:java}
> ./mvnw -DfailIfNoTests=false -Dmaven.test.failure.ignore=true 
> -Dtest=ExecutionPlanAfterExecutionTest test
> [INFO] Running org.apache.flink.client.program.ExecutionPlanAfterExecutionTest
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at 
> org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1267)
>   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$1(ClassLoadingUtils.java:93)
>   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at 
> org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
>   at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
>   at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
>   at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
>   at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
>   at 
> org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$DirectExecutionContext.execute(ScalaFutureUtils.java:65)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
>   at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
>   at 
> org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:34)
>   at 
> org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:33)
>   at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
>   at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
>   at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
>   at 
> org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73)
>   at 
> org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at 
> scala.concurrent.BlockContext$.with
