Re: [PR] [FLINK-33545][Connectors/Kafka] KafkaSink implementation can cause data loss during broker issues when not using EXACTLY_ONCE if there's any batching [flink-connector-kafka]
hhktseng commented on PR #70: URL: https://github.com/apache/flink-connector-kafka/pull/70#issuecomment-1899929011
> @hhktseng Can you rebase your PR?
@MartijnVisser can you point me to which commit to rebase onto? Thanks.
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34128] [bugfix] Some JDBC objects cannot be obtained properly in Oracle JDBC; specify the required type explicitly [flink-connector-jdbc]
BlackPigHe commented on PR #91: URL: https://github.com/apache/flink-connector-jdbc/pull/91#issuecomment-1899926723
@snuyanzin Building this test case requires using JDBC to connect to Oracle, which can be difficult to simulate. Can I simulate it with mock data? haha
Re: [PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]
libenchao commented on code in PR #79: URL: https://github.com/apache/flink-connector-jdbc/pull/79#discussion_r1457562374

## flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/statement/FieldNamedPreparedStatementImpl.java:
## @@ -178,26 +178,42 @@ public void close() throws SQLException { //
public static FieldNamedPreparedStatement prepareStatement(
-Connection connection, String sql, String[] fieldNames) throws SQLException {
+Connection connection,
+String sql,
+String[] fieldNames,
+String additionalPredicates,
+int numberOfDynamicParams)
+throws SQLException {
checkNotNull(connection, "connection must not be null.");
checkNotNull(sql, "sql must not be null.");
checkNotNull(fieldNames, "fieldNames must not be null.");
-if (sql.contains("?")) {
Review Comment: Do we need to remove this check?

## flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcRowDataLookupFunction.java:
## @@ -63,6 +64,9 @@ public class JdbcRowDataLookupFunction extends LookupFunction {
private final JdbcRowConverter jdbcRowConverter;
private final JdbcRowConverter lookupKeyRowConverter;
+private List resolvedPredicates = new ArrayList<>();
+private Serializable[] pushdownParams = new Serializable[0];
Review Comment: These two variables could be `final`.

## flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/statement/FieldNamedPreparedStatementImplTest.java:
## @@ -41,6 +41,10 @@ class FieldNamedPreparedStatementImplTest {
private final String[] keyFields = new String[] {"id", "__field_3__"};
private final String tableName = "tbl";
+private final String[] fieldNames2 =
+new String[] {"id:", "name", "email", "ts", "field1", "field_2", "__field_3__"};
+private final String[] keyFields2 = new String[] {"id?:", "__field_3__"};
+
Review Comment: Is this change still necessary?
## flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/table/JdbcTablePlanTest.java:
## @@ -70,6 +98,51 @@ public void testFilterPushdown() {
"SELECT id, time_col, real_col FROM jdbc WHERE id = 91 AND time_col <> TIME '11:11:11' OR double_col >= -1000.23");
}
+/**
+ * Note the join condition is not present in the optimized plan, as it is handled in the JDBC
Review Comment: Can you log another Jira to improve this? The scan source already has this ability.
Re: [PR] [FLINK-34167] add dependency to fit JDK 21 [flink-connector-jdbc]
snuyanzin closed pull request #94: [FLINK-34167] add dependency to fit JDK 21 URL: https://github.com/apache/flink-connector-jdbc/pull/94
Re: [PR] [FLINK-34167] add dependency to fit JDK 21 [flink-connector-jdbc]
snuyanzin commented on PR #94: URL: https://github.com/apache/flink-connector-jdbc/pull/94#issuecomment-1899916737
Please do not create a duplicate PR; there is already an existing one: https://github.com/apache/flink-connector-jdbc/pull/93
[jira] [Created] (FLINK-34168) Refactor all callers that use the public Xxx getXxx(String key) and public void setXxx(String key, Xxx value)
Rui Fan created FLINK-34168:
---
Summary: Refactor all callers that use the public Xxx getXxx(String key) and public void setXxx(String key, Xxx value)
Key: FLINK-34168
URL: https://issues.apache.org/jira/browse/FLINK-34168
Project: Flink
Issue Type: Sub-task
Components: Runtime / Configuration
Reporter: Rui Fan
Assignee: Xuannan Su

Refactor all callers that use the public Xxx getXxx(String key) and public void setXxx(String key, Xxx value)
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API
[ https://issues.apache.org/jira/browse/FLINK-31691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808519#comment-17808519 ] Jacky Lau commented on FLINK-31691: --- Hi [~Sergey Nuyanzin], will you help to review this PR again? [https://github.com/apache/flink/pull/22745] It has been open for a long time, and I rebased it to fix the conflicts. > Add MAP_FROM_ENTRIES supported in SQL & Table API > - > > Key: FLINK-31691 > URL: https://issues.apache.org/jira/browse/FLINK-31691 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Jacky Lau >Assignee: Jacky Lau >Priority: Major > Labels: pull-request-available, stale-assigned > Fix For: 1.19.0 > > > map_from_entries(array_of_rows) - Returns a map created from an array of rows with two > fields. Note that the number of fields in a row should be 2 and the key > of a row should not be null. > Syntax: > map_from_entries(array_of_rows) > Arguments: > array_of_rows: an array of rows with two fields. > Returns: > Returns a map created from an array of rows with two fields. Note that the > number of fields in a row should be 2 and the key of a row should > not be null. > Returns null if the argument is null. > {code:sql} > > SELECT map_from_entries(map[1, 'a', 2, 'b']); > [(1,"a"),(2,"b")]{code} > See also > Presto: [https://prestodb.io/docs/current/functions/map.html] > Spark: https://spark.apache.org/docs/latest/api/sql/index.html#map_from_entries
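As a language-neutral sketch of the semantics described in the issue above (modeled on the Presto/Spark references it links, not on Flink's actual implementation), map_from_entries turns an array of two-field rows into a map:

```python
# Illustrative sketch of map_from_entries semantics (an assumption based on
# the Presto/Spark references in the issue, not Flink's implementation).
def map_from_entries(entries):
    """Build a map from a list of (key, value) rows; a null input yields null."""
    if entries is None:
        return None
    result = {}
    for key, value in entries:  # each row must have exactly two fields
        if key is None:
            raise ValueError("the key of a row must not be null")
        result[key] = value
    return result

print(map_from_entries([(1, "a"), (2, "b")]))  # {1: 'a', 2: 'b'}
```

Later entries with a duplicate key overwrite earlier ones in this sketch; whether the real function does that or raises is engine-specific.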
[jira] [Commented] (FLINK-34167) add dependency to fit JDK 21
[ https://issues.apache.org/jira/browse/FLINK-34167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808518#comment-17808518 ] blackpighe commented on FLINK-34167:
{code:java}
Caused by: java.util.concurrent.ExecutionException: Boxed Error
    at scala.concurrent.impl.Promise$.resolver(Promise.scala:87)
    at scala.concurrent.impl.Promise$.scala$concurrent$impl$Promise$$resolveTry(Promise.scala:79)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
    at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
    at org.apache.pekko.actor.ActorRef.tell(ActorRef.scala:141)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:317)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:222)
    ... 22 more
Caused by: java.lang.NoClassDefFoundError: javax/activation/UnsupportedDataTypeException
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.createKnownInputChannel(SingleInputGateFactory.java:387)
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.lambda$createInputChannel$2(SingleInputGateFactory.java:353)
    at org.apache.flink.runtime.shuffle.ShuffleUtils.applyWithShuffleTypeCheck(ShuffleUtils.java:51)
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.createInputChannel(SingleInputGateFactory.java:333)
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.createInputChannelsAndTieredStorageService(SingleInputGateFactory.java:284)
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.create(SingleInputGateFactory.java:204)
    at org.apache.flink.runtime.io.network.NettyShuffleEnvironment.createInputGates(NettyShuffleEnvironment.java:265)
    at org.apache.flink.runtime.taskmanager.Task.<init>(Task.java:418)
    at org.apache.flink.runtime.taskexecutor.TaskExecutor.submitTask(TaskExecutor.java:815)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:309)
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:307)
    ... 23 more
Caused by: java.lang.ClassNotFoundException: javax.activation.UnsupportedDataTypeException
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:525)
    ... 36 more
{code}
> add dependency to fit JDK 21 > - > > Key: FLINK-34167 > URL: https://issues.apache.org/jira/browse/FLINK-34167 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: jdbc-3.1.1 >Reporter: blackpighe >Priority: Major > Labels: pull-request-available > > add dependency to fit JDK 21 > When running JDK 21 + Flink 1.19 in a pipeline, an error occurred with the message > {code:java} > javax.activation.UnsupportedDataTypeException {code}
[jira] [Updated] (FLINK-34164) [Benchmark] Compilation error since Jan. 16th
[ https://issues.apache.org/jira/browse/FLINK-34164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34164: --- Labels: pull-request-available (was: ) > [Benchmark] Compilation error since Jan. 16th > - > > Key: FLINK-34164 > URL: https://issues.apache.org/jira/browse/FLINK-34164 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Reporter: Zakelly Lan >Assignee: Junrui Li >Priority: Critical > Labels: pull-request-available > Fix For: 1.19.0 > > > An error occurred during the benchmark compile: > {code:java} > 13:17:40 [ERROR] > /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/warning:[options] > bootstrap class path not set in conjunction with -source 8 > 13:17:40 > /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/src/main/java/org/apache/flink/benchmark/StreamGraphUtils.java:38:19: > error: cannot find symbol {code} > It seems related to FLINK-33980
[jira] [Updated] (FLINK-34167) add dependency to fit JDK 21
[ https://issues.apache.org/jira/browse/FLINK-34167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34167: --- Labels: pull-request-available (was: ) > add dependency to fit JDK 21 > - > > Key: FLINK-34167 > URL: https://issues.apache.org/jira/browse/FLINK-34167 > Project: Flink > Issue Type: Bug > Components: Connectors / JDBC >Affects Versions: jdbc-3.1.1 >Reporter: blackpighe >Priority: Major > Labels: pull-request-available > > add dependency to fit JDK 21 > When running JDK 21 + Flink 1.19 in a pipeline, an error occurred with the message > {code:java} > javax.activation.UnsupportedDataTypeException {code}
Re: [PR] [FLINK-33865][runtime] Adding an ITCase to ensure `exponential-delay.attempts-before-reset-backoff` works well [flink]
1996fanrui commented on PR #23942: URL: https://github.com/apache/flink/pull/23942#issuecomment-1899899509 @flinkbot run azure
[jira] [Created] (FLINK-34167) add dependency to fit JDK 21
blackpighe created FLINK-34167:
---
Summary: add dependency to fit JDK 21
Key: FLINK-34167
URL: https://issues.apache.org/jira/browse/FLINK-34167
Project: Flink
Issue Type: Bug
Components: Connectors / JDBC
Affects Versions: jdbc-3.1.1
Reporter: blackpighe

add dependency to fit JDK 21
When running JDK 21 + Flink 1.19 in a pipeline, an error occurred with the message
{code:java}
javax.activation.UnsupportedDataTypeException {code}
Re: [PR] [hotfix] Add jakarta.activation required after changes in Flink main repo [flink-connector-jdbc]
snuyanzin commented on PR #93: URL: https://github.com/apache/flink-connector-jdbc/pull/93#issuecomment-1899894520
@BlackPigHe I didn't get your comment. Have you seen the changes proposed within this PR?
Re: [PR] [FLINK-33974] Implement the Sink transformation depending on the new SinkV2 interfaces [flink]
pvary commented on PR #24103: URL: https://github.com/apache/flink/pull/24103#issuecomment-1899886919 @flinkbot run azure
[jira] [Commented] (FLINK-34015) Setting `execution.savepoint.ignore-unclaimed-state` does not take effect when passing this parameter by dynamic properties
[ https://issues.apache.org/jira/browse/FLINK-34015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808503#comment-17808503 ] Zakelly Lan commented on FLINK-34015: - {quote}Hi [~Zakelly], I'm reaching out to you as you've mentioned refactoring the CLI in Flink 2.0. This is another example of confusing behavior in the current CLI design: the short command and -D dynamic properties may interact with each other in a confusing way. {quote} [~Zhanghao Chen] Thanks for the reminder! Got it :D > Setting `execution.savepoint.ignore-unclaimed-state` does not take effect > when passing this parameter by dynamic properties > --- > > Key: FLINK-34015 > URL: https://issues.apache.org/jira/browse/FLINK-34015 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Affects Versions: 1.17.0 >Reporter: Renxiang Zhou >Assignee: Renxiang Zhou >Priority: Critical > Labels: ignore-unclaimed-state-invalid, pull-request-available > Attachments: image-2024-01-08-14-22-09-758.png, > image-2024-01-08-14-24-30-665.png > > > We set `execution.savepoint.ignore-unclaimed-state` to true and use the -D option > to submit the job, but unfortunately we found the value is still false in the > JobManager log. > Pic 1: we set `execution.savepoint.ignore-unclaimed-state` to true when > submitting the job. > !image-2024-01-08-14-22-09-758.png|width=1012,height=222! > Pic 2: The value is still false in the JM log. > !image-2024-01-08-14-24-30-665.png|width=651,height=51! > > Besides, the parameter `execution.savepoint-restore-mode` has the same > problem when we pass it by the -D option. >
[jira] [Updated] (FLINK-34164) [Benchmark] Compilation error since Jan. 16th
[ https://issues.apache.org/jira/browse/FLINK-34164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang updated FLINK-34164: - Fix Version/s: 1.19.0 > [Benchmark] Compilation error since Jan. 16th > - > > Key: FLINK-34164 > URL: https://issues.apache.org/jira/browse/FLINK-34164 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Reporter: Zakelly Lan >Assignee: Junrui Li >Priority: Critical > Fix For: 1.19.0 > > > An error occurred during the benchmark compile: > {code:java} > 13:17:40 [ERROR] > /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/warning:[options] > bootstrap class path not set in conjunction with -source 8 > 13:17:40 > /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/src/main/java/org/apache/flink/benchmark/StreamGraphUtils.java:38:19: > error: cannot find symbol {code} > It seems related to FLINK-33980
[jira] [Updated] (FLINK-34164) [Benchmark] Compilation error since Jan. 16th
[ https://issues.apache.org/jira/browse/FLINK-34164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang updated FLINK-34164: - Priority: Critical (was: Major) > [Benchmark] Compilation error since Jan. 16th > - > > Key: FLINK-34164 > URL: https://issues.apache.org/jira/browse/FLINK-34164 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Reporter: Zakelly Lan >Assignee: Junrui Li >Priority: Critical > > > An error occurred during the benchmark compile: > {code:java} > 13:17:40 [ERROR] > /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/warning:[options] > bootstrap class path not set in conjunction with -source 8 > 13:17:40 > /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/src/main/java/org/apache/flink/benchmark/StreamGraphUtils.java:38:19: > error: cannot find symbol {code} > It seems related to FLINK-33980
[jira] [Created] (FLINK-34166) KeyedLookupJoinWrapper incorrectly processes delete messages for inner join when the previous lookup result is empty
lincoln lee created FLINK-34166:
---
Summary: KeyedLookupJoinWrapper incorrectly processes delete messages for inner join when the previous lookup result is empty
Key: FLINK-34166
URL: https://issues.apache.org/jira/browse/FLINK-34166
Project: Flink
Issue Type: Bug
Components: Table SQL / Runtime
Affects Versions: 1.18.1, 1.17.2
Reporter: lincoln lee
Assignee: lincoln lee
Fix For: 1.19.0, 1.18.2

KeyedLookupJoinWrapper (used when 'table.optimizer.non-deterministic-update.strategy' is set to 'TRY_RESOLVE' and the lookup join has NDU problems) incorrectly processes delete messages for inner join when the previous lookup result is empty. The intermediate delete result
{code}
expectedOutput.add(deleteRecord(3, "c", null, null));
{code}
in the current case KeyedLookupJoinHarnessTest#testTemporalInnerJoinWithFilterLookupKeyContainsPk is incorrect:
{code}
@Test
public void testTemporalInnerJoinWithFilterLookupKeyContainsPk() throws Exception {
    OneInputStreamOperatorTestHarness testHarness =
            createHarness(JoinType.INNER_JOIN, FilterOnTable.WITH_FILTER, true);
    testHarness.open();

    testHarness.processElement(insertRecord(1, "a"));
    testHarness.processElement(insertRecord(2, "b"));
    testHarness.processElement(insertRecord(3, "c"));
    testHarness.processElement(insertRecord(4, "d"));
    testHarness.processElement(insertRecord(5, "e"));
    testHarness.processElement(updateBeforeRecord(3, "c"));
    testHarness.processElement(updateAfterRecord(3, "c2"));
    testHarness.processElement(deleteRecord(3, "c2"));
    testHarness.processElement(insertRecord(3, "c3"));

    List expectedOutput = new ArrayList<>();
    expectedOutput.add(insertRecord(1, "a", 1, "Julian"));
    expectedOutput.add(insertRecord(4, "d", 4, "Fabian"));
    expectedOutput.add(deleteRecord(3, "c", null, null));
    expectedOutput.add(insertRecord(3, "c2", 6, "Jark-2"));
    expectedOutput.add(deleteRecord(3, "c2", 6, "Jark-2"));
    expectedOutput.add(insertRecord(3, "c3", 9, "Jark-3"));

    assertor.assertOutputEquals("output wrong.", expectedOutput, testHarness.getOutput());
    testHarness.close();
}
{code}
[jira] [Updated] (FLINK-34165) It seems that Apache download link has been changed
[ https://issues.apache.org/jira/browse/FLINK-34165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Ge updated FLINK-34165: Description: For example, the link [https://www.apache.org/dist/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc][1], which worked previously, now redirects to a listing page, which leads to a wrong flink.tgz.asc containing HTML instead of the expected signature. !image-2024-01-19-07-55-07-775.png! The link should be replaced with [https://downloads.apache.org/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc] [1] [https://github.com/apache/flink-docker/blob/627987997ca7ec86bcc3d80b26df58aa595b91af/1.17/scala_2.12-java11-ubuntu/Dockerfile#L48C19-L48C101] was: The link [https://www.apache.org/dist/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc][1], which worked previously, now redirects to a listing page, which leads to a wrong flink.tgz.asc containing HTML instead of the expected signature. !image-2024-01-19-07-55-07-775.png! The link should be replaced with https://downloads.apache.org/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc [1] https://github.com/apache/flink-docker/blob/627987997ca7ec86bcc3d80b26df58aa595b91af/1.17/scala_2.12-java11-ubuntu/Dockerfile#L48C19-L48C101 > It seems that the Apache download link has been changed > --- > > Key: FLINK-34165 > URL: https://issues.apache.org/jira/browse/FLINK-34165 > Project: Flink > Issue Type: Bug > Components: flink-docker >Affects Versions: 1.15.4, 1.16.3, 1.17.2, 1.18.1 >Reporter: Jing Ge >Priority: Major > Attachments: image-2024-01-19-07-55-07-775.png > > > For example, the link > [https://www.apache.org/dist/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc][1], > which worked previously, now redirects to a listing page, which leads to a wrong > flink.tgz.asc containing HTML instead of the expected signature. > !image-2024-01-19-07-55-07-775.png!
> The link should be replaced with > [https://downloads.apache.org/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc] > > [1] > [https://github.com/apache/flink-docker/blob/627987997ca7ec86bcc3d80b26df58aa595b91af/1.17/scala_2.12-java11-ubuntu/Dockerfile#L48C19-L48C101]
[jira] [Created] (FLINK-34165) It seems that Apache download link has been changed
Jing Ge created FLINK-34165:
---
Summary: It seems that the Apache download link has been changed
Key: FLINK-34165
URL: https://issues.apache.org/jira/browse/FLINK-34165
Project: Flink
Issue Type: Bug
Components: flink-docker
Affects Versions: 1.18.1, 1.17.2, 1.16.3, 1.15.4
Reporter: Jing Ge
Attachments: image-2024-01-19-07-55-07-775.png

The link [https://www.apache.org/dist/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc][1], which worked previously, now redirects to a listing page, which leads to a wrong flink.tgz.asc containing HTML instead of the expected signature. !image-2024-01-19-07-55-07-775.png! The link should be replaced with https://downloads.apache.org/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc
[1] https://github.com/apache/flink-docker/blob/627987997ca7ec86bcc3d80b26df58aa595b91af/1.17/scala_2.12-java11-ubuntu/Dockerfile#L48C19-L48C101
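The host swap described in the issue above amounts to a simple prefix rewrite on the URL. As an illustrative sketch (not part of the actual Dockerfile fix), the legacy link can be mapped to the new one like this:

```python
# Rewrite a legacy www.apache.org/dist link to the downloads.apache.org host,
# as suggested in FLINK-34165. Illustrative sketch only.
OLD_PREFIX = "https://www.apache.org/dist/"
NEW_PREFIX = "https://downloads.apache.org/"

def fix_download_link(url: str) -> str:
    """Return the downloads.apache.org equivalent of a legacy dist link."""
    if url.startswith(OLD_PREFIX):
        return NEW_PREFIX + url[len(OLD_PREFIX):]
    return url  # already a current link; leave untouched

print(fix_download_link(
    "https://www.apache.org/dist/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc"
))
# https://downloads.apache.org/flink/flink-1.17.2/flink-1.17.2-bin-scala_2.12.tgz.asc
```

The same rewrite applied to the URL in the Dockerfile referenced at [1] would restore a response containing the PGP signature rather than an HTML listing page.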
Re: [PR] [hotfix] Add jakarta.activation required after changes in Flink main repo [flink-connector-jdbc]
BlackPigHe commented on PR #93: URL: https://github.com/apache/flink-connector-jdbc/pull/93#issuecomment-1899867428
I also found this problem, and found that the workflow kept failing on the MR.
Re: [PR] [hotfix] Add jakarta.activation required after changes in Flink main repo [flink-connector-jdbc]
BlackPigHe commented on PR #93: URL: https://github.com/apache/flink-connector-jdbc/pull/93#issuecomment-1899866113
javax.activation : activation : 1.1.1 (test scope). Adding this package fixes it.
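The coordinates quoted in the comment above look like a Maven dependency whose XML markup was stripped by the mail archive. A reconstructed sketch (version and scope as quoted in the comment, not verified against the connector's pom) would be:

```xml
<!-- Reconstructed from the coordinates quoted above; version/scope as stated in the comment -->
<dependency>
    <groupId>javax.activation</groupId>
    <artifactId>activation</artifactId>
    <version>1.1.1</version>
    <scope>test</scope>
</dependency>
```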
[jira] [Commented] (FLINK-34135) A number of ci failures with Access to the path '.../_work/_temp/containerHandlerInvoker.js' is denied.
[ https://issues.apache.org/jira/browse/FLINK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808494#comment-17808494 ] Sergey Nuyanzin commented on FLINK-34135: - [~jingge] I'm not sure that this is working... it looks like it is working for PR builds; however, I still see lots of failures for the currently running nightlies, e.g. https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601=logs=675bf62c-8558-587e-2555-dcad13acefb5 > A number of ci failures with Access to the path > '.../_work/_temp/containerHandlerInvoker.js' is denied. > --- > > Key: FLINK-34135 > URL: https://issues.apache.org/jira/browse/FLINK-34135 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Reporter: Sergey Nuyanzin >Assignee: Jeyhun Karimov >Priority: Blocker > Labels: test-stability > > There is a number of builds failing with something like > {noformat} > ##[error]Access to the path > '/home/agent03/myagent/_work/_temp/containerHandlerInvoker.js' is denied. > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56490=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=fb588352-ef18-568d-b447-699986250ccb > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=554d7c3f-d38e-55f4-96b4-ada3a9cb7d6f=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=fa307d6d-91b1-5ab6-d460-ef50f552b1fe=1798d435-832b-51fe-a9ad-efb9abf4ab04=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=a1ac4ce4-9a4f-5fdb-3290-7e163fba19dc=e4c57254-ec06-5788-3f8e-5ad5dffb418e=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=56881383-f398-5091-6b3b-22a7eeb7cfa8=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=2d9c27d0-8dbb-5be9-7271-453f74f48ab3=9 >
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=162f98f7-8967-5f47-2782-a1e178ec2ad3=c9934c56-710d-5f85-d2b8-28ec1fd700ed=9
Re: [PR] [FLINK-33768] Support dynamic source parallelism inference for batch jobs [flink]
SinBex commented on PR #24087: URL: https://github.com/apache/flink/pull/24087#issuecomment-1899864181 @flinkbot run azure
[jira] [Commented] (FLINK-34135) A number of ci failures with Access to the path '.../_work/_temp/containerHandlerInvoker.js' is denied.
[ https://issues.apache.org/jira/browse/FLINK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808492#comment-17808492 ] Jing Ge commented on FLINK-34135: - CI works again, I will close this ticket. [~mapohl] [~Sergey Nuyanzin] could you please double confirm? > A number of ci failures with Access to the path > '.../_work/_temp/containerHandlerInvoker.js' is denied. > --- > > Key: FLINK-34135 > URL: https://issues.apache.org/jira/browse/FLINK-34135 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Reporter: Sergey Nuyanzin >Assignee: Jeyhun Karimov >Priority: Blocker > Labels: test-stability > > There is a number of builds failing with something like > {noformat} > ##[error]Access to the path > '/home/agent03/myagent/_work/_temp/containerHandlerInvoker.js' is denied. > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56490=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=fb588352-ef18-568d-b447-699986250ccb > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=554d7c3f-d38e-55f4-96b4-ada3a9cb7d6f=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=fa307d6d-91b1-5ab6-d460-ef50f552b1fe=1798d435-832b-51fe-a9ad-efb9abf4ab04=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=a1ac4ce4-9a4f-5fdb-3290-7e163fba19dc=e4c57254-ec06-5788-3f8e-5ad5dffb418e=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=56881383-f398-5091-6b3b-22a7eeb7cfa8=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=2d9c27d0-8dbb-5be9-7271-453f74f48ab3=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=162f98f7-8967-5f47-2782-a1e178ec2ad3=c9934c56-710d-5f85-d2b8-28ec1fd700ed=9
[jira] [Comment Edited] (FLINK-34156) Move Flink Calcite rules from Scala to Java
[ https://issues.apache.org/jira/browse/FLINK-34156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808489#comment-17808489 ] Sergey Nuyanzin edited comment on FLINK-34156 at 1/19/24 6:45 AM: -- Thanks for volunteering! Currently this is only an MVP activity, meaning that the main part is aimed at later (2.0), as also mentioned on the Confluence page. was (Author: sergey nuyanzin): Thanks for volunteering! Currently this is only an MVP activity, meaning that the main part is aimed at later (2.0). > Move Flink Calcite rules from Scala to Java > --- > > Key: FLINK-34156 > URL: https://issues.apache.org/jira/browse/FLINK-34156 > Project: Flink > Issue Type: Technical Debt > Components: Table SQL / Planner >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Fix For: 2.0.0 > > > This is an umbrella task for the migration of Calcite rules from Scala to Java > mentioned at https://cwiki.apache.org/confluence/display/FLINK/2.0+Release > The reason is that since 1.28.0 ( CALCITE-4787 - Move core to use Immutables > instead of ImmutableBeans ) Calcite started to use Immutables > (https://immutables.github.io/) and since 1.29.0 removed ImmutableBeans ( > CALCITE-4839 - Remove remnants of ImmutableBeans post 1.28 release ). All > rule-configuration-related API that is not Immutables-based is marked as > deprecated. Since Immutables implies code generation during Java compilation, > it seems impossible to use for rules in Scala code.
[jira] [Commented] (FLINK-34156) Move Flink Calcite rules from Scala to Java
[ https://issues.apache.org/jira/browse/FLINK-34156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808489#comment-17808489 ] Sergey Nuyanzin commented on FLINK-34156: - Thanks for volunteering. Currently this is only an MVP activity, meaning that the main part is aimed at later (2.0). > Move Flink Calcite rules from Scala to Java > --- > > Key: FLINK-34156 > URL: https://issues.apache.org/jira/browse/FLINK-34156 > Project: Flink > Issue Type: Technical Debt > Components: Table SQL / Planner >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Fix For: 2.0.0 > > > This is an umbrella task for the migration of Calcite rules from Scala to Java > mentioned at https://cwiki.apache.org/confluence/display/FLINK/2.0+Release > The reason is that since 1.28.0 ( CALCITE-4787 - Move core to use Immutables > instead of ImmutableBeans ) Calcite started to use Immutables > (https://immutables.github.io/) and since 1.29.0 removed ImmutableBeans ( > CALCITE-4839 - Remove remnants of ImmutableBeans post 1.28 release ). All > rule-configuration-related API that is not Immutables-based is marked as > deprecated. Since Immutables relies on code generation during Java compilation, > it seems impossible to use for rules written in Scala.
Re: [PR] [FLINK-33768] Support dynamic source parallelism inference for batch jobs [flink]
SinBex commented on PR #24087: URL: https://github.com/apache/flink/pull/24087#issuecomment-1899854481 @zhuzhurk Thanks for reviewing! I have fixed the problems you commented on. PTAL~
[jira] [Commented] (FLINK-34164) [Benchmark] Compilation error since Jan. 16th
[ https://issues.apache.org/jira/browse/FLINK-34164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808486#comment-17808486 ] Junrui Li commented on FLINK-34164: --- Thanks [~Zakelly] for pointing this out, I'll prepare a PR to fix this issue. > [Benchmark] Compilation error since Jan. 16th > - > > Key: FLINK-34164 > URL: https://issues.apache.org/jira/browse/FLINK-34164 > Project: Flink > Issue Type: Bug > Components: Benchmarks >Reporter: Zakelly Lan >Assignee: Junrui Li >Priority: Major > > An error occurred during the benchmark compile: > {code:java} > 13:17:40 [ERROR] > /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/warning:[options] > bootstrap class path not set in conjunction with -source 8 > 13:17:40 > /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/src/main/java/org/apache/flink/benchmark/StreamGraphUtils.java:38:19: > error: cannot find symbol {code} > It seems related to FLINK-33980
[jira] [Commented] (FLINK-34007) Flink Job stuck in suspend state after losing leadership in HA Mode
[ https://issues.apache.org/jira/browse/FLINK-34007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808485#comment-17808485 ] Gyula Fora commented on FLINK-34007: [~wangyang0918] the tests failed. The executor service (single threaded) previously was only used to execute Flink side logic and now we had to pass it to the LeaderElector itself as well so a single thread kind of deadlocked it somehow. So I increased to 3 and it made it work. Yesterday I started to think that it may actually be a reason why we see ConfigMap version conficts (and lost leaderships) in the first place. This is probably unrelated to why it cannot recover the leadership but I am going to try to change back to 1 or use 2 different single threaded executors. > Flink Job stuck in suspend state after losing leadership in HA Mode > --- > > Key: FLINK-34007 > URL: https://issues.apache.org/jira/browse/FLINK-34007 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.19.0, 1.18.1, 1.18.2 >Reporter: Zhenqiu Huang >Priority: Blocker > Labels: pull-request-available > Attachments: Debug.log, LeaderElector-Debug.json, job-manager.log > > > The observation is that Job manager goes to suspend state with a failed > container not able to register itself to resource manager after timeout. > JM Log, see attached > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-33719][table] Cleanup the usage of deprecated StreamTableEnvir… [flink]
snuyanzin commented on PR #23898: URL: https://github.com/apache/flink/pull/23898#issuecomment-1899850866 @liuyongvs are you going to continue working on this?
[jira] [Created] (FLINK-34164) [Benchmark] Compilation error since Jan. 16th
Zakelly Lan created FLINK-34164: --- Summary: [Benchmark] Compilation error since Jan. 16th Key: FLINK-34164 URL: https://issues.apache.org/jira/browse/FLINK-34164 Project: Flink Issue Type: Bug Components: Benchmarks Reporter: Zakelly Lan Assignee: Junrui Li An error occurred during the benchmark compile: {code:java} 13:17:40 [ERROR] /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/warning:[options] bootstrap class path not set in conjunction with -source 8 13:17:40 /mnt/jenkins/workspace/flink-main-benchmarks/flink-benchmarks/src/main/java/org/apache/flink/benchmark/StreamGraphUtils.java:38:19: error: cannot find symbol {code} It seems related to FLINK-33980
Re: [PR] [FLINK-34087][tests][JUnit5 Migration] Migarate to junit5 of flink-dist module [flink]
Jiabao-Sun commented on PR #24092: URL: https://github.com/apache/flink/pull/24092#issuecomment-1899841647 Hi @PatrickRen, could you help review it when you have time?
Re: [PR] [FLINK-34132][runtime] Correct the error message and doc of AdaptiveBatch only supports all edges being BLOCKING or HYBRID_FULL/HYBRID_SELECTIVE. [flink]
JunRuiLee commented on PR #24118: URL: https://github.com/apache/flink/pull/24118#issuecomment-1899838542 @flinkbot run azure
Re: [PR] [FLINK-33221][core][config] Add config options for administrator JVM options [flink]
1996fanrui commented on PR #24098: URL: https://github.com/apache/flink/pull/24098#issuecomment-1899825447 Hi @X-czh , https://github.com/apache/flink/pull/24089 is merged, please go ahead, thanks~
[jira] [Commented] (FLINK-34083) Deprecate string configuration keys and unused constants in ConfigConstants
[ https://issues.apache.org/jira/browse/FLINK-34083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808476#comment-17808476 ] Rui Fan commented on FLINK-34083: - Merged to master(1.19) via: 38f7b51d0cc45293dc71ad31607ecc685b11498f 752d3a79a918b9300dd2b89e96f3915ba6a2dfa6 > Deprecate string configuration keys and unused constants in ConfigConstants > --- > > Key: FLINK-34083 > URL: https://issues.apache.org/jira/browse/FLINK-34083 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Configuration >Reporter: Xuannan Su >Assignee: Xuannan Su >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > > * Update ConfigConstants.java to deprecate and replace string configuration > keys > * Mark unused constants in ConfigConstants.java as deprecated -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (FLINK-34083) Deprecate string configuration keys and unused constants in ConfigConstants
[ https://issues.apache.org/jira/browse/FLINK-34083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Fan resolved FLINK-34083. - Resolution: Fixed > Deprecate string configuration keys and unused constants in ConfigConstants > --- > > Key: FLINK-34083 > URL: https://issues.apache.org/jira/browse/FLINK-34083 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Configuration >Reporter: Xuannan Su >Assignee: Xuannan Su >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > > * Update ConfigConstants.java to deprecate and replace string configuration > keys > * Mark unused constants in ConfigConstants.java as deprecated -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34083][config] Deprecate string configuration keys and unused constants in ConfigConstants [flink]
1996fanrui merged PR #24089: URL: https://github.com/apache/flink/pull/24089
Re: [PR] [FLINK-34109][connectors] FileSystem sink connector restore job from historical checkpoint bugfix [flink]
ParyshevSergey commented on PR #24113: URL: https://github.com/apache/flink/pull/24113#issuecomment-1899822873 @flinkbot run azure
Re: [PR] [FLINK-34083][config] Deprecate string configuration keys and unused constants in ConfigConstants [flink]
1996fanrui commented on PR #24089: URL: https://github.com/apache/flink/pull/24089#issuecomment-1899822924 The CI is green, merging~
[jira] [Commented] (FLINK-34080) Simplify the Configuration
[ https://issues.apache.org/jira/browse/FLINK-34080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808475#comment-17808475 ] Rui Fan commented on FLINK-34080: - Merged to master (1.19) via: 725b3edc05a1f3f186626038f8a7e60c1d8dd4fb a2cc47a71e17cb22a86fb19ddadcbf1fb4308274 79abfaab3460834887df9e8284dff51569b63ecc > Simplify the Configuration > -- > > Key: FLINK-34080 > URL: https://issues.apache.org/jira/browse/FLINK-34080 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Configuration >Reporter: Rui Fan >Assignee: Rui Fan >Priority: Blocker > Labels: pull-request-available > Fix For: 1.19.0 > > > This Jira is part 2.2 of FLIP-405: > * 2.2.1 Update Configuration to encourage the usage of ConfigOption over > string configuration keys > * 2.2.2 Introduce public <T> T get(ConfigOption<T> configOption, T > overrideDefault) > * 2.2.3 Deprecate some unnecessary setXxx and getXxx methods in Configuration
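For readers unfamiliar with FLIP-405's item 2.2.2, the idea of `get(ConfigOption<T>, T overrideDefault)` is: an explicitly set value wins, and otherwise the caller-supplied override is returned instead of the option's built-in default. A minimal, stdlib-only sketch of that semantics — the `Option`/`SimpleConf` names here are illustrative stand-ins, not Flink's actual `ConfigOption`/`Configuration` API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for Flink's ConfigOption (not the real API).
class Option<T> {
    final String key;
    final T defaultValue;
    Option(String key, T defaultValue) { this.key = key; this.defaultValue = defaultValue; }
}

// Illustrative stand-in for Flink's Configuration.
class SimpleConf {
    private final Map<String, Object> values = new HashMap<>();

    <T> void set(Option<T> option, T value) { values.put(option.key, value); }

    // Plain get: falls back to the option's own default value.
    @SuppressWarnings("unchecked")
    <T> T get(Option<T> option) {
        return values.containsKey(option.key) ? (T) values.get(option.key) : option.defaultValue;
    }

    // FLIP-405-style accessor: an explicitly set value wins; otherwise the
    // caller's override takes precedence over the option's built-in default.
    @SuppressWarnings("unchecked")
    <T> T get(Option<T> option, T overrideDefault) {
        return values.containsKey(option.key) ? (T) values.get(option.key) : overrideDefault;
    }
}

public class Flip405Sketch {
    public static void main(String[] args) {
        Option<Integer> parallelism = new Option<>("parallelism.default", 1);
        SimpleConf conf = new SimpleConf();
        System.out.println(conf.get(parallelism));    // built-in default: 1
        System.out.println(conf.get(parallelism, 8)); // caller override: 8
        conf.set(parallelism, 4);
        System.out.println(conf.get(parallelism, 8)); // explicit value wins: 4
    }
}
```

The point of the second accessor is that callers with a context-dependent fallback (e.g. a cluster-wide setting) no longer need a `contains`-then-`get` dance.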
[jira] [Resolved] (FLINK-34080) Simplify the Configuration
[ https://issues.apache.org/jira/browse/FLINK-34080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Fan resolved FLINK-34080. - Resolution: Fixed > Simplify the Configuration > -- > > Key: FLINK-34080 > URL: https://issues.apache.org/jira/browse/FLINK-34080 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Configuration >Reporter: Rui Fan >Assignee: Rui Fan >Priority: Blocker > Labels: pull-request-available > Fix For: 1.19.0 > > > This Jira is part 2.2 of FLIP-405: > * 2.2.1 Update Configuration to encourage the usage of ConfigOption over > string configuration keys > * 2.2.2 Introduce public <T> T get(ConfigOption<T> configOption, T > overrideDefault) > * 2.2.3 Deprecate some unnecessary setXxx and getXxx methods in Configuration
Re: [PR] [FLINK-20672] Catch throwable when sending checkpoint aborted messages from JM to TM [flink]
masteryhx merged PR #23676: URL: https://github.com/apache/flink/pull/23676
Re: [PR] [FLINK-34080][configuration] Simplify the Configuration [flink]
1996fanrui merged PR #24088: URL: https://github.com/apache/flink/pull/24088
Re: [PR] [FLINK-34080][configuration] Simplify the Configuration [flink]
1996fanrui commented on PR #24088: URL: https://github.com/apache/flink/pull/24088#issuecomment-1899820154 Thanks @Sxnan for the review! The CI is green, merging~
Re: [PR] [FLINK-33803] Set observedGeneration at end of reconciliation [flink-kubernetes-operator]
justin-chen commented on code in PR #755: URL: https://github.com/apache/flink-kubernetes-operator/pull/755#discussion_r1458380663 ## flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/FlinkDeploymentStatus.java: ## @@ -55,4 +55,7 @@ public class FlinkDeploymentStatus extends CommonStatus { /** Information about the TaskManagers for the scale subresource. */ private TaskManagerInfo taskManager; + +/** Last observed generation of the FlinkDeployment. */ +private Long observedGeneration; Review Comment: I have updated the PR to set `status.observedGeneration` using the [same source of truth](https://github.com/apache/flink-kubernetes-operator/blob/main/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/reconciler/ReconciliationMetadata.java#L44) as `status.reconciliationStatus.lastReconciledSpec.resource_metadata.generation`, such that we can later remove the latter field without the `observedGeneration` depending on it. The new field is set in `updateStatusForSpecReconciliation` method alongside the `lastReconciledSpec`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [hotfix] TestingTierConsumerAgent#peekNextBufferSubpartitionId shouldn't throw UnsupportedDataTypeException [flink]
flinkbot commented on PR #24147: URL: https://github.com/apache/flink/pull/24147#issuecomment-1899803891 ## CI report: * 11aee7489e6adb60aedebfdaee7dbb92c4f2b5af UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Updated] (FLINK-34148) Potential regression (Jan. 13): stringWrite with Java8
[ https://issues.apache.org/jira/browse/FLINK-34148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang updated FLINK-34148: - Priority: Critical (was: Major) > Potential regression (Jan. 13): stringWrite with Java8 > -- > > Key: FLINK-34148 > URL: https://issues.apache.org/jira/browse/FLINK-34148 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Zakelly Lan >Priority: Critical > Fix For: 1.19.0 > > > Significant drop of performance in stringWrite with Java8 from commit > [881062f352|https://github.com/apache/flink/commit/881062f352f8bf8c21ab7cbea95e111fd82fdf20] > to > [5d9d8748b6|https://github.com/apache/flink/commit/5d9d8748b64ff1a75964a5cd2857ab5061312b51] > . It only involves strings not so long (128 or 4). > stringWrite.128.ascii(Java8) baseline=1089.107756 current_value=754.52452 > stringWrite.128.chinese(Java8) baseline=504.244575 current_value=295.358989 > stringWrite.128.russian(Java8) baseline=655.582639 current_value=421.030188 > stringWrite.4.chinese(Java8) baseline=9598.791964 current_value=6627.929927 > stringWrite.4.russian(Java8) baseline=11070.666415 current_value=8289.95767 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34148) Potential regression (Jan. 13): stringWrite with Java8
[ https://issues.apache.org/jira/browse/FLINK-34148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yun Tang updated FLINK-34148: - Fix Version/s: 1.19.0 > Potential regression (Jan. 13): stringWrite with Java8 > -- > > Key: FLINK-34148 > URL: https://issues.apache.org/jira/browse/FLINK-34148 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Zakelly Lan >Priority: Major > Fix For: 1.19.0 > > > Significant drop of performance in stringWrite with Java8 from commit > [881062f352|https://github.com/apache/flink/commit/881062f352f8bf8c21ab7cbea95e111fd82fdf20] > to > [5d9d8748b6|https://github.com/apache/flink/commit/5d9d8748b64ff1a75964a5cd2857ab5061312b51] > . It only involves strings not so long (128 or 4). > stringWrite.128.ascii(Java8) baseline=1089.107756 current_value=754.52452 > stringWrite.128.chinese(Java8) baseline=504.244575 current_value=295.358989 > stringWrite.128.russian(Java8) baseline=655.582639 current_value=421.030188 > stringWrite.4.chinese(Java8) baseline=9598.791964 current_value=6627.929927 > stringWrite.4.russian(Java8) baseline=11070.666415 current_value=8289.95767 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] [hotfix] TestingTierConsumerAgent#peekNextBufferSubpartitionId shouldn't throw UnsupportedDataTypeException [flink]
reswqa opened a new pull request, #24147: URL: https://github.com/apache/flink/pull/24147 (no comment)
[jira] [Commented] (FLINK-34105) Akka timeout happens in TPC-DS benchmarks
[ https://issues.apache.org/jira/browse/FLINK-34105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808457#comment-17808457 ] Zhu Zhu commented on FLINK-34105: - [~lsdy] Sounds good to me. Feel free to open a PR for it. > Akka timeout happens in TPC-DS benchmarks > - > > Key: FLINK-34105 > URL: https://issues.apache.org/jira/browse/FLINK-34105 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.19.0 >Reporter: Zhu Zhu >Assignee: Yangze Guo >Priority: Critical > Attachments: image-2024-01-16-13-59-45-556.png > > > We noticed akka timeout happens in 10TB TPC-DS benchmarks in 1.19. The > problem did not happen in 1.18.0. > After bisecting, we find the problem was introduced in FLINK-33532. > !image-2024-01-16-13-59-45-556.png|width=800! -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34089] Verify that the subscribed topics match the assigned topics [flink-connector-kafka]
Tan-JiaLiang commented on PR #77: URL: https://github.com/apache/flink-connector-kafka/pull/77#issuecomment-1899714802 But now, thinking about it more carefully, maybe just removing the topics from state that are not in the current subscription list is a better solution. @MartijnVisser @tzulitai WDYT?
[jira] [Updated] (FLINK-34140) Rename WindowContext and TriggerContext in window
[ https://issues.apache.org/jira/browse/FLINK-34140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xuyang updated FLINK-34140: --- Description: Currently, WindowContext and TriggerContext not only contains a series of get methods to obtain context information, but also includes behaviors such as clear. Maybe it's better to rename them as WindowDelegator and TriggerDelegator or WindowHandler and TriggerHandler. was: Currently, WindowContext and TriggerContext not only contains a series of get methods to obtain context information, but also includes behaviors such as clear. Maybe it's better to rename them as WindowDelegator and TriggerDelegator. > Rename WindowContext and TriggerContext in window > - > > Key: FLINK-34140 > URL: https://issues.apache.org/jira/browse/FLINK-34140 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Runtime >Reporter: xuyang >Assignee: xuyang >Priority: Major > > Currently, WindowContext and TriggerContext not only contains a series of get > methods to obtain context information, but also includes behaviors such as > clear. > Maybe it's better to rename them as WindowDelegator and TriggerDelegator or > WindowHandler and TriggerHandler. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34149][Runtime/Checkpointing] Fix SplitEnumeratorContext compatibility issue [flink]
lindong28 commented on PR #24146: URL: https://github.com/apache/flink/pull/24146#issuecomment-1899635755 Thanks for the PR! LGTM. Will merge PR after the CI has passed.
[jira] [Closed] (FLINK-33928) Should not throw exception while creating view with specify field names even if the query conflicts in field names
[ https://issues.apache.org/jira/browse/FLINK-33928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shengkai Fang closed FLINK-33928. - Fix Version/s: 1.19.0 Resolution: Fixed > Should not throw exception while creating view with specify field names even > if the query conflicts in field names > -- > > Key: FLINK-33928 > URL: https://issues.apache.org/jira/browse/FLINK-33928 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: xuyang >Assignee: Yunhong Zheng >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > > The following sql should be valid. > {code:java} > create view view1(a, b) as select t1.name, t2.name from t1 join t1 t2 on > t1.score = t2.score; {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-33928) Should not throw exception while creating view with specify field names even if the query conflicts in field names
[ https://issues.apache.org/jira/browse/FLINK-33928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808447#comment-17808447 ] Shengkai Fang commented on FLINK-33928: --- Merged into master: 82fcdfe5634fb82d3ab4a183818d852119dc68a9 > Should not throw exception while creating view with specify field names even > if the query conflicts in field names > -- > > Key: FLINK-33928 > URL: https://issues.apache.org/jira/browse/FLINK-33928 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: xuyang >Assignee: Yunhong Zheng >Priority: Major > Labels: pull-request-available > > The following sql should be valid. > {code:java} > create view view1(a, b) as select t1.name, t2.name from t1 join t1 t2 on > t1.score = t2.score; {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34089] Verify that the subscribed topics match the assigned topics [flink-connector-kafka]
Tan-JiaLiang commented on PR #77: URL: https://github.com/apache/flink-connector-kafka/pull/77#issuecomment-1899618447 @MartijnVisser Suppose there is a Flink job that subscribes to two topics in a single source, TopicA and TopicB. * Write some data to both TopicA and TopicB. * Stop the job with a savepoint; the offsets of TopicA and TopicB will be saved in the savepoint. * Now change the subscribed topic list to only TopicB, and restore from the last savepoint. * The Flink job will still consume TopicA records, which will confuse users. So I think it is better to clearly tell the user that we cannot restore from the savepoint because the subscription list has been changed.
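The alternative discussed elsewhere in this thread — dropping unknown topics from restored state — amounts to filtering the restored splits against the current subscription. A stdlib-only sketch of the idea; in the real connector the state holds `KafkaPartitionSplit` objects rather than strings, so the string encoding and method names here are purely illustrative:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class RestoredStateFilter {
    // Keep only restored "<topic>-partition-<n>" entries whose topic is still
    // subscribed; anything else (e.g. TopicA after it was removed from the
    // subscription) is dropped instead of silently resumed.
    static List<String> filterRestoredSplits(List<String> restored, Set<String> subscribed) {
        return restored.stream()
                .filter(split -> subscribed.contains(split.split("-partition-")[0]))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> restored =
                Arrays.asList("TopicA-partition-0", "TopicB-partition-0", "TopicB-partition-1");
        Set<String> subscribed = Set.of("TopicB");
        // TopicA's split is discarded; only TopicB's splits survive the restore.
        System.out.println(filterRestoredSplits(restored, subscribed));
    }
}
```

Whether to filter silently or fail fast (the behavior this PR proposes) is exactly the trade-off being debated: filtering avoids a restore failure, while failing makes the subscription change explicit to the user.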
Re: [PR] [FLINK-34149][Runtime/Checkpointing] Fix SplitEnumeratorContext compatibility issue [flink]
flinkbot commented on PR #24146: URL: https://github.com/apache/flink/pull/24146#issuecomment-1899616723 ## CI report: * a9ab0eaa9f38ddca0f6ba23c450ececb33acc7ee UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
Re: [PR] [FLINK-33928][table-planner] Should not throw exception while creating view with specify field names [flink]
fsk119 merged PR #24096: URL: https://github.com/apache/flink/pull/24096
[jira] [Commented] (FLINK-34007) Flink Job stuck in suspend state after losing leadership in HA Mode
[ https://issues.apache.org/jira/browse/FLINK-34007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808441#comment-17808441 ] Yang Wang commented on FLINK-34007: --- I also remember that the fabric8 kubernetes-client community is very responsive. If the {{LeaderElector}} is designed to run only once, though I do not think that is reasonable behavior, then we need to create a new {{LeaderElector}} after losing leadership. For option #3, it might be unnecessary because {{LeaderElector}} could work as expected when creating a new instance with the same lock identity. It is a larger effort to do such a refactor without additional benefits. BTW, maybe I am missing some background. [~gyfora] Could you please share why we need to change the thread pool to 3 in {{KubernetesLeaderElector}}? > Flink Job stuck in suspend state after losing leadership in HA Mode > --- > > Key: FLINK-34007 > URL: https://issues.apache.org/jira/browse/FLINK-34007 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.19.0, 1.18.1, 1.18.2 >Reporter: Zhenqiu Huang >Priority: Blocker > Labels: pull-request-available > Attachments: Debug.log, LeaderElector-Debug.json, job-manager.log > > > The observation is that Job manager goes to suspend state with a failed > container not able to register itself to resource manager after timeout. > JM Log, see attached >
[jira] [Commented] (FLINK-33950) Update max aggregate functions to new type system
[ https://issues.apache.org/jira/browse/FLINK-33950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808439#comment-17808439 ] Jacky Lau commented on FLINK-33950: --- hi [~twalthr] [~martijnvisser] [~dwysakowicz] what is your opinion? > Update max aggregate functions to new type system > - > > Key: FLINK-33950 > URL: https://issues.apache.org/jira/browse/FLINK-33950 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Affects Versions: 1.19.0 >Reporter: Jacky Lau >Priority: Major > Labels: pull-request-available > Fix For: 1.19.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34149][Runtime/Checkpointing] Fix SplitEnumeratorContext compatibility issue [flink]
yunfengzhou-hub commented on PR #24146: URL: https://github.com/apache/flink/pull/24146#issuecomment-1899595132 Hi @lindong28, could you please take a look at this PR? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-34149) Flink Kafka connector can't compile against 1.19-SNAPSHOT
[ https://issues.apache.org/jira/browse/FLINK-34149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34149: --- Labels: pull-request-available (was: ) > Flink Kafka connector can't compile against 1.19-SNAPSHOT > - > > Key: FLINK-34149 > URL: https://issues.apache.org/jira/browse/FLINK-34149 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Runtime / Checkpointing >Affects Versions: 1.19.0 >Reporter: Martijn Visser >Priority: Blocker > Labels: pull-request-available > Fix For: 1.19.0 > > > The Flink Kafka connector for {{main}} fails for 1.19-SNAPSHOT, see > https://github.com/apache/flink-connector-kafka/actions/runs/7569481434/job/20612876543#step:14:134 > {code:java} > Error: COMPILATION ERROR : > [INFO] - > Error: > /home/runner/work/flink-connector-kafka/flink-connector-kafka/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/dynamic/source/enumerator/StoppableKafkaEnumContextProxy.java:[65,8] > > org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy > is not abstract and does not override abstract method > setIsProcessingBacklog(boolean) in > org.apache.flink.api.connector.source.SplitEnumeratorContext > {code} > This interface seems to be added as part of > https://issues.apache.org/jira/browse/FLINK-32514 / > https://cwiki.apache.org/confluence/display/FLINK/FLIP-309%3A+Support+using+larger+checkpointing+interval+when+source+is+processing+backlog > The FLIP indicates that the changes should be backward compatible, but that > appears to have not been the case -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-33951][table] Use aggCallNeedRetractions instead of needRetrac… [flink]
liuyongvs commented on code in PR #24015: URL: https://github.com/apache/flink/pull/24015#discussion_r1439142391 ## flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/agg/AggsHandlerCodeGenerator.scala: ## @@ -1230,8 +1232,26 @@ class AggsHandlerCodeGenerator( needReset: Boolean = false, needEmitValue: Boolean = false): Unit = { // check and validate the needed methods +aggBufferCodeGens.zipWithIndex.foreach { + case (aggBufferCodeGen, index) => +aggBufferCodeGen.checkNeededMethods( + needAccumulate, + needRetract && aggCallNeedRetractions(index), + needMerge, Review Comment: `needRetract && index < aggCallNeedRetractions.length && aggCallNeedRetractions(index)` 1. If needRetract is true, aggCallNeedRetractions cannot be null, because it is set by needRetract(aggCallNeedRetractions: Array[Boolean]); we should also add `index < aggCallNeedRetractions.length` because aggCalls only contain the basic functions, while aggBufferCodeGens additionally contain the countStar/distinct code generators. 2. If needRetract is false, aggCallNeedRetractions may be null, but short-circuit evaluation never reaches aggCallNeedRetractions(index), so neither an NPE nor an ArrayIndexOutOfBoundsException can be thrown.
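The short-circuit reasoning in the review comment above can be sketched as follows. This is a hypothetical standalone illustration (names mirror the comment, not the actual planner code):

```java
// Illustrates the guarded check suggested in the review: the && chain
// short-circuits, so a null array or an out-of-range index is never touched.
class Main {
    static boolean needsRetraction(boolean needRetract, boolean[] aggCallNeedRetractions, int index) {
        // When needRetract is false, the array may be null but is never dereferenced.
        // The bound check guards the extra countStar/distinct buffers that have
        // no matching entry in aggCallNeedRetractions.
        return needRetract
                && index < aggCallNeedRetractions.length
                && aggCallNeedRetractions[index];
    }

    public static void main(String[] args) {
        System.out.println(needsRetraction(false, null, 0));                // no NPE
        System.out.println(needsRetraction(true, new boolean[] {true}, 0));
        System.out.println(needsRetraction(true, new boolean[] {true}, 1)); // no out-of-bounds
    }
}
```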
[PR] [FLINK-34149][Runtime/Checkpointing] Fix SplitEnumeratorContext compatibility issue [flink]
yunfengzhou-hub opened a new pull request, #24146: URL: https://github.com/apache/flink/pull/24146 ## What is the purpose of the change This PR adds a default implementation to SplitEnumeratorContext#setIsProcessingBacklog to fix the compatibility issue introduced in #22931. ## Brief change log - Adds a default implementation to SplitEnumeratorContext#setIsProcessingBacklog. ## Verifying this change It has been verified that the flink-connector-kafka repo no longer throws the compatibility error for this method when building against flink-1.19-snapshot once the changes in this PR are introduced. The build would still fail due to other API-incompatible changes, though those changes are not related to #22931. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no - If yes, how is the feature documented? not applicable
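The fix described above relies on Java default methods: adding a method with a default body to an interface keeps implementations compiled against the old interface valid. A minimal sketch (hypothetical interface and class names, not Flink's actual code):

```java
// A new method added to an existing interface with a default no-op body:
// older implementations keep compiling without overriding it.
interface EnumeratorContext {
    void sendEvent(String event);

    // Newly added method; the default body preserves source compatibility.
    default void setIsProcessingBacklog(boolean isProcessingBacklog) {
        // no-op by default
    }
}

// Written against the "old" interface; compiles without overriding the new method.
class LegacyContext implements EnumeratorContext {
    public void sendEvent(String event) {
        System.out.println("sent: " + event);
    }
}

class Main {
    public static void main(String[] args) {
        EnumeratorContext ctx = new LegacyContext();
        ctx.sendEvent("hello");
        ctx.setIsProcessingBacklog(true); // resolves to the default no-op
    }
}
```

Without the default body, `LegacyContext` would fail to compile with the same "is not abstract and does not override abstract method" error quoted in FLINK-34149.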
Re: [PR] [FLINK-32743][Connectors/Kafka] Parse data from kafka connect and convert it into regular JSON data [flink-connector-kafka]
sunxiaojian commented on PR #42: URL: https://github.com/apache/flink-connector-kafka/pull/42#issuecomment-1899549561 > @sunxiaojian Can you please rebase your PR? @MartijnVisser Thanks for your review, it has been processed
Re: [PR] [FLINK-34104][autoscaler] Improve the ScalingReport format of autoscaling [flink-kubernetes-operator]
1996fanrui commented on code in PR #757: URL: https://github.com/apache/flink-kubernetes-operator/pull/757#discussion_r1458204048 ## flink-autoscaler/src/main/java/org/apache/flink/autoscaler/event/AutoscalerEventUtils.java: ## @@ -0,0 +1,69 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.autoscaler.event; + +import org.apache.flink.annotation.Experimental; + +import java.util.ArrayList; +import java.util.List; +import java.util.regex.Pattern; +import java.util.stream.Collectors; + +/** The utils of {@link AutoScalerEventHandler}. */ +@Experimental +public class AutoscalerEventUtils { + +private static final Pattern SCALING_REPORT_SEPARATOR = Pattern.compile("\\{(.+?)\\}"); +private static final Pattern VERTEX_SCALING_REPORT_PATTERN = +Pattern.compile( +"Vertex ID (.*?) \\| Parallelism (.*?) -> (.*?) \\| Processing capacity (.*?) -> (.*?) \\| Target data rate (.*)"); + +/** Parse the scaling report from original scaling report event. */ +public static List parseVertexScalingReports(String scalingReport) { Review Comment: Keep it here for 2 reasons: 1. 
To test and ensure that the ScalingReport can be parsed correctly - In the future, we need to be careful when modifying the contents of `ScalingReport`, because its format is parsed by some users. 2. The public utils can be used by all autoscaler users. - That's why I added `@Experimental` to the `AutoscalerEventUtils` class. For reason 1, keeping it here or in the test namespace are both fine. For reason 2, keeping it here makes it easy for users to use. I can move it to the test namespace if you think it's not necessary, thank you
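The separator pattern shown in AutoscalerEventUtils above (`\{(.+?)\}`) splits the event body into one chunk per vertex. A runnable sketch of that first parsing step, applied to a made-up report string (the real report wording may differ):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class Main {
    // Same separator regex as in AutoscalerEventUtils: non-greedy match
    // of everything between each pair of braces.
    private static final Pattern SEPARATOR = Pattern.compile("\\{(.+?)\\}");

    static List<String> vertexReports(String scalingReport) {
        List<String> reports = new ArrayList<>();
        Matcher m = SEPARATOR.matcher(scalingReport);
        while (m.find()) {
            reports.add(m.group(1)); // text between one {...} pair
        }
        return reports;
    }

    public static void main(String[] args) {
        // Hypothetical report string, only for demonstration.
        String report = "Scaling execution enabled:"
                + " { Vertex ID v1 | Parallelism 1 -> 2 }"
                + " { Vertex ID v2 | Parallelism 4 -> 2 }";
        List<String> reports = vertexReports(report);
        System.out.println(reports.size());
        System.out.println(reports.get(0).trim());
    }
}
```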
[jira] [Assigned] (FLINK-23687) Introduce partitioned lookup join to enforce input of LookupJoin to hash shuffle by lookup keys
[ https://issues.apache.org/jira/browse/FLINK-23687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benchao Li reassigned FLINK-23687: -- Assignee: yunfan (was: Jing Zhang) > Introduce partitioned lookup join to enforce input of LookupJoin to hash > shuffle by lookup keys > --- > > Key: FLINK-23687 > URL: https://issues.apache.org/jira/browse/FLINK-23687 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: Jing Zhang >Assignee: yunfan >Priority: Major > Labels: pull-request-available, stale-assigned > > Add Sql query hint to enable LookupJoin shuffle by join key of left input
[jira] [Commented] (FLINK-23687) Introduce partitioned lookup join to enforce input of LookupJoin to hash shuffle by lookup keys
[ https://issues.apache.org/jira/browse/FLINK-23687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808433#comment-17808433 ] Benchao Li commented on FLINK-23687: [~yunfanfight...@foxmail.com] Thanks for taking it, I've assigned to you.
Re: [PR] [FLINK-33264][table] Support source parallelism setting for DataGen connector [flink]
libenchao commented on PR #24133: URL: https://github.com/apache/flink/pull/24133#issuecomment-1899537774 > @libenchao Could you help review it after #24128 is merged? I assigned it to myself.
Re: [PR] [FLINK-34080][configuration] Simplify the Configuration [flink]
1996fanrui commented on PR #24088: URL: https://github.com/apache/flink/pull/24088#issuecomment-1899532157 @flinkbot run azure
Re: [PR] [FLINK-33264][table] Support source parallelism setting for DataGen connector [flink]
X-czh commented on PR #24133: URL: https://github.com/apache/flink/pull/24133#issuecomment-1899531944 @libenchao Could you help review it after #24128 is merged?
Re: [PR] [FLINK-33565][Scheduler] ConcurrentExceptions works with exception merging [flink]
1996fanrui commented on PR #24003: URL: https://github.com/apache/flink/pull/24003#issuecomment-1899532108 @flinkbot run azure
Re: [PR] [FLINK-34083][config] Deprecate string configuration keys and unused constants in ConfigConstants [flink]
1996fanrui commented on PR #24089: URL: https://github.com/apache/flink/pull/24089#issuecomment-1899531748 @flinkbot run azure
[jira] [Commented] (FLINK-34156) Move Flink Calcite rules from Scala to Java
[ https://issues.apache.org/jira/browse/FLINK-34156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808430#comment-17808430 ] Jacky Lau commented on FLINK-34156: --- hi [~Sergey Nuyanzin] can I also help with this task? > Move Flink Calcite rules from Scala to Java > --- > > Key: FLINK-34156 > URL: https://issues.apache.org/jira/browse/FLINK-34156 > Project: Flink > Issue Type: Technical Debt > Components: Table SQL / Planner >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Fix For: 2.0.0 > > > This is an umbrella task for the migration of Calcite rules from Scala to Java > mentioned at https://cwiki.apache.org/confluence/display/FLINK/2.0+Release > The reason is that since 1.28.0 ( CALCITE-4787 - Move core to use Immutables > instead of ImmutableBeans ) Calcite started to use Immutables > (https://immutables.github.io/) and since 1.29.0 removed ImmutableBeans ( > CALCITE-4839 - Remove remnants of ImmutableBeans post 1.28 release ). All > rule configuration related api which is not Immutables based is marked as > deprecated. Since Immutables implies code generation during java compilation, > it seems impossible to use for rules in Scala code.
[jira] [Commented] (FLINK-34156) Move Flink Calcite rules from Scala to Java
[ https://issues.apache.org/jira/browse/FLINK-34156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808425#comment-17808425 ] Jiabao Sun commented on FLINK-34156: Hi [~Sergey Nuyanzin], can I help with this task?
[jira] [Commented] (FLINK-34144) Update the documentation and configuration description about dynamic source parallelism inference
[ https://issues.apache.org/jira/browse/FLINK-34144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808424#comment-17808424 ] Zhu Zhu commented on FLINK-34144: - [~xiasun] Assigned. Feel free to open a PR for it. > Update the documentation and configuration description about dynamic source > parallelism inference > - > > Key: FLINK-34144 > URL: https://issues.apache.org/jira/browse/FLINK-34144 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.19.0 >Reporter: xingbe >Priority: Major > > [FLIP-379|https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs#FLIP379:Dynamicsourceparallelisminferenceforbatchjobs-IntroduceDynamicParallelismInferenceinterfaceforSource] > introduces the new feature of dynamic source parallelism inference, and we > plan to update the documentation and configuration items accordingly.
[jira] [Commented] (FLINK-34143) Modify the effective strategy of `execution.batch.adaptive.auto-parallelism.default-source-parallelism`
[ https://issues.apache.org/jira/browse/FLINK-34143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808423#comment-17808423 ] Zhu Zhu commented on FLINK-34143: - [~xiasun]Assigned. Feel free to open a PR for it. > Modify the effective strategy of > `execution.batch.adaptive.auto-parallelism.default-source-parallelism` > --- > > Key: FLINK-34143 > URL: https://issues.apache.org/jira/browse/FLINK-34143 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.19.0 >Reporter: xingbe >Assignee: xingbe >Priority: Major > > Currently, if users do not set the > `{{{}execution.batch.adaptive.auto-parallelism.default-source-parallelism`{}}} > configuration option, the AdaptiveBatchScheduler defaults to a parallelism > of 1 for source vertices. In > [FLIP-379|https://cwiki.apache.org/confluence/display/FLINK/FLIP-379%3A+Dynamic+source+parallelism+inference+for+batch+jobs#FLIP379:Dynamicsourceparallelisminferenceforbatchjobs-IntroduceDynamicParallelismInferenceinterfaceforSource], > the value of > `{{{}execution.batch.adaptive.auto-parallelism.default-source-parallelism`{}}} > will act as the upper bound for inferring dynamic source parallelism, and > continuing with the current policy is no longer appropriate. > We plan to change the effectiveness strategy of > `{{{}execution.batch.adaptive.auto-parallelism.default-source-parallelism`{}}}; > when the user does not set this config option, we will use the value of > `{{{}execution.batch.adaptive.auto-parallelism.max-parallelism`{}}} as the > upper bound for source parallelism inference. If > {{`execution.batch.adaptive.auto-parallelism.max-parallelism`}} is also not > configured, the value of `{{{}parallelism.default`{}}} will be used as a > fallback.
[jira] [Comment Edited] (FLINK-34143) Modify the effective strategy of `execution.batch.adaptive.auto-parallelism.default-source-parallelism`
[ https://issues.apache.org/jira/browse/FLINK-34143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808423#comment-17808423 ] Zhu Zhu edited comment on FLINK-34143 at 1/19/24 1:54 AM: -- [~xiasun] Assigned. Feel free to open a PR for it. was (Author: zhuzh): [~xiasun]Assigned. Feel free to open a PR for it.
[jira] [Assigned] (FLINK-34143) Modify the effective strategy of `execution.batch.adaptive.auto-parallelism.default-source-parallelism`
[ https://issues.apache.org/jira/browse/FLINK-34143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhu Zhu reassigned FLINK-34143: --- Assignee: xingbe
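The fallback chain proposed in FLINK-34143 above (default-source-parallelism, else max-parallelism, else parallelism.default) can be sketched as follows. The option keys are taken from the issue text; the resolution logic is an illustrative assumption, not the scheduler's actual code:

```java
import java.util.Map;
import java.util.Optional;

class Main {
    // Resolve the upper bound for source parallelism inference, walking the
    // fallback chain described in FLINK-34143. A plain String->Integer map
    // stands in for Flink's Configuration here.
    static int sourceParallelismUpperBound(Map<String, Integer> conf) {
        return Optional.ofNullable(
                        conf.get("execution.batch.adaptive.auto-parallelism.default-source-parallelism"))
                .orElseGet(() -> Optional.ofNullable(
                                conf.get("execution.batch.adaptive.auto-parallelism.max-parallelism"))
                        .orElseGet(() -> conf.getOrDefault("parallelism.default", 1)));
    }

    public static void main(String[] args) {
        // Neither adaptive option set: falls back to parallelism.default.
        System.out.println(sourceParallelismUpperBound(Map.of("parallelism.default", 8)));
        // max-parallelism set: it wins over parallelism.default.
        System.out.println(sourceParallelismUpperBound(Map.of(
                "execution.batch.adaptive.auto-parallelism.max-parallelism", 128,
                "parallelism.default", 8)));
    }
}
```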
Re: [PR] [FLINK-33819] support set CompressionType for RocksDBStateBackend [flink]
masteryhx commented on code in PR #24072: URL: https://github.com/apache/flink/pull/24072#discussion_r1458185586 ## docs/layouts/shortcodes/generated/rocksdb_configurable_configuration.html: ## @@ -62,6 +62,12 @@ Enum The specified compaction style for DB. Candidate compaction style is LEVEL, FIFO, UNIVERSAL or NONE, and Flink chooses 'LEVEL' as default style.Possible values:"LEVEL""UNIVERSAL""FIFO""NONE" + Review Comment: The CI seems to have failed because the doc is not consistent with [RocksDBConfigurableOptions.java](https://github.com/apache/flink/pull/24072/files#diff-162968bbd8c0d2dbfd91b191a97fa012ab1b3c27a329a2581746a66ad8ac76a3). Could you check it? You can regenerate the doc following the method in the README of flink-docs.
[jira] [Commented] (FLINK-34135) A number of ci failures with Access to the path '.../_work/_temp/containerHandlerInvoker.js' is denied.
[ https://issues.apache.org/jira/browse/FLINK-34135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808420#comment-17808420 ] Yang Wang commented on FLINK-34135: --- The CI should work since the {{containerHandlerInvoker.js}} and other files are recreated after restarting the Azure agents. > A number of ci failures with Access to the path > '.../_work/_temp/containerHandlerInvoker.js' is denied. > --- > > Key: FLINK-34135 > URL: https://issues.apache.org/jira/browse/FLINK-34135 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Reporter: Sergey Nuyanzin >Assignee: Jeyhun Karimov >Priority: Blocker > Labels: test-stability > > There is a number of builds failing with something like > {noformat} > ##[error]Access to the path > '/home/agent03/myagent/_work/_temp/containerHandlerInvoker.js' is denied. > {noformat} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56490=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=fb588352-ef18-568d-b447-699986250ccb > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=554d7c3f-d38e-55f4-96b4-ada3a9cb7d6f=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=fa307d6d-91b1-5ab6-d460-ef50f552b1fe=1798d435-832b-51fe-a9ad-efb9abf4ab04=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=a1ac4ce4-9a4f-5fdb-3290-7e163fba19dc=e4c57254-ec06-5788-3f8e-5ad5dffb418e=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=56881383-f398-5091-6b3b-22a7eeb7cfa8=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=2d9c27d0-8dbb-5be9-7271-453f74f48ab3=9 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56481=logs=162f98f7-8967-5f47-2782-a1e178ec2ad3=c9934c56-710d-5f85-d2b8-28ec1fd700ed=9
Re: [PR] [FLINK-24024][table-planner] support session window tvf in plan [flink]
xuyangzhong commented on PR #23505: URL: https://github.com/apache/flink/pull/23505#issuecomment-1899499981 Hi, @snuyanzin. Regarding this PR, my original intention was to implement it according to the existing FLIP and not to introduce features that haven't been discussed. IMO, although Calcite provides support for `ORDER BY` on `SET SEMANTICS TABLE`, supporting `order by` for session window needs to be further discussed in a separate JIRA. As far as this PR is concerned, I have reserved the `order by` field in `RexSetSemanticsTableCall`. If it is discussed in a subsequent separate JIRA thread that it is necessary to add `order by` syntax to the session window tvf, we can also quickly support it. WDYT?
[jira] [Commented] (FLINK-34155) Recurring SqlExecutionException
[ https://issues.apache.org/jira/browse/FLINK-34155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17808419#comment-17808419 ] lincoln lee commented on FLINK-34155: - [~jeyhunkarimov] thanks for the investigation! Could you offer more information about the test cases related to the above exceptions? I saw there were several cases expecting these error msgs, e.g., statement_set.q and begin_statement_set.q > Recurring SqlExecutionException > --- > > Key: FLINK-34155 > URL: https://issues.apache.org/jira/browse/FLINK-34155 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.8.0 >Reporter: Jeyhun Karimov >Priority: Blocker > Labels: test > Attachments: disk-full.log > > > When analyzing a very big maven log file in our CI system, I found out that > there is a recurring {{{}SqlExecutionException (a subset of the log file is > attached){}}}: > > {{org.apache.flink.table.gateway.service.utils.SqlExecutionException: Only > 'INSERT/CREATE TABLE AS' statement is allowed in Statement Set or use 'END' > statement to submit Statement Set.}} > > > which leads to: > > {{06:31:41,155 [flink-rest-server-netty-worker-thread-22] ERROR > org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler [] > - Unhandled exception.}}
[jira] [Comment Edited] (FLINK-30656) Provide more logs for schema compatibility check
[ https://issues.apache.org/jira/browse/FLINK-30656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807968#comment-17807968 ] Hangxiang Yu edited comment on FLINK-30656 at 1/19/24 1:37 AM: --- We should support to remain some messages for TypeSerializerSchemaCompatibility just like SchemaCompatibility in Avro. Then every TypeSerializer could defined their own message about compatibility. I have two proposals: 1. Add new method called TypeSerializerSchemaCompatibility#incompatible and #compatibleAfterMigration to support message, e.g. TypeSerializerSchemaCompatibility#incompatible(String message). And deprecated related old methods. {code:java} public static TypeSerializerSchemaCompatibility incompatible(String message) { return new TypeSerializerSchemaCompatibility<>(Type.INCOMPATIBLE, message, null); } {code} 2. Add a new method called TypeSerializerSchemaCompatibility#withMessage: {code:java} private TypeSerializerSchemaCompatibility withMessage(String message) { this.message = message; return this; } {code} Proposal 1 behaves just like SchemaCompatibility in Avro who forces caller to add message. But since TypeSerializerSchemaCompatibility is a PublicEvolving API, maybe we need a FLIP firstly? Proposal 2 just add a new method so that we will not break change, but every callers (including some custom-defined TypeSerializers) should call it manually because it will not fail when compile. [~leonard] [~pnowojski] [~Weijie Guo] WDYT? was (Author: masteryhx): We should support to remain some messages for TypeSerializerSchemaCompatibility just like SchemaCompatibility in Avro. Then every TypeSerializer could defined their own message about compatibility. I have two proposals: 1. Add new method called TypeSerializerSchemaCompatibility#incompatible and #compatibleAfterMigration to support message, e.g. TypeSerializerSchemaCompatibility#incompatible(String message). And deprecated related old methods. 
{code:java} public static TypeSerializerSchemaCompatibility incompatible(String message) { return new TypeSerializerSchemaCompatibility<>(Type.INCOMPATIBLE, message, null); } {code} 2. Add a new method called TypeSerializerSchemaCompatibility#withMessage: {code:java} private TypeSerializerSchemaCompatibility withMessage(String message) { this.message = message; return this; } {code} Proposal 1 behaves just like SchemaCompatibility in Avro who forces caller to add message. But since TypeSerializerSchemaCompatibility is a PublicEvolving API, maybe we need a FLIP firstly? Proposal 2 just add a new method so that we will not break change, but every callers (including some custom-defined TypeSerializers) should call it manually because it will not fail when compile. [~leonard] [~Weijie Guo] WDYT? > Provide more logs for schema compatibility check > > > Key: FLINK-30656 > URL: https://issues.apache.org/jira/browse/FLINK-30656 > Project: Flink > Issue Type: Improvement > Components: API / Type Serialization System >Reporter: Hangxiang Yu >Assignee: Hangxiang Yu >Priority: Major > > Currently, we have very few logs and exception info when checking schema > compatibility. > It's difficult to see why the compatibility is not compatible, especially for > some complicated nested serializers. > For example, for map serializer, when it's not compatible, we may only see > below without other information: > {code:java} > Caused by: org.apache.flink.util.StateMigrationException: The new state > serializer > (org.apache.flink.api.common.typeutils.base.MapSerializer@e95e076a) must not > be incompatible with the old state serializer > (org.apache.flink.api.common.typeutils.base.MapSerializer@c33b100f). {code} > So I think we could add more infos when checking the compatibility.
[jira] [Updated] (FLINK-34038) IncrementalGroupAggregateRestoreTest.testRestore fails
[ https://issues.apache.org/jira/browse/FLINK-34038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bonnie Varghese updated FLINK-34038: Parent: FLINK-33421 Issue Type: Sub-task (was: Bug) > IncrementalGroupAggregateRestoreTest.testRestore fails > -- > > Key: FLINK-34038 > URL: https://issues.apache.org/jira/browse/FLINK-34038 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Affects Versions: 1.19.0 >Reporter: Matthias Pohl >Assignee: Bonnie Varghese >Priority: Major > Labels: test-stability > > {{IncrementalGroupAggregateRestoreTest.testRestore}} fails on {{master}}: > {code} > Jan 08 18:53:18 18:53:18.406 [ERROR] Tests run: 3, Failures: 1, Errors: 0, > Skipped: 1, Time elapsed: 8.706 s <<< FAILURE! -- in > org.apache.flink.table.planner.plan.nodes.exec.stream.IncrementalGroupAggregateRestoreTest > Jan 08 18:53:18 18:53:18.406 [ERROR] > org.apache.flink.table.planner.plan.nodes.exec.stream.IncrementalGroupAggregateRestoreTest.testRestore(TableTestProgram, > ExecNodeMetadata)[2] -- Time elapsed: 1.368 s <<< FAILURE! > Jan 08 18:53:18 java.lang.AssertionError: > Jan 08 18:53:18 > Jan 08 18:53:18 Expecting actual: > Jan 08 18:53:18 ["+I[1, 5, 2, 3]", > Jan 08 18:53:18 "+I[2, 2, 1, 1]", > Jan 08 18:53:18 "-U[1, 5, 2, 3]", > Jan 08 18:53:18 "+U[1, 3, 2, 2]", > Jan 08 18:53:18 "-U[1, 3, 2, 2]", > Jan 08 18:53:18 "+U[1, 9, 3, 4]"] > Jan 08 18:53:18 to contain exactly in any order: > Jan 08 18:53:18 ["+I[1, 5, 2, 3]", "+I[2, 2, 1, 1]", "-U[1, 5, 2, 3]", > "+U[1, 9, 3, 4]"] > Jan 08 18:53:18 but the following elements were unexpected: > Jan 08 18:53:18 ["+U[1, 3, 2, 2]", "-U[1, 3, 2, 2]"] > Jan 08 18:53:18 > Jan 08 18:53:18 at > org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:292) > Jan 08 18:53:18 at java.lang.reflect.Method.invoke(Method.java:498) > [...] 
> {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56110=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=10822
Re: [PR] [FLINK-34163][table] Migration of SimplifyJoinConditionRule to java [flink]
flinkbot commented on PR #24145: URL: https://github.com/apache/flink/pull/24145#issuecomment-1899376797 ## CI report: * 2328e1349a9515b710ab28ab63549ac91e9302ab UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-24024][table-planner] support session window tvf in plan [flink]
snuyanzin commented on PR #23505: URL: https://github.com/apache/flink/pull/23505#issuecomment-1899372916 I would vote for having order by support as well within this PR (assuming that there are no blockers for that). WDYT?
Re: [PR] [FLINK-34162][table] Migrate LogicalUnnestRule to java [flink]
flinkbot commented on PR #24144: URL: https://github.com/apache/flink/pull/24144#issuecomment-1899370307 ## CI report: * fb9cb9e1dbcc55e715f9345e7a676f04dd3aab44 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
Re: [PR] [FLINK-33365] include filters with Lookup joins [flink-connector-jdbc]
snuyanzin commented on code in PR #79: URL: https://github.com/apache/flink-connector-jdbc/pull/79#discussion_r1458074979 ## flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcRowDataLookupFunction.java: ## @@ -116,6 +124,15 @@ public void open(FunctionContext context) throws Exception { } } +private FieldNamedPreparedStatement setPredicateParams(FieldNamedPreparedStatement statement) +throws SQLException { +for (int i = 0; i < pushdownParams.length; ++i) { Review Comment: It seems that so far there is no test checking what happens when we enter this loop
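To make the review comment concrete, one way to test "entering the loop" without a real database is to bind against a fake statement and assert on what was bound. Everything below is invented for illustration (`PredicateParamSketch`, `FakeStmt`, and the `keyCount + i + 1` index offset are assumptions, not the actual `FieldNamedPreparedStatement` contract):

```java
// Hypothetical unit-test idea for the pushdown-param loop: a fake statement
// records every bound parameter so a test can assert the loop actually ran.
import java.util.ArrayList;
import java.util.List;

public class PredicateParamSketch {

    // Stand-in for the statement interface; only the call we need.
    interface Stmt { void setObject(int fieldIndex, Object value); }

    static final class FakeStmt implements Stmt {
        final List<String> bound = new ArrayList<>();
        public void setObject(int fieldIndex, Object value) {
            bound.add(fieldIndex + "=" + value);
        }
    }

    // Mirrors the shape of setPredicateParams: bind each pushdown param
    // after the lookup-key slots (offset is an assumption for this sketch).
    static Stmt setPredicateParams(Stmt statement, Object[] pushdownParams, int keyCount) {
        for (int i = 0; i < pushdownParams.length; ++i) {
            statement.setObject(keyCount + i + 1, pushdownParams[i]);
        }
        return statement;
    }

    public static void main(String[] args) {
        FakeStmt stmt = new FakeStmt();
        // Two pushdown params after one lookup key: the loop must bind both.
        setPredicateParams(stmt, new Object[] {18, "US"}, 1);
        System.out.println(stmt.bound);
    }
}
```

A test built this way fails loudly if the loop is skipped (empty `bound` list) or binds at the wrong indices, which is exactly the coverage gap the comment points out.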
[jira] [Updated] (FLINK-34163) Migrate SimplifyJoinConditionRule
[ https://issues.apache.org/jira/browse/FLINK-34163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34163: --- Labels: pull-request-available (was: ) > Migrate SimplifyJoinConditionRule > - > > Key: FLINK-34163 > URL: https://issues.apache.org/jira/browse/FLINK-34163 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available >
[PR] [FLINK-34163][table] Migration of SimplifyJoinConditionRule to java [flink]
snuyanzin opened a new pull request, #24145: URL: https://github.com/apache/flink/pull/24145 ## What is the purpose of the change The PR migrates `SimplifyJoinConditionRule` to Java. It doesn't touch `SimplifyJoinConditionRuleTest`, to be sure that the Java version continues passing it. ## Verifying this change This change is already covered by existing tests. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable)
Re: [PR] [FLINK-34161][table] Migration of RewriteMinusAllRule to java [flink]
flinkbot commented on PR #24143: URL: https://github.com/apache/flink/pull/24143#issuecomment-1899363542 ## CI report: * 24534c846740a8abf784616925b68b4418d1db16 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Created] (FLINK-34163) Migrate SimplifyJoinConditionRule
Sergey Nuyanzin created FLINK-34163: --- Summary: Migrate SimplifyJoinConditionRule Key: FLINK-34163 URL: https://issues.apache.org/jira/browse/FLINK-34163 Project: Flink Issue Type: Sub-task Components: Table SQL / Planner Reporter: Sergey Nuyanzin Assignee: Sergey Nuyanzin
Re: [PR] [FLINK-34160][table] Migration of FlinkCalcMergeRule to java [flink]
flinkbot commented on PR #24142: URL: https://github.com/apache/flink/pull/24142#issuecomment-1899362772 ## CI report: * 80fbc2393ec077476aec2eecb498c2ebf29aff96 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[jira] [Updated] (FLINK-34162) Migrate LogicalUnnestRule
[ https://issues.apache.org/jira/browse/FLINK-34162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34162: --- Labels: pull-request-available (was: ) > Migrate LogicalUnnestRule > - > > Key: FLINK-34162 > URL: https://issues.apache.org/jira/browse/FLINK-34162 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: Sergey Nuyanzin >Assignee: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available >
[jira] [Created] (FLINK-34162) Migrate LogicalUnnestRule
Sergey Nuyanzin created FLINK-34162: --- Summary: Migrate LogicalUnnestRule Key: FLINK-34162 URL: https://issues.apache.org/jira/browse/FLINK-34162 Project: Flink Issue Type: Sub-task Components: Table SQL / Planner Reporter: Sergey Nuyanzin Assignee: Sergey Nuyanzin
[PR] [FLINK-34162][table] Migrate LogicalUnnestRule to java [flink]
snuyanzin opened a new pull request, #24144: URL: https://github.com/apache/flink/pull/24144 ## What is the purpose of the change The PR migrates `LogicalUnnestRule` to Java. It doesn't touch `LogicalUnnestRuleTest`, to be sure that the Java version continues passing it. ## Verifying this change This change is already covered by existing tests. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable)