Re: [PR] [FLINK-35512][flink-clients] Don't depend on the flink-clients jar actually existing. [flink]

2024-06-12 Thread via GitHub


flinkbot commented on PR #24931:
URL: https://github.com/apache/flink/pull/24931#issuecomment-2164497999

   
   ## CI report:
   
   * af743054861262cb5362528ff0e867717df16916 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34482][checkpoint] Rename checkpointing options [flink]

2024-06-12 Thread via GitHub


masteryhx closed pull request #24878: [FLINK-34482][checkpoint] Rename 
checkpointing options
URL: https://github.com/apache/flink/pull/24878


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][checkpointing] Normalize file-merging sub dir [flink]

2024-06-12 Thread via GitHub


zoltar9264 commented on code in PR #24928:
URL: https://github.com/apache/flink/pull/24928#discussion_r1637586337


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/filemerging/AcrossCheckpointFileMergingSnapshotManager.java:
##
@@ -31,12 +33,13 @@ public class AcrossCheckpointFileMergingSnapshotManager 
extends FileMergingSnapshotManagerBase {
 private final PhysicalFilePool filePool;
 
 public AcrossCheckpointFileMergingSnapshotManager(
-String id,
+JobID jobId,
+ResourceID tmResourceId,

Review Comment:
   Thanks for this suggestion, I moved the generation of the id to the builder.
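
   A minimal sketch of the resulting shape (hypothetical names; the manager 
only needs an id that is unique per job and task manager):

```java
// Editor's sketch, not the actual patch: the builder derives the unique id
// once, so the snapshot manager itself only ever sees an opaque string.
final class ManagerIdSketch {
    static String deriveManagerId(String jobId, String tmResourceId) {
        // Unique per (job, task manager) pair, which is all the manager needs.
        return jobId + "-" + tmResourceId;
    }

    public static void main(String[] args) {
        System.out.println(deriveManagerId("a1b2c3d4", "tm-42"));
    }
}
```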



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35587) Job fails with "The read buffer is null in credit-based input channel" on TPC-DS 10TB benchmark

2024-06-12 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-35587:
--
Description: 
While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
encountered a failure in certain queries, specifically query 75, resulting in 
the error "The read buffer is null in credit-based input channel".

Using a binary search approach, I identified the offending commit as 
FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
disappears. Re-applying FLINK-33668 to the branch re-introduces the error.

Please see the attached image for the error stack trace.

!image-2024-06-13-13-48-37-162.png|width=846,height=555!

  was:
While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
encountered a failure in certain queries, specifically query 75, resulting in 
the error "The read buffer is null in credit-based input channel".

Using a binary search approach, I identified the offending commit as 
FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
disappears. Re-applying FLINK-33668 to the branch re-introduces the error.

Please see the attached image for the error stack trace.

!image-2024-06-13-13-48-37-162.png!


> Job fails with "The read buffer is null in credit-based input channel" on 
> TPC-DS 10TB benchmark
> ---
>
> Key: FLINK-35587
> URL: https://issues.apache.org/jira/browse/FLINK-35587
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Junrui Li
>Priority: Major
> Attachments: image-2024-06-13-13-48-37-162.png
>
>
> While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
> encountered a failure in certain queries, specifically query 75, resulting in 
> the error "The read buffer is null in credit-based input channel".
> Using a binary search approach, I identified the offending commit as 
> FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
> disappears. Re-applying FLINK-33668 to the branch re-introduces the error.
> Please see the attached image for the error stack trace.
> !image-2024-06-13-13-48-37-162.png|width=846,height=555!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35587) Job fails with "The read buffer is null in credit-based input channel" on TPC-DS 10TB benchmark

2024-06-12 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-35587:
--
Description: 
While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
encountered a failure in certain queries, specifically query 75, resulting in 
the error "The read buffer is null in credit-based input channel".

Using a binary search approach, I identified the offending commit as 
FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
disappears. Re-applying FLINK-33668 to the branch re-introduces the error.

Please see the attached image for the error stack trace.

!image-2024-06-13-13-48-37-162.png!

  was:
While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
encountered a failure in certain queries, specifically query 75, resulting in 
the error "The read buffer is null in credit-based input channel".

Using a binary search approach, I identified the offending commit as 
FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
disappears. Re-applying FLINK-33668 to the branch re-introduces the error.


> Job fails with "The read buffer is null in credit-based input channel" on 
> TPC-DS 10TB benchmark
> ---
>
> Key: FLINK-35587
> URL: https://issues.apache.org/jira/browse/FLINK-35587
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Junrui Li
>Priority: Major
> Attachments: image-2024-06-13-13-48-37-162.png
>
>
> While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
> encountered a failure in certain queries, specifically query 75, resulting in 
> the error "The read buffer is null in credit-based input channel".
> Using a binary search approach, I identified the offending commit as 
> FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
> disappears. Re-applying FLINK-33668 to the branch re-introduces the error.
> Please see the attached image for the error stack trace.
> !image-2024-06-13-13-48-37-162.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35587) Job fails with "The read buffer is null in credit-based input channel" on TPC-DS 10TB benchmark

2024-06-12 Thread Junrui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junrui Li updated FLINK-35587:
--
Attachment: image-2024-06-13-13-48-37-162.png

> Job fails with "The read buffer is null in credit-based input channel" on 
> TPC-DS 10TB benchmark
> ---
>
> Key: FLINK-35587
> URL: https://issues.apache.org/jira/browse/FLINK-35587
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Junrui Li
>Priority: Major
> Attachments: image-2024-06-13-13-48-37-162.png
>
>
> While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
> encountered a failure in certain queries, specifically query 75, resulting in 
> the error "The read buffer is null in credit-based input channel".
> Using a binary search approach, I identified the offending commit as 
> FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
> disappears. Re-applying FLINK-33668 to the branch re-introduces the error.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35587) Job fails with "The read buffer is null in credit-based input channel" on TPC-DS 10TB benchmark

2024-06-12 Thread Junrui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854632#comment-17854632
 ] 

Junrui Li commented on FLINK-35587:
---

[~Weijie Guo] [~tanyuxin] Would you mind taking a look? Thanks in advance!

> Job fails with "The read buffer is null in credit-based input channel" on 
> TPC-DS 10TB benchmark
> ---
>
> Key: FLINK-35587
> URL: https://issues.apache.org/jira/browse/FLINK-35587
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Junrui Li
>Priority: Major
>
> While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
> encountered a failure in certain queries, specifically query 75, resulting in 
> the error "The read buffer is null in credit-based input channel".
> Using a binary search approach, I identified the offending commit as 
> FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
> disappears. Re-applying FLINK-33668 to the branch re-introduces the error.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35587) Job fails with "The read buffer is null in credit-based input channel" on TPC-DS 10TB benchmark

2024-06-12 Thread Junrui Li (Jira)
Junrui Li created FLINK-35587:
-

 Summary: Job fails with "The read buffer is null in credit-based 
input channel" on TPC-DS 10TB benchmark
 Key: FLINK-35587
 URL: https://issues.apache.org/jira/browse/FLINK-35587
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.20.0
Reporter: Junrui Li


While running the TPC-DS 10TB benchmark on the latest master branch locally, I've 
encountered a failure in certain queries, specifically query 75, resulting in 
the error "The read buffer is null in credit-based input channel".

Using a binary search approach, I identified the offending commit as 
FLINK-33668. After reverting FLINK-33668 and subsequent commits, the issue 
disappears. Re-applying FLINK-33668 to the branch re-introduces the error.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35512) ArtifactFetchManagerTest unit tests fail

2024-06-12 Thread Sam Barker (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854629#comment-17854629
 ] 

Sam Barker commented on FLINK-35512:


I've just opened a [PR|https://github.com/apache/flink/pull/24931] with a fix 
very similar to that suggested by [~robyoung].
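
For illustration, the shape of such a fix (editor's sketch; the actual PR may 
differ): fabricate a small jar on the fly rather than requiring the 
flink-clients artifact to already exist on disk.
{code:java}
// Editor's sketch (not the actual PR): build a throwaway jar in a temp
// directory so the test no longer depends on `mvn package` having produced
// the flink-clients jar beforehand.
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

final class TestJarSketch {
    static Path createDummyJar(Path tempDir) throws Exception {
        Path jar = tempDir.resolve("dummy.jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry("marker.txt"));
            out.write("test-content".getBytes());
            out.closeEntry();
        }
        return jar;
    }
}
{code}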

> ArtifactFetchManagerTest unit tests fail
> 
>
> Key: FLINK-35512
> URL: https://issues.apache.org/jira/browse/FLINK-35512
> Project: Flink
>  Issue Type: Technical Debt
>Affects Versions: 1.19.1
>Reporter: Hong Liang Teoh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.2
>
>
> The below three tests from *ArtifactFetchManagerTest* seem to fail 
> consistently:
>  * ArtifactFetchManagerTest.testFileSystemFetchWithAdditionalUri
>  * ArtifactFetchManagerTest.testMixedArtifactFetch
>  * ArtifactFetchManagerTest.testHttpFetch
> The error printed is
> {code:java}
> java.lang.AssertionError:
> Expecting actual not to be empty
>     at 
> org.apache.flink.client.program.artifact.ArtifactFetchManagerTest.getFlinkClientsJar(ArtifactFetchManagerTest.java:248)
>     at 
> org.apache.flink.client.program.artifact.ArtifactFetchManagerTest.testMixedArtifactFetch(ArtifactFetchManagerTest.java:146)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>     at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>     at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) 
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35512) ArtifactFetchManagerTest unit tests fail

2024-06-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35512:
---
Labels: pull-request-available  (was: )

> ArtifactFetchManagerTest unit tests fail
> 
>
> Key: FLINK-35512
> URL: https://issues.apache.org/jira/browse/FLINK-35512
> Project: Flink
>  Issue Type: Technical Debt
>Affects Versions: 1.19.1
>Reporter: Hong Liang Teoh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.2
>
>
> The below three tests from *ArtifactFetchManagerTest* seem to fail 
> consistently:
>  * ArtifactFetchManagerTest.testFileSystemFetchWithAdditionalUri
>  * ArtifactFetchManagerTest.testMixedArtifactFetch
>  * ArtifactFetchManagerTest.testHttpFetch
> The error printed is
> {code:java}
> java.lang.AssertionError:
> Expecting actual not to be empty
>     at 
> org.apache.flink.client.program.artifact.ArtifactFetchManagerTest.getFlinkClientsJar(ArtifactFetchManagerTest.java:248)
>     at 
> org.apache.flink.client.program.artifact.ArtifactFetchManagerTest.testMixedArtifactFetch(ArtifactFetchManagerTest.java:146)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>     at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>     at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) 
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35512][flink-clients] Don't depend on the flink-clients jar actually existing. [flink]

2024-06-12 Thread via GitHub


SamBarker opened a new pull request, #24931:
URL: https://github.com/apache/flink/pull/24931

   
   
   ## What is the purpose of the change
   
   Ensure that running `mvn clean verify` works for flink-clients (as 
recommended in [contributing to 
flink](https://flink.apache.org/how-to-contribute/contribute-code/#3-open-a-pull-request)).
   
   
   ## Brief change log
   
 - *Unit test fix: do not depend on the artefact currently being built*

   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   *(Please pick either of the following options)*
   
   
   This change added tests and can be verified as follows:
   
 - *running `mvn clean package -pl :flink-clients`*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-26951) Add HASH supported in SQL & Table API

2024-06-12 Thread Kartikey Pant (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854626#comment-17854626
 ] 

Kartikey Pant commented on FLINK-26951:
---

[~lincoln.86xy] - Absolutely, let's channel our efforts into those 
unimplemented features.

> Add HASH supported in SQL & Table API
> -
>
> Key: FLINK-26951
> URL: https://issues.apache.org/jira/browse/FLINK-26951
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.20.0
>
>
> Returns a hash value of the arguments.
> Syntax:
> {code:java}
> hash(expr1, ...) {code}
> Arguments:
>  * {{{}exprN{}}}: An expression of any type.
> Returns:
> An INTEGER.
> Examples:
> {code:java}
> > SELECT hash('Flink', array(123), 2);
>  -1321691492 {code}
> See more:
>  * [Spark|https://spark.apache.org/docs/latest/api/sql/index.html#hash]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix][checkpointing] Normalize file-merging sub dir [flink]

2024-06-12 Thread via GitHub


Zakelly commented on code in PR #24928:
URL: https://github.com/apache/flink/pull/24928#discussion_r1637493879


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/filemerging/AcrossCheckpointFileMergingSnapshotManager.java:
##
@@ -31,12 +33,13 @@ public class AcrossCheckpointFileMergingSnapshotManager 
extends FileMergingSnapshotManagerBase {
 private final PhysicalFilePool filePool;
 
 public AcrossCheckpointFileMergingSnapshotManager(
-String id,
+JobID jobId,
+ResourceID tmResourceId,

Review Comment:
   I'd suggest an id string for `FileMergingManager`. It only needs a unique id, 
without any knowledge of the job or resource id.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34545][cdc-pipeline-connector]Add OceanBase pipeline connector to Flink CDC [flink-cdc]

2024-06-12 Thread via GitHub


whhe commented on code in PR #3360:
URL: https://github.com/apache/flink-cdc/pull/3360#discussion_r1637472845


##
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-oceanbase/src/main/java/org/apache/flink/cdc/connectors/oceanbase/sink/OceanBaseEventSerializationSchema.java:
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.connectors.oceanbase.sink;
+
+import org.apache.flink.cdc.common.data.RecordData;
+import org.apache.flink.cdc.common.event.CreateTableEvent;
+import org.apache.flink.cdc.common.event.DataChangeEvent;
+import org.apache.flink.cdc.common.event.Event;
+import org.apache.flink.cdc.common.event.OperationType;
+import org.apache.flink.cdc.common.event.SchemaChangeEvent;
+import org.apache.flink.cdc.common.event.TableId;
+import org.apache.flink.cdc.common.schema.Column;
+import org.apache.flink.cdc.common.schema.Schema;
+import org.apache.flink.cdc.common.utils.Preconditions;
+import org.apache.flink.cdc.common.utils.SchemaUtils;
+
+import org.apache.flink.shaded.guava31.com.google.common.collect.Lists;
+
+import com.oceanbase.connector.flink.table.DataChangeRecord;
+import com.oceanbase.connector.flink.table.Record;
+import com.oceanbase.connector.flink.table.RecordSerializationSchema;
+import com.oceanbase.connector.flink.table.TableInfo;
+
+import java.time.ZoneId;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/** A serializer for Event to Record. */
+public class OceanBaseEventSerializationSchema implements 
RecordSerializationSchema<Event> {
+
+private final Map<TableId, Schema> schemaMaps = new HashMap<>();
+
+/** ZoneId from pipeline config to support timestamp with local time zone. 
*/
+public final ZoneId pipelineZoneId;
+
+public OceanBaseEventSerializationSchema(ZoneId zoneId) {
+pipelineZoneId = zoneId;
+}
+
+@Override
+public Record serialize(Event event) {
+if (event instanceof DataChangeEvent) {
+return applyDataChangeEvent((DataChangeEvent) event);
+} else if (event instanceof SchemaChangeEvent) {
+SchemaChangeEvent schemaChangeEvent = (SchemaChangeEvent) event;
+TableId tableId = schemaChangeEvent.tableId();
+if (event instanceof CreateTableEvent) {
+schemaMaps.put(tableId, ((CreateTableEvent) 
event).getSchema());
+} else {
+if (!schemaMaps.containsKey(tableId)) {
+throw new RuntimeException("schema of " + tableId + " does 
not exist.");
+}
+schemaMaps.put(
+tableId,
+SchemaUtils.applySchemaChangeEvent(
+schemaMaps.get(tableId), schemaChangeEvent));
+}
+}
+return null;
+}
+
+private Record applyDataChangeEvent(DataChangeEvent event) {
+TableId tableId = event.tableId();
+Schema schema = schemaMaps.get(tableId);
+Preconditions.checkNotNull(schema, event.tableId() + " does not 
exist");
+Object[] values;
+OperationType op = event.op();
+boolean isDelete = false;
+switch (op) {
+case INSERT:
+case UPDATE:
+case REPLACE:
+values = serializerRecord(event.after(), schema);
+break;
+case DELETE:
+values = serializerRecord(event.before(), schema);
+isDelete = true;
+break;
+default:
+throw new UnsupportedOperationException("Unsupported operation " 
+ op);
+}
+return buildDataChangeRecord(tableId, schema, values, isDelete);
+}
+
+private DataChangeRecord buildDataChangeRecord(
+TableId tableId, Schema schema, Object[] values, boolean isDelete) 
{
+com.oceanbase.connector.flink.table.TableId oceanBaseTableId =
+new com.oceanbase.connector.flink.table.TableId(
+tableId.getSchemaName(), tableId.getTableName());

Review Comment:
   Better to add null check for tableId.getSchemaName() here.
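
   Something like the following, as a sketch of the suggested guard (the 
helper name is made up):

```java
// Sketch of the suggested null check (hypothetical helper, not the patch).
final class TableIdGuard {
    static com.oceanbase.connector.flink.table.TableId toOceanBaseTableId(
            org.apache.flink.cdc.common.event.TableId tableId) {
        String schemaName = tableId.getSchemaName();
        if (schemaName == null) {
            throw new IllegalArgumentException(
                    "Schema name of " + tableId + " must not be null");
        }
        return new com.oceanbase.connector.flink.table.TableId(
                schemaName, tableId.getTableName());
    }
}
```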



##

Re: [PR] [FLINK-30400][docs] Add some notes about changes for flink-connector-base dependency in externalized connectors [flink]

2024-06-12 Thread via GitHub


flinkbot commented on PR #24930:
URL: https://github.com/apache/flink/pull/24930#issuecomment-2164316211

   
   ## CI report:
   
   * ace19bbeee9cc5e5f8ecf784aa3ed5e55d073f7f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [mysql] Mysql-cdc adapt mariadb. [flink-cdc]

2024-06-12 Thread via GitHub


linmingze commented on PR #2494:
URL: https://github.com/apache/flink-cdc/pull/2494#issuecomment-2164291002

   Hello, how is this fix going?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-30400][docs] Add some notes about changes for flink-connector-base dependency in externalized connectors [flink]

2024-06-12 Thread via GitHub


ruanhang1993 commented on PR #24930:
URL: https://github.com/apache/flink/pull/24930#issuecomment-2164284187

   @rmetzger @leonardBang Would you like to help to review this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-30400][docs] Add some notes about changes for flink-connector-base dependency in externalized connectors [flink]

2024-06-12 Thread via GitHub


ruanhang1993 opened a new pull request, #24930:
URL: https://github.com/apache/flink/pull/24930

   ## What is the purpose of the change
   
   This pull request adds some notes about changes for flink-connector-base 
dependency in externalized connectors.
   
   ## Brief change log
   
   Add some notes about changes for flink-connector-base dependency in 
externalized connectors docs.
   
   ## Verifying this change
   
   This change is a docs work without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: ( no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35157][runtime] Sources with watermark alignment get stuck once some subtasks finish [flink]

2024-06-12 Thread via GitHub


elon-X commented on code in PR #24757:
URL: https://github.com/apache/flink/pull/24757#discussion_r1637436928


##
flink-runtime/src/test/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorAlignmentTest.java:
##
@@ -310,6 +310,53 @@ void testWatermarkAggregatorRandomly() {
 testWatermarkAggregatorRandomly(10, 1, true, false);
 }
 
+@Test
+void testWatermarkAlignmentWhileSubtaskFinished() throws Exception {
+long maxDrift = 1000L;
+WatermarkAlignmentParams params =
+new WatermarkAlignmentParams(maxDrift, "group1", maxDrift);
+
+final Source<Integer, MockSourceSplit, Set<MockSourceSplit>> mockSource =
+createMockSource();
+
+sourceCoordinator =
+new SourceCoordinator<MockSourceSplit, Set<MockSourceSplit>>(
+OPERATOR_NAME,
+mockSource,
+getNewSourceCoordinatorContext(),
+new CoordinatorStoreImpl(),
+params,
+null) {
+@Override
+void announceCombinedWatermark() {
+super.announceCombinedWatermark();
+}
+};
+
+sourceCoordinator.start();
+
+int subtask0 = 0;
+int subtask1 = 1;
+
+setReaderTaskReady(sourceCoordinator, subtask0, 0);
+setReaderTaskReady(sourceCoordinator, subtask1, 0);
+registerReader(subtask0);
+registerReader(subtask1);
+
+reportWatermarkEvent(sourceCoordinator, subtask0, 42);
+assertLatestWatermarkAlignmentEvent(subtask0, 1042);
+
+reportWatermarkEvent(sourceCoordinator, subtask1, 44);
+assertLatestWatermarkAlignmentEvent(subtask1, 1042);
+
+// mock noMoreSplits event
+assertHasNoMoreSplits(subtask0, true);

Review Comment:
   Updated, thanks for your patience.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35571) ProfilingServiceTest.testRollingDeletion intermittently fails due to improper test isolation

2024-06-12 Thread Sam Barker (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854602#comment-17854602
 ] 

Sam Barker commented on FLINK-35571:


The simplest possible fix for this is to add 

{{ProfilingService.getInstance(new Configuration()).close();}}
to setUp in ProfilingServiceTest, to ensure that the test runs with the 
configuration it requests. However, what I'm not clear on is whether that has 
any knock-on implications for other tests running in parallel. Another 
possible test-only fix is to expose an additional factory method that the 
test can use to construct its own profiling service. 

However, is fixing this in the test just papering over a larger issue? 

As [~gracegrimwood] highlights, there is an instance of the ProfilingService 
being used in a TaskExecutor via MiniCluster but not closed by either of 
those, which leads me to wonder whether MiniCluster should be triggering the 
closing of the ProfilingService. Is the lifecycle of the MiniCluster instance 
sufficiently wide to say that a ProfilingService instance shouldn't outlast 
it?  

Other possible fixes would be to stop the ProfilingService being a global 
singleton and have it track a Map of config to instance (I think that works 
fine alongside a global singleton for the AsyncProfiler instance). Though, 
looking at the production code, I don't think there is any need for instances 
with distinct config. 

Silently ignoring the config passed into getInstance seems a bit of a smell 
to me. I'd personally be inclined to throw an exception when asking for an 
instance that has a different configuration to the current instance. However, 
I don't have a good enough mental model of how this is used to determine 
whether that is really appropriate.
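
As a concrete illustration, the setUp workaround described above would look 
roughly like this (editor's sketch; the test class name is a stand-in):
{code:java}
// Close any lingering singleton so this test gets an instance built from
// its own config (sketch of the workaround, not a committed fix).
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.util.profiler.ProfilingService;
import org.junit.jupiter.api.BeforeEach;

class ProfilingServiceIsolationSketch {
    @BeforeEach
    void setUp() throws Exception {
        ProfilingService.getInstance(new Configuration()).close();
    }
}
{code}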

> ProfilingServiceTest.testRollingDeletion intermittently fails due to improper 
> test isolation
> 
>
> Key: FLINK-35571
> URL: https://issues.apache.org/jira/browse/FLINK-35571
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
> Environment: *Git revision:*
> {code:bash}
> $ git show
> commit b8d527166e095653ae3ff5c0431bf27297efe229 (HEAD -> master)
> {code}
> *Java info:*
> {code:bash}
> $ java -version
> openjdk version "17.0.11" 2024-04-16
> OpenJDK Runtime Environment Temurin-17.0.11+9 (build 17.0.11+9)
> OpenJDK 64-Bit Server VM Temurin-17.0.11+9 (build 17.0.11+9, mixed mode)
> {code}
> {code:bash}
> $ sdk current
> Using:
> java: 17.0.11-tem
> maven: 3.8.6
> scala: 2.12.19
> {code}
> *OS info:*
> {code:bash}
> $ uname -av
> Darwin MacBook-Pro 23.5.0 Darwin Kernel Version 23.5.0: Wed May  1 20:14:38 
> PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020 arm64
> {code}
> *Hardware info:*
> {code:bash}
> $ sysctl -a | grep -e 'machdep\.cpu\.brand_string\:' -e 
> 'machdep\.cpu\.core_count\:' -e 'hw\.memsize\:'
> hw.memsize: 34359738368
> machdep.cpu.core_count: 12
> machdep.cpu.brand_string: Apple M2 Pro
> {code}
>Reporter: Grace Grimwood
>Priority: Major
> Attachments: 20240612_181148_mvn-clean-package_flink-runtime.log
>
>
> *Symptom:*
> The test *{{ProfilingServiceTest.testRollingDeletion}}* fails with the 
> following error:
> {code:java}
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 25.32 
> s <<< FAILURE! -- in 
> org.apache.flink.runtime.util.profiler.ProfilingServiceTest
> [ERROR] 
> org.apache.flink.runtime.util.profiler.ProfilingServiceTest.testRollingDeletion
>  -- Time elapsed: 9.264 s <<< FAILURE!
> org.opentest4j.AssertionFailedError: expected: <3> but was: <6>
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at 
> org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)
>   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)
>   at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:531)
>   at 
> org.apache.flink.runtime.util.profiler.ProfilingServiceTest.verifyRollingDeletionWorks(ProfilingServiceTest.java:175)
>   at 
> org.apache.flink.runtime.util.profiler.ProfilingServiceTest.testRollingDeletion(ProfilingServiceTest.java:117)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
>   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
>   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
>   at 
> 

Re: [PR] [FLINK-35157][runtime] Sources with watermark alignment get stuck once some subtasks finish [flink]

2024-06-12 Thread via GitHub


1996fanrui commented on code in PR #24757:
URL: https://github.com/apache/flink/pull/24757#discussion_r1637386871


##
flink-runtime/src/test/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorAlignmentTest.java:
##
@@ -310,6 +310,53 @@ void testWatermarkAggregatorRandomly() {
 testWatermarkAggregatorRandomly(10, 1, true, false);
 }
 
+@Test
+void testWatermarkAlignmentWhileSubtaskFinished() throws Exception {
+long maxDrift = 1000L;
+WatermarkAlignmentParams params =
+new WatermarkAlignmentParams(maxDrift, "group1", maxDrift);
+
+final Source<Integer, MockSourceSplit, Set<MockSourceSplit>> mockSource =
+createMockSource();
+
+sourceCoordinator =
+new SourceCoordinator<MockSourceSplit, Set<MockSourceSplit>>(
+OPERATOR_NAME,
+mockSource,
+getNewSourceCoordinatorContext(),
+new CoordinatorStoreImpl(),
+params,
+null) {
+@Override
+void announceCombinedWatermark() {
+super.announceCombinedWatermark();
+}
+};
+
+sourceCoordinator.start();
+
+int subtask0 = 0;
+int subtask1 = 1;
+
+setReaderTaskReady(sourceCoordinator, subtask0, 0);
+setReaderTaskReady(sourceCoordinator, subtask1, 0);
+registerReader(subtask0);
+registerReader(subtask1);
+
+reportWatermarkEvent(sourceCoordinator, subtask0, 42);
+assertLatestWatermarkAlignmentEvent(subtask0, 1042);
+
+reportWatermarkEvent(sourceCoordinator, subtask1, 44);
+assertLatestWatermarkAlignmentEvent(subtask1, 1042);
+
+// mock noMoreSplits event
+assertHasNoMoreSplits(subtask0, true);

Review Comment:
   > My intention is to add a test case to simulate the sending of the 
noMoreSplits event
   
   Actually, this test is testing the watermark result when the watermark of 
one subtask is Long.MAX_VALUE, right?
   
   I think this test can be removed.
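
   For context, the behaviour under discussion reduces to a min() over 
per-subtask watermarks, where a finished subtask must report Long.MAX_VALUE 
(editor's sketch, not the coordinator code):

```java
// If a finished subtask kept a stale watermark, min() would pin the combined
// watermark forever; reporting Long.MAX_VALUE on finish lets the remaining
// subtasks drive it.
import java.util.HashMap;
import java.util.Map;

final class CombinedWatermarkSketch {
    public static void main(String[] args) {
        Map<Integer, Long> watermarks = new HashMap<>();
        watermarks.put(0, Long.MAX_VALUE); // subtask 0 finished
        watermarks.put(1, 44L);            // subtask 1 still running

        long combined =
                watermarks.values().stream()
                        .mapToLong(Long::longValue)
                        .min()
                        .orElse(Long.MIN_VALUE);
        System.out.println(combined); // 44: the finished subtask no longer holds anything back
    }
}
```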



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34470][Connectors/Kafka] Fix indefinite blocking by adjusting stopping condition in split reader [flink-connector-kafka]

2024-06-12 Thread via GitHub


dongwoo6kim commented on PR #100:
URL: 
https://github.com/apache/flink-connector-kafka/pull/100#issuecomment-2164206983

   Thanks for confirming @morazow, 
   Please feel free to provide any additional advice before merging this fix. 
   It would also be helpful if you could elaborate on the issue you mentioned 
and consider adding relevant test code for it. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34990][cdc-connector][oracle] Oracle cdc support newly add table [flink-cdc]

2024-06-12 Thread via GitHub


gong commented on PR #3203:
URL: https://github.com/apache/flink-cdc/pull/3203#issuecomment-2164201064

   @leonardBang CI failure is not related to PR. Please help rerun CI.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-25537] [JUnit5 Migration] Module: flink-core with,Package: core [flink]

2024-06-12 Thread via GitHub


1996fanrui commented on PR #24881:
URL: https://github.com/apache/flink/pull/24881#issuecomment-2164197766

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34977) FLIP-433: State Access on DataStream API V2

2024-06-12 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-34977.
--
Fix Version/s: 1.20.0
   Resolution: Done

master (1.20) via 03b3d26d8fe4c1872969d6fb22a0c3f66ae2c552.

> FLIP-433: State Access on DataStream API V2
> ---
>
> Key: FLINK-34977
> URL: https://issues.apache.org/jira/browse/FLINK-34977
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, API / State Processor
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the umbrella ticket for FLIP-433: State Access on DataStream API V2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35586) Detected conflict when using Paimon as pipeline sink with parallelism > 1

2024-06-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35586:
---
Labels: pull-request-available  (was: )

> Detected conflict when using Paimon as pipeline sink with parallelism > 1
> -
>
> Key: FLINK-35586
> URL: https://issues.apache.org/jira/browse/FLINK-35586
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: LvYanquan
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> When submit FlinkCDC pipeline job using yaml like:
> {code:java}
> source:
>   type: mysql
>   name: MySQL Source
>   hostname: 127.0.0.1
>   port: 3306
>   username: root
>   password: 123456
>   tables: inventory.t1
> sink:
>   type: paimon
>   name: Paimon Sink
>   catalog.properties.metastore: filesystem
>   catalog.properties.warehouse: /mypath
> pipeline:
>   name: MySQL to Paimon Pipeline
>   parallelism: 2 {code}
> I met the following error message: 
> {code:java}
> Caused by: java.lang.RuntimeException: LSM conflicts detected! Give up 
> committing. Conflict files are:, bucket 0, level 5, file 
> data-6bcac56a-2df2-4c85-97f2-2db91f6d8099-0.orc, bucket 0, level 5, file 
> data-351fd27d-4a65-4354-9ce9-c153ba715569-0.orc {code}
> And this will cause the task to constantly restart.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34977][API] Introduce State Access on DataStream API V2 [flink]

2024-06-12 Thread via GitHub


reswqa closed pull request #24725: [FLINK-34977][API] Introduce State Access on 
DataStream API V2
URL: https://github.com/apache/flink/pull/24725


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-35586] shuffle Event of the same bucket to the same subtask to avoid conflict. [flink-cdc]

2024-06-12 Thread via GitHub


lvyanquan opened a new pull request, #3413:
URL: https://github.com/apache/flink-cdc/pull/3413

   (no comment)
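
   A hedged sketch of the idea in the title (not the actual patch; the bucket 
extraction below is made up): route all events of one bucket to the same 
subtask, so no two writers commit conflicting files for that bucket.

```java
// Editor's sketch of "same bucket -> same subtask" (not the actual patch):
// key the stream by bucket id so exactly one writer subtask owns each bucket.
import org.apache.flink.streaming.api.datastream.DataStream;

final class BucketShuffleSketch {
    static DataStream<String> shuffleByBucket(DataStream<String> events, int numBuckets) {
        // keyBy hashes the bucket id, so all records of a bucket land on
        // the same downstream subtask.
        return events.keyBy(event -> Math.floorMod(event.hashCode(), numBuckets));
    }
}
```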


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-26951) Add HASH supported in SQL & Table API

2024-06-12 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854587#comment-17854587
 ] 

lincoln lee commented on FLINK-26951:
-

[~kartikeypant] Since Hive has already deprecated it, I don't see the need to 
add this hash function, which doesn't specify a concrete hash algorithm, for 
now; we can instead try to complete other unimplemented functions and 
improvements. WDYT?

> Add HASH supported in SQL & Table API
> -
>
> Key: FLINK-26951
> URL: https://issues.apache.org/jira/browse/FLINK-26951
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.20.0
>
>
> Returns a hash value of the arguments.
> Syntax:
> {code:java}
> hash(expr1, ...) {code}
> Arguments:
>  * {{{}exprN{}}}: An expression of any type.
> Returns:
> An INTEGER.
> Examples:
> {code:java}
> > SELECT hash('Flink', array(123), 2);
>  -1321691492 {code}
> See more:
>  * [Spark|https://spark.apache.org/docs/latest/api/sql/index.html#hash]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35473) FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-06-12 Thread Xuannan Su (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854586#comment-17854586
 ] 

Xuannan Su commented on FLINK-35473:


 [~qingyue] Can you please include in the release notes information on what's 
deprecated, and what users should be using?

> FLIP-457: Improve Table/SQL Configuration for Flink 2.0
> ---
>
> Key: FLINK-35473
> URL: https://issues.apache.org/jira/browse/FLINK-35473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the parent task for 
> [FLIP-457|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35586) Detected conflict when using Paimon as pipeline sink with parallelism > 1

2024-06-12 Thread LvYanquan (Jira)
LvYanquan created FLINK-35586:
-

 Summary: Detected conflict when using Paimon as pipeline sink with 
parallelism > 1
 Key: FLINK-35586
 URL: https://issues.apache.org/jira/browse/FLINK-35586
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Affects Versions: cdc-3.1.0
Reporter: LvYanquan
 Fix For: cdc-3.2.0


When submit FlinkCDC pipeline job using yaml like:
{code:java}
source:
  type: mysql
  name: MySQL Source
  hostname: 127.0.0.1
  port: 3306
  username: root
  password: 123456
  tables: inventory.t1

sink:
  type: paimon
  name: Paimon Sink
  catalog.properties.metastore: filesystem
  catalog.properties.warehouse: /mypath

pipeline:
  name: MySQL to Paimon Pipeline
  parallelism: 2 {code}
I met the following error message: 
{code:java}
Caused by: java.lang.RuntimeException: LSM conflicts detected! Give up 
committing. Conflict files are:, bucket 0, level 5, file 
data-6bcac56a-2df2-4c85-97f2-2db91f6d8099-0.orc, bucket 0, level 5, file 
data-351fd27d-4a65-4354-9ce9-c153ba715569-0.orc {code}
And this will cause the task to constantly restart.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35585] Add documentation for distribution [flink]

2024-06-12 Thread via GitHub


flinkbot commented on PR #24929:
URL: https://github.com/apache/flink/pull/24929#issuecomment-2163969225

   
   ## CI report:
   
   * 401ad021195bca0d5329723dfb492760f4a147c8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35585) Add documentation for distribution

2024-06-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35585:
---
Labels: pull-request-available  (was: )

> Add documentation for distribution
> --
>
> Key: FLINK-35585
> URL: https://issues.apache.org/jira/browse/FLINK-35585
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>
> Add documentation for ALTER TABLE, CREATE TABLE, and the sink abilities.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-35585] Add documentation for distribution [flink]

2024-06-12 Thread via GitHub


jnh5y opened a new pull request, #24929:
URL: https://github.com/apache/flink/pull/24929

   ## What is the purpose of the change
   
   Add documentation to cover FLIP-376.
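
   For reference, FLIP-376 introduces the `DISTRIBUTED BY` clause; a hedged 
sketch of the syntax being documented (the connector name is a placeholder, 
and only sinks that support bucketing accept the clause):

```java
// Editor's sketch of the FLIP-376 bucketing syntax covered by these docs.
// 'my-bucketed-connector' is a placeholder for a sink that supports bucketing.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DistributionDocsSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        tEnv.executeSql(
                "CREATE TABLE Orders (order_id BIGINT, buyer STRING) "
                        + "DISTRIBUTED BY HASH(order_id) INTO 4 BUCKETS "
                        + "WITH ('connector' = 'my-bucketed-connector')");
    }
}
```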
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35585) Add documentation for distribution

2024-06-12 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-35585:
--

Assignee: Jim Hughes

> Add documentation for distribution
> --
>
> Key: FLINK-35585
> URL: https://issues.apache.org/jira/browse/FLINK-35585
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>
> Add documentation for ALTER TABLE, CREATE TABLE, and the sink abilities.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35585) Add documentation for distribution

2024-06-12 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35585:
--

 Summary: Add documentation for distribution
 Key: FLINK-35585
 URL: https://issues.apache.org/jira/browse/FLINK-35585
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes


Add documentation for ALTER TABLE, CREATE TABLE, and the sink abilities.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33759] [flink-parquet] Add support for nested array with row type [flink]

2024-06-12 Thread via GitHub


ukby1234 closed pull request #24029: [FLINK-33759] [flink-parquet] Add support 
for nested array with row type
URL: https://github.com/apache/flink/pull/24029


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


MartijnVisser commented on PR #24805:
URL: https://github.com/apache/flink/pull/24805#issuecomment-2163842389

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] [docs] reference.md: Add missing FlinkSessionJob CRD [flink-kubernetes-operator]

2024-06-12 Thread via GitHub


mattayes closed pull request #837: [hotfix] [docs] reference.md: Add missing 
FlinkSessionJob CRD
URL: https://github.com/apache/flink-kubernetes-operator/pull/837


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] [docs] reference.md: Add missing FlinkSessionJob CRD [flink-kubernetes-operator]

2024-06-12 Thread via GitHub


mattayes commented on PR #837:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/837#issuecomment-2163720507

   @gyfora Thanks for clarifying, I'll make the changes there.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][checkpointing] Normalize file-merging sub dir [flink]

2024-06-12 Thread via GitHub


flinkbot commented on PR #24928:
URL: https://github.com/apache/flink/pull/24928#issuecomment-2163701730

   
   ## CI report:
   
   * a0719f7e58a3666e9ea39ce673c47a58679c38e6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [hotfix][checkpointing] Normalize file-merging sub dir [flink]

2024-06-12 Thread via GitHub


zoltar9264 opened a new pull request, #24928:
URL: https://github.com/apache/flink/pull/24928

   ## What is the purpose of the change
   
   As the title says, normalize the file-merging sub-directories, including 
the subtask dir and the exclusive dir.
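
   A loose illustration of the intent (hypothetical naming scheme, not the 
actual change): derive the directory names from stable identifiers so every 
component computes the same normalized path.

```java
// Hypothetical naming sketch (not the actual change): build sub-directory
// names from stable identifiers so all components agree on the same path.
final class SubDirNamingSketch {
    static String subtaskDir(String jobId, int subtaskIndex, int parallelism) {
        return String.format("job_%s/sub_%d_%d", jobId, subtaskIndex, parallelism);
    }

    static String exclusiveDir(String jobId, String tmResourceId) {
        return String.format("job_%s/tm_%s", jobId, tmResourceId);
    }
}
```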
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28915) Make application mode could support remote DFS schema(e.g. S3, OSS, HDFS, etc.)

2024-06-12 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854505#comment-17854505
 ] 

Robert Metzger commented on FLINK-28915:


Awesome, thanks a lot. 

> Make application mode could support remote DFS schema(e.g. S3, OSS, HDFS, 
> etc.)
> ---
>
> Key: FLINK-28915
> URL: https://issues.apache.org/jira/browse/FLINK-28915
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, flink-contrib
>Reporter: hjw
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> As the Flink document show , local is the only supported scheme in Native k8s 
> deployment.
> Is there have a plan to support s3 filesystem? thx.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-12 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan resolved FLINK-35569.
-
Resolution: Fixed

> SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
>  failed
> --
>
> Key: FLINK-35569
> URL: https://issues.apache.org/jira/browse/FLINK-35569
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Build System / CI
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Zakelly Lan
>Priority: Major
>
> [https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]
> The parameterized test is failed when RestoreMode is "CLAIM" and 
> fileMergingAcrossBoundary is false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35570) Consider PlaceholderStreamStateHandle in checkpoint file merging

2024-06-12 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan resolved FLINK-35570.
-
Resolution: Fixed

> Consider PlaceholderStreamStateHandle in checkpoint file merging
> 
>
> Key: FLINK-35570
> URL: https://issues.apache.org/jira/browse/FLINK-35570
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>
> In checkpoint file merging, we should take {{PlaceholderStreamStateHandle}} 
> into account during lifecycle, since it can be a file merged one.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35570) Consider PlaceholderStreamStateHandle in checkpoint file merging

2024-06-12 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854477#comment-17854477
 ] 

Zakelly Lan commented on FLINK-35570:
-

master: fb5fd483f91f524c81130c0fc4125d3382a7ffdc

> Consider PlaceholderStreamStateHandle in checkpoint file merging
> 
>
> Key: FLINK-35570
> URL: https://issues.apache.org/jira/browse/FLINK-35570
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
>
> In checkpoint file merging, we should take {{PlaceholderStreamStateHandle}} 
> into account during lifecycle, since it can be a file merged one.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-12 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854480#comment-17854480
 ] 

Zakelly Lan commented on FLINK-35569:
-

Fixed by FLINK-35570.
Newest run: https://github.com/apache/flink/actions/runs/9483382674

> SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
>  failed
> --
>
> Key: FLINK-35569
> URL: https://issues.apache.org/jira/browse/FLINK-35569
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Build System / CI
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Zakelly Lan
>Priority: Major
>
> [https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]
> The parameterized test is failed when RestoreMode is "CLAIM" and 
> fileMergingAcrossBoundary is false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35371][security] Add configuration for SSL keystore and truststore type [flink]

2024-06-12 Thread via GitHub


ammar-master commented on PR #24919:
URL: https://github.com/apache/flink/pull/24919#issuecomment-2163411240

   @gaborgsomogyi Can this go in before the feature freeze on June 15?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34977][API] Introduce State Access on DataStream API V2 [flink]

2024-06-12 Thread via GitHub


jeyhunkarimov commented on PR #24725:
URL: https://github.com/apache/flink/pull/24725#issuecomment-2163282560

   Thanks a lot @reswqa 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


snuyanzin commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636593978


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/PrintSinkFunction.java:
##
@@ -33,7 +33,10 @@
  * no {@code sinkIdentifier} provided, parallelism == 1
  *
  * @param  Input record type
+ * @deprecated This interface will be removed in future versions. Use the new 
{@link PrintSink}
+ * interface instead.

Review Comment:
   ~~I've double-checked: `PrintSink` implements the `Sink` interface, which is 
going to be deprecated in this PR~~
   ~~So it seems we can't propose to use it.~~
   
   ~~and maybe mark `PrintSink` as deprecated as well~~
   
   ~~maybe just reference `org.apache.flink.api.connector.sink2.Sink`~~
   
   UPD: my bad, it already uses the v2 version



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


MartijnVisser commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636642416


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/PrintSinkFunction.java:
##
@@ -33,7 +33,10 @@
  * no {@code sinkIdentifier} provided, parallelism == 1
  *
  * @param  Input record type
+ * @deprecated This interface will be removed in future versions. Use the new 
{@link PrintSink}
+ * interface instead.

Review Comment:
   Verified that `PrintSink` actually uses `Sink2` and not `Sink`. 
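   
   A minimal migration sketch for reference (assuming the Sink V2 based 
`PrintSink` in `org.apache.flink.streaming.api.functions.sink` and 
`DataStream#sinkTo`, as shipped in recent Flink releases):
   
   ```java
   import org.apache.flink.streaming.api.datastream.DataStreamSource;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
   import org.apache.flink.streaming.api.functions.sink.PrintSink;
   
   public class PrintSinkMigration {
       public static void main(String[] args) throws Exception {
           StreamExecutionEnvironment env =
                   StreamExecutionEnvironment.getExecutionEnvironment();
           DataStreamSource<String> stream = env.fromElements("a", "b", "c");
           // Deprecated SinkFunction-based call: stream.addSink(new PrintSinkFunction<>());
           // Sink V2 replacement:
           stream.sinkTo(new PrintSink<>());
           env.execute("print-sink-migration");
       }
   }
   ```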



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-06-12 Thread via GitHub


caicancai commented on PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#issuecomment-2163186911

   @1996fanrui I'm very sorry for the late revision. I've been a bit busy 
recently. I'll check it again tomorrow. You don't need to review it yet.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-06-12 Thread via GitHub


caicancai commented on code in PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1636594307


##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -125,4 +124,4 @@ arr1: [{name: a, p2: v2}, {name: c, p2: v2}]
 merged: [{name: a, p1: v1, p2: v2}, {name: b, p1: v1}, {name: c, p2: v2}]
 ```
 
-Merging by name can we be very convenient when merging container specs or when 
the base and override templates are not defined together.
+当合并容器规范或者当基础模板和覆盖模板没有一起定义时,按名称合并可以非常方便。

Review Comment:
   yes



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


snuyanzin commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636593978


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/PrintSinkFunction.java:
##
@@ -33,7 +33,10 @@
  * no {@code sinkIdentifier} provided, parallelism == 1
  *
  * @param  Input record type
+ * @deprecated This interface will be removed in future versions. Use the new 
{@link PrintSink}
+ * interface instead.

Review Comment:
   I've double-checked: `PrintSink` implements the `Sink` interface, which is 
going to be deprecated in this PR.
   So it seems we can't propose to use it.
   
   and maybe mark `PrintSink` as deprecated as well



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


snuyanzin commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636593978


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/PrintSinkFunction.java:
##
@@ -33,7 +33,10 @@
  * no {@code sinkIdentifier} provided, parallelism == 1
  *
  * @param  Input record type
+ * @deprecated This interface will be removed in future versions. Use the new 
{@link PrintSink}
+ * interface instead.

Review Comment:
   I've double-checked: `PrintSink` implements the `Sink` interface, which is 
going to be deprecated in this PR.
   So it seems we can't propose to use it.
   
   and maybe mark `PrintSink` as deprecated as well
   
   maybe just reference `org.apache.flink.api.connector.sink2.Sink`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-06-12 Thread via GitHub


caicancai commented on code in PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1636593737


##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -113,10 +112,10 @@ arr1: [{name: a, p2: v2}, {name: c, p2: v2}]
 merged: [{name: a, p1: v1, p2: v2}, {name: c, p1: v1, p2: v2}]
 ```
 
-The operator supports an alternative array merging mechanism that can be 
enabled by the `kubernetes.operator.pod-template.merge-arrays-by-name` flag.
-When true, instead of the default positional merging, object array elements 
that have a `name` property defined will be merged by their name and the 
resulting array will be a union of the two input arrays.
+Operator 支持另一种数组合并机制,可以通过 
`kubernetes.operator.pod-template.merge-arrays-by-name` 标志启用。
+当为 true 时,将按名称合并具有 `name` 属性定义的对象数组元素,并且生成的数组将是两个输入数组的并集。

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-06-12 Thread via GitHub


caicancai commented on code in PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1636593004


##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -93,16 +90,18 @@ spec:
 ```
 
 {{< hint info >}}
-When using the operator with Flink native Kubernetes integration, please refer 
to [pod template field precedence](
-https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink).
+当使用具有 Flink 原生 Kubernetes 集成的 operator 时,请参考 [pod template 字段优先级](
+https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink)。
 {{< /hint >}}
 
+
 ## Array Merging Behaviour
 
-When layering pod templates (defining both a top level and jobmanager specific 
podtemplate for example) the corresponding yamls are merged together.
+
+
+当分层 pod templates(例如同时定义顶级和任务管理器特定的 pod 模板)时,相应的 yaml 会合并在一起。
 
-The default behaviour of the pod template mechanism is to merge array arrays 
by merging the objects in the respective array positions.
-This requires that containers in the podTemplates are defined in the same 
order otherwise the results may be undefined.
+Pod 模板机制的默认行为是通过合并相应数组位置的对象合并 json 类型的数组。

Review Comment:
   Because I'm sure array arrays translates to
   



##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -93,16 +90,18 @@ spec:
 ```
 
 {{< hint info >}}
-When using the operator with Flink native Kubernetes integration, please refer 
to [pod template field precedence](
-https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink).
+当使用具有 Flink 原生 Kubernetes 集成的 operator 时,请参考 [pod template 字段优先级](
+https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink)。
 {{< /hint >}}
 
+
 ## Array Merging Behaviour
 
-When layering pod templates (defining both a top level and jobmanager specific 
podtemplate for example) the corresponding yamls are merged together.
+
+
+当分层 pod templates(例如同时定义顶级和任务管理器特定的 pod 模板)时,相应的 yaml 会合并在一起。
 
-The default behaviour of the pod template mechanism is to merge array arrays 
by merging the objects in the respective array positions.
-This requires that containers in the podTemplates are defined in the same 
order otherwise the results may be undefined.

Review Comment:
   fix



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-06-12 Thread via GitHub


caicancai commented on code in PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1636591874


##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -26,21 +26,18 @@ under the License.
 
 # Pod template
 
-The operator CRD is designed to have a minimal set of direct, short-hand CRD 
settings to express the most
-basic attributes of a deployment. For all other settings the CRD provides the 
`flinkConfiguration` and
-`podTemplate` fields.
+
 
-Pod templates permit customization of the Flink job and task manager pods, for 
example to specify
-volume mounts, ephemeral storage, sidecar containers etc.
+Operator CRD 被设计为一组直接、简短的 CRD 设置,以表达部署的最基本属性。对于所有其他设置,CRD 提供了 
`flinkConfiguration` 和 `podTemplate` 字段。
 
-Pod templates can be layered, as shown in the example below.
-A common pod template may hold the settings that apply to both job and task 
manager,
-like `volumeMounts`. Another template under job or task manager can define 
additional settings that supplement or override those
-in the common template, such as a task manager sidecar.
+Pod templates 保证了 Flink 作业和任务管理器 pod 的自定义,例如指定卷挂载、临时存储、sidecar 容器等。
 
-The operator is going to merge the common and specific templates for job and 
task manager respectively.
+Pod template 可以被分层,如下面的示例所示。
+一个通用的 pod template 可以保存适用于作业和任务管理器的设置,比如 
`volumeMounts`。作业或任务管理器下的另一个模板可以定义补充或覆盖通用模板中的其他设置,比如一个任务管理器 sidecar。
 
-Here the full example:
+Operator 将分别合并作业和任务管理器的通用和特定模板。

Review Comment:
   Thanks for the reminder, I'll pay attention next time



##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -93,16 +90,18 @@ spec:
 ```
 
 {{< hint info >}}
-When using the operator with Flink native Kubernetes integration, please refer 
to [pod template field precedence](
-https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink).
+当使用具有 Flink 原生 Kubernetes 集成的 operator 时,请参考 [pod template 字段优先级](

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-06-12 Thread via GitHub


caicancai commented on code in PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1636591482


##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -26,21 +26,18 @@ under the License.
 
 # Pod template
 
-The operator CRD is designed to have a minimal set of direct, short-hand CRD 
settings to express the most
-basic attributes of a deployment. For all other settings the CRD provides the 
`flinkConfiguration` and
-`podTemplate` fields.
+
 
-Pod templates permit customization of the Flink job and task manager pods, for 
example to specify
-volume mounts, ephemeral storage, sidecar containers etc.
+Operator CRD 被设计为一组直接、简短的 CRD 设置,以表达部署的最基本属性。对于所有其他设置,CRD 提供了 
`flinkConfiguration` 和 `podTemplate` 字段。

Review Comment:
   Thanks for the reminder, I'll pay attention next time



##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -26,21 +26,18 @@ under the License.
 
 # Pod template
 
-The operator CRD is designed to have a minimal set of direct, short-hand CRD 
settings to express the most
-basic attributes of a deployment. For all other settings the CRD provides the 
`flinkConfiguration` and
-`podTemplate` fields.
+
 
-Pod templates permit customization of the Flink job and task manager pods, for 
example to specify
-volume mounts, ephemeral storage, sidecar containers etc.
+Operator CRD 被设计为一组直接、简短的 CRD 设置,以表达部署的最基本属性。对于所有其他设置,CRD 提供了 
`flinkConfiguration` 和 `podTemplate` 字段。
 
-Pod templates can be layered, as shown in the example below.
-A common pod template may hold the settings that apply to both job and task 
manager,
-like `volumeMounts`. Another template under job or task manager can define 
additional settings that supplement or override those
-in the common template, such as a task manager sidecar.
+Pod templates 保证了 Flink 作业和任务管理器 pod 的自定义,例如指定卷挂载、临时存储、sidecar 容器等。

Review Comment:
   Thanks for the reminder, I'll pay attention next time



##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -26,21 +26,18 @@ under the License.
 
 # Pod template
 
-The operator CRD is designed to have a minimal set of direct, short-hand CRD 
settings to express the most
-basic attributes of a deployment. For all other settings the CRD provides the 
`flinkConfiguration` and
-`podTemplate` fields.
+
 
-Pod templates permit customization of the Flink job and task manager pods, for 
example to specify
-volume mounts, ephemeral storage, sidecar containers etc.
+Operator CRD 被设计为一组直接、简短的 CRD 设置,以表达部署的最基本属性。对于所有其他设置,CRD 提供了 
`flinkConfiguration` 和 `podTemplate` 字段。
 
-Pod templates can be layered, as shown in the example below.
-A common pod template may hold the settings that apply to both job and task 
manager,
-like `volumeMounts`. Another template under job or task manager can define 
additional settings that supplement or override those
-in the common template, such as a task manager sidecar.
+Pod templates 保证了 Flink 作业和任务管理器 pod 的自定义,例如指定卷挂载、临时存储、sidecar 容器等。
 
-The operator is going to merge the common and specific templates for job and 
task manager respectively.
+Pod template 可以被分层,如下面的示例所示。
+一个通用的 pod template 可以保存适用于作业和任务管理器的设置,比如 
`volumeMounts`。作业或任务管理器下的另一个模板可以定义补充或覆盖通用模板中的其他设置,比如一个任务管理器 sidecar。

Review Comment:
   Thanks for the reminder, I'll pay attention next time



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


MartijnVisser commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636585255


##
pom.xml:
##
@@ -2371,6 +2371,11 @@ under the License.

org.apache.flink.util.function.SerializableFunction

org.apache.flink.util.function.SupplierWithException

org.apache.flink.util.function.ThrowingConsumer
+   

Review Comment:
   I've updated the PR; I'll squash the commits when merging, but added them as 
separate commits to this PR



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34172] Add support for altering a distribution via ALTER TABLE [flink]

2024-06-12 Thread via GitHub


jnh5y commented on code in PR #24886:
URL: https://github.com/apache/flink/pull/24886#discussion_r1636579991


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/operations/SqlDdlToOperationConverterTest.java:
##
@@ -1361,16 +1374,36 @@ public void testAlterTableDropConstraint() throws 
Exception {
 .getUnresolvedSchema()
 .getPrimaryKey())
 .isNotPresent();
+}
+
+@Test
+public void testAlterTableDropDistribution() throws Exception {
+prepareNonManagedTableWithDistribution("tb1");
+String expectedSummaryString = "ALTER TABLE cat1.db1.tb1\n  DROP 
DISTRIBUTION";
 
-operation = parse("alter table tb1 drop primary key");
+Operation operation = parse("alter table tb1 drop distribution");
 assertThat(operation).isInstanceOf(AlterTableChangeOperation.class);
 
assertThat(operation.asSummaryString()).isEqualTo(expectedSummaryString);
-assertThat(
-((AlterTableChangeOperation) operation)
-.getNewTable()
-.getUnresolvedSchema()
-.getPrimaryKey())
+assertThat(((AlterTableChangeOperation) 
operation).getNewTable().getDistribution())
 .isNotPresent();
+
+prepareNonManagedTableWithDistribution("tb3");
+// rename column used as distribution key

Review Comment:
   Removing this case of dropping the distribution.  The other two cases are 
for a drop and modify, so I think both of those make sense.
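   
   For reference, the statements these tests exercise can be issued through the 
Table API roughly as follows. This is only a sketch: the schema and the 
`datagen` connector are hypothetical stand-ins, and actually writing to a 
distributed table requires a connector that supports bucketing.
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class AlterDistributionSketch {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           // Hypothetical table with a hash distribution, mirroring the test setup:
           tEnv.executeSql(
                   "CREATE TABLE tb1 (a INT, b STRING) "
                           + "DISTRIBUTED BY HASH(a) INTO 12 BUCKETS "
                           + "WITH ('connector' = 'datagen')");
           // The ALTER statements discussed in this PR:
           tEnv.executeSql("ALTER TABLE tb1 DROP DISTRIBUTION");
           tEnv.executeSql("ALTER TABLE tb1 ADD DISTRIBUTION BY HASH(a) INTO 12 BUCKETS");
       }
   }
   ```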



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


snuyanzin commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636579070


##
pom.xml:
##
@@ -2371,6 +2371,11 @@ under the License.

org.apache.flink.util.function.SerializableFunction

org.apache.flink.util.function.SupplierWithException

org.apache.flink.util.function.ThrowingConsumer
+   

Review Comment:
   all others are good to me



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


MartijnVisser commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636572746


##
pom.xml:
##
@@ -2371,6 +2371,11 @@ under the License.

org.apache.flink.util.function.SerializableFunction

org.apache.flink.util.function.SupplierWithException

org.apache.flink.util.function.ThrowingConsumer
+   

Review Comment:
   I think it makes sense to also mark those as deprecated



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


snuyanzin commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636561747


##
pom.xml:
##
@@ -2371,6 +2371,11 @@ under the License.

org.apache.flink.util.function.SerializableFunction

org.apache.flink.util.function.SupplierWithException

org.apache.flink.util.function.ThrowingConsumer
+   

Review Comment:
   A question about `DiscardingSink`, `PrintSinkFunction`, `RichSinkFunction`
   should they also be marked as deprecated?
   
   I'm asking since `SinkFunction` is marked as deprecated and those 3 classes 
implement `SinkFunction`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


MartijnVisser commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636516611


##
pom.xml:
##
@@ -2371,6 +2371,11 @@ under the License.

org.apache.flink.util.function.SerializableFunction

org.apache.flink.util.function.SupplierWithException

org.apache.flink.util.function.ThrowingConsumer
+   

Review Comment:
     Yep it is. I'll fix it; is there anything else? 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Update flink-operations-playground.md for a typo [flink]

2024-06-12 Thread via GitHub


flinkbot commented on PR #24927:
URL: https://github.com/apache/flink/pull/24927#issuecomment-2163075264

   
   ## CI report:
   
   * 658b68a6bb0f6227fa16f7e990ee67db64e471e9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28915) Make application mode could support remote DFS schema(e.g. S3, OSS, HDFS, etc.)

2024-06-12 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854440#comment-17854440
 ] 

Ferenc Csaky commented on FLINK-28915:
--

I did not test it with the operator, but my guess would be that it should work.
I do not know exactly how the {{spec.job.jarURI}} value gets passed and what
code path it will flow through; maybe some further adjustments would be
required, or I overlooked some possibilities. I can give it a thorough check by
EOW.

> Make application mode could support remote DFS schema(e.g. S3, OSS, HDFS, 
> etc.)
> ---
>
> Key: FLINK-28915
> URL: https://issues.apache.org/jira/browse/FLINK-28915
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, flink-contrib
>Reporter: hjw
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> As the Flink documentation shows, local is the only supported scheme in native
> k8s deployment.
> Is there a plan to support the s3 filesystem? Thanks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34172] Add support for altering a distribution via ALTER TABLE [flink]

2024-06-12 Thread via GitHub


twalthr commented on code in PR #24886:
URL: https://github.com/apache/flink/pull/24886#discussion_r1636474722


##
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlDistribution.java:
##
@@ -70,17 +70,28 @@ public List getOperandList() {
 
 @Override
 public void unparse(SqlWriter writer, int leftPrec, int rightPrec) {
-writer.newlineAndIndent();
+unparse(writer, leftPrec, rightPrec, "DISTRIBUTED", true);
+}
+
+public void unparseAlter(SqlWriter writer, int leftPrec, int rightPrec) {
+unparse(writer, leftPrec, rightPrec, "DISTRIBUTION", false);
+}
+
+public void unparse(

Review Comment:
   can be private



##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/operations/SqlDdlToOperationConverterTest.java:
##
@@ -2208,6 +2241,59 @@ public void testAlterTableModifyPk() throws Exception {
 .build());
 }
 
+@Test
+public void testAlterTableAddDistribution() throws Exception {
+prepareNonManagedTable("tb1", false);
+
+Operation operation = parse("alter table tb1 add distribution by 
hash(a) into 12 buckets");
+ObjectIdentifier tableIdentifier = ObjectIdentifier.of("cat1", "db1", 
"tb1");
+assertAlterTableDistribution(
+operation,
+tableIdentifier,
+TableDistribution.ofHash(Collections.singletonList("a"), 12),
+"ALTER TABLE cat1.db1.tb1\n" + "  ADD DISTRIBUTED BY HASH(`a`) 
INTO 12 BUCKETS\n");
+}
+
+@Test
+public void testFailedToAlterTableAddDistribution() throws Exception {
+prepareNonManagedTableWithDistribution("tb1");
+
+// modify watermark on a table without watermark

Review Comment:
   incorrect comment



##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/operations/SqlDdlToOperationConverterTest.java:
##
@@ -2208,6 +2241,59 @@ public void testAlterTableModifyPk() throws Exception {
 .build());
 }
 
+@Test
+public void testAlterTableAddDistribution() throws Exception {
+prepareNonManagedTable("tb1", false);
+
+Operation operation = parse("alter table tb1 add distribution by 
hash(a) into 12 buckets");
+ObjectIdentifier tableIdentifier = ObjectIdentifier.of("cat1", "db1", 
"tb1");
+assertAlterTableDistribution(
+operation,
+tableIdentifier,
+TableDistribution.ofHash(Collections.singletonList("a"), 12),
+"ALTER TABLE cat1.db1.tb1\n" + "  ADD DISTRIBUTED BY HASH(`a`) 
INTO 12 BUCKETS\n");
+}
+
+@Test
+public void testFailedToAlterTableAddDistribution() throws Exception {
+prepareNonManagedTableWithDistribution("tb1");
+
+// modify watermark on a table without watermark
+assertThatThrownBy(
+() -> parse("alter table tb1 add distribution by 
hash(a) into 12 buckets"))
+.isInstanceOf(ValidationException.class)
+.hasMessageContaining("You can modify it or drop it before 
adding a new one.");
+}
+
+@Test
+public void testFailedToAlterTableModifyDistribution() throws Exception {
+prepareNonManagedTable("tb2", false);
+
+// modify watermark on a table without watermark

Review Comment:
   incorrect comment



##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/operations/SqlDdlToOperationConverterTest.java:
##
@@ -1361,16 +1374,36 @@ public void testAlterTableDropConstraint() throws 
Exception {
 .getUnresolvedSchema()
 .getPrimaryKey())
 .isNotPresent();
+}
+
+@Test
+public void testAlterTableDropDistribution() throws Exception {
+prepareNonManagedTableWithDistribution("tb1");
+String expectedSummaryString = "ALTER TABLE cat1.db1.tb1\n  DROP 
DISTRIBUTION";
 
-operation = parse("alter table tb1 drop primary key");
+Operation operation = parse("alter table tb1 drop distribution");
 assertThat(operation).isInstanceOf(AlterTableChangeOperation.class);
 
assertThat(operation.asSummaryString()).isEqualTo(expectedSummaryString);
-assertThat(
-((AlterTableChangeOperation) operation)
-.getNewTable()
-.getUnresolvedSchema()
-.getPrimaryKey())
+assertThat(((AlterTableChangeOperation) 
operation).getNewTable().getDistribution())
 .isNotPresent();
+
+prepareNonManagedTableWithDistribution("tb3");
+// rename column used as distribution key

Review Comment:
   comment seems to be wrong, also isn't this tested above already, at least 
`is used as a distribution key` occurs two times in this 

[jira] [Closed] (FLINK-33338) Bump FRocksDB version

2024-06-12 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan closed FLINK-8.
-
Resolution: Duplicate

> Bump FRocksDB version
> -
>
> Key: FLINK-8
> URL: https://issues.apache.org/jira/browse/FLINK-8
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Piotr Nowojski
>Assignee: Roman Khachatryan
>Priority: Major
>
> We need to bump RocksDB in order to be able to use new IngestDB and ClipDB 
> commands.
> If some of the required changes haven't been merged to Facebook/RocksDB, we 
> should cherry-pick and include them in our FRocksDB fork.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33338) Bump FRocksDB version

2024-06-12 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854439#comment-17854439
 ] 

Roman Khachatryan commented on FLINK-8:
---

Thanks [~mayuehappy], makes sense.

I'll close this ticket then.

(I'm assuming the one you created covers both Flink and Frocksdb updates).

> Bump FRocksDB version
> -
>
> Key: FLINK-8
> URL: https://issues.apache.org/jira/browse/FLINK-8
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Piotr Nowojski
>Assignee: Roman Khachatryan
>Priority: Major
>
> We need to bump RocksDB in order to be able to use new IngestDB and ClipDB 
> commands.
> If some of the required changes haven't been merged to Facebook/RocksDB, we 
> should cherry-pick and include them in our FRocksDB fork.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35576) FRocksdb cherry pick IngestDB related commits

2024-06-12 Thread Yue Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yue Ma updated FLINK-35576:
---
Description: 
We support the API related to ingest DB in FRocksDB-8.10.0, but many of the
fixes related to ingest DB were only integrated in later RocksDB versions, so
we need to cherry-pick those fix commits into FRocksDB.
They mainly include:
|*RocksDB Main Branch*|*Commit ID in FrocksDB-8.10.0*|*Plan*|
|https://github.com/facebook/rocksdb/pull/11646|44f0ff31c21164685a6cd25a2beb944767c39e46|
 |
|[https://github.com/facebook/rocksdb/pull/11868]|8e1adab5cecad129131a4eceabe645b9442acb9c|
 |
|https://github.com/facebook/rocksdb/pull/11811|3c27f56d0b7e359defbc25bf90061214c889f40b|
 |
|https://github.com/facebook/rocksdb/pull/11381|4d72f48e57cb0a95b67ff82c6e971f826750334e|
 |
|https://github.com/facebook/rocksdb/pull/11379|8d8eb0e77e13a3902d23fbda742dc47aa7bc418f|
 |
|https://github.com/facebook/rocksdb/pull/11378|fa878a01074fe039135e37720f669391d1663525|
 |
|https://github.com/facebook/rocksdb/pull/12219|183d80d7dc4ce339ab1b6796661d5879b7a40d6a|
 |
|https://github.com/facebook/rocksdb/pull/12328|ef430fc72407950f94ca2a4fbb2b15de7ae8ff4f|
 |
|https://github.com/facebook/rocksdb/pull/12602| |Fix in 
https://issues.apache.org/jira/browse/FLINK-35576|

  was:
We support the API related to ingest DB in FRocksDB-8.10.0, but many of the
fixes related to ingest DB were only integrated in later RocksDB versions, so
we need to cherry-pick those fix commits into FRocksDB.
They mainly include:
[https://github.com/facebook/rocksdb/pull/11646]
[https://github.com/facebook/rocksdb/pull/11868]
[https://github.com/facebook/rocksdb/pull/11811]
[https://github.com/facebook/rocksdb/pull/11381]
[https://github.com/facebook/rocksdb/pull/11379]
[https://github.com/facebook/rocksdb/pull/11378]


> FRocksdb cherry pick IngestDB related commits
> -
>
> Key: FLINK-35576
> URL: https://issues.apache.org/jira/browse/FLINK-35576
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 2.0.0
>Reporter: Yue Ma
>Priority: Major
> Fix For: 2.0.0
>
>
> We support the API related to ingest DB in FRocksDB-8.10.0, but many of the
> fixes related to ingest DB were only integrated in later RocksDB versions, so
> we need to cherry-pick those fix commits into FRocksDB.
> They mainly include:
> |*RocksDB Main Branch*|*Commit ID in FrocksDB-8.10.0*|*Plan*|
> |https://github.com/facebook/rocksdb/pull/11646|44f0ff31c21164685a6cd25a2beb944767c39e46|
>  |
> |[https://github.com/facebook/rocksdb/pull/11868]|8e1adab5cecad129131a4eceabe645b9442acb9c|
>  |
> |https://github.com/facebook/rocksdb/pull/11811|3c27f56d0b7e359defbc25bf90061214c889f40b|
>  |
> |https://github.com/facebook/rocksdb/pull/11381|4d72f48e57cb0a95b67ff82c6e971f826750334e|
>  |
> |https://github.com/facebook/rocksdb/pull/11379|8d8eb0e77e13a3902d23fbda742dc47aa7bc418f|
>  |
> |https://github.com/facebook/rocksdb/pull/11378|fa878a01074fe039135e37720f669391d1663525|
>  |
> |https://github.com/facebook/rocksdb/pull/12219|183d80d7dc4ce339ab1b6796661d5879b7a40d6a|
>  |
> |https://github.com/facebook/rocksdb/pull/12328|ef430fc72407950f94ca2a4fbb2b15de7ae8ff4f|
>  |
> |https://github.com/facebook/rocksdb/pull/12602| |Fix in 
> https://issues.apache.org/jira/browse/FLINK-35576|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35574) Setup base branch for FrocksDB-8.10

2024-06-12 Thread Yue Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yue Ma updated FLINK-35574:
---
Description: 
As the first part of FLINK-35573, we need to prepare a base branch for
FRocksDB-8.10.0 first. Mainly, it needs to be checked out from version 8.10.0
of the RocksDB community, and then the commits used by Flink from
FRocksDB-6.20.3 need to be cherry-picked onto 8.10.0.


|*JIRA*|*FrocksDB-6.20.3*|*Commit ID in FrocksDB-8.10.0*|*Plan*|
|[[FLINK-10471] Add Apache Flink specific compaction filter to evict expired 
state which has 
time-to-live|https://github.com/ververica/frocksdb/commit/3da8249d50c8a3a6ea229f43890d37e098372786]|3da8249d50c8a3a6ea229f43890d37e098372786|d606c9450bef7d2a22c794f406d7940d9d2f29a4|Already
 in *FrocksDB-8.10.0*|
|+[[FLINK-19710] Revert implementation of PerfContext back to __thread to avoid 
performance 
regression|https://github.com/ververica/frocksdb/commit/d6f50f33064f1d24480dfb3c586a7bd7a7dbac01]+|d6f50f33064f1d24480dfb3c586a7bd7a7dbac01|
 |Fix in FLINK-35575|
|[FRocksDB release guide and helping 
scripts|https://github.com/ververica/frocksdb/commit/2673de8e5460af8d23c0c7e1fb0c3258ea283419]|2673de8e5460af8d23c0c7e1fb0c3258ea283419|b58ba05a380d9bf0c223bc707f14897ce392ce1b|Already
 in *FrocksDB-8.10.0*|
|+[Add content related to ARM building in the FROCKSDB-RELEASE 
documentation|https://github.com/ververica/frocksdb/commit/ec27ca01db5ff579dd7db1f70cf3a4677b63d589]+|ec27ca01db5ff579dd7db1f70cf3a4677b63d589|6cae002662a45131a0cd90dd84f5d3d3cb958713|Already
 in *FrocksDB-8.10.0*|
|[[FLINK-23756] Update FrocksDB release document with more 
info|https://github.com/ververica/frocksdb/commit/f75e983045f4b64958dc0e93e8b94a7cfd7663be]|f75e983045f4b64958dc0e93e8b94a7cfd7663be|bac6aeb6e012e19d9d5e3a5ee22b84c1e4a1559c|Already
 in *FrocksDB-8.10.0*|
|[Add support for Apple Silicon to RocksJava 
(#9254)|https://github.com/ververica/frocksdb/commit/dac2c60bc31b596f445d769929abed292878cac1]|dac2c60bc31b596f445d769929abed292878cac1|#9254|Already
 in *FrocksDB-8.10.0*|
|[Fix RocksJava releases for macOS 
(#9662)|https://github.com/ververica/frocksdb/commit/22637e11968a627a06a3ac8aa78126e3ae6d1368]|22637e11968a627a06a3ac8aa78126e3ae6d1368|#9662|Already
 in *FrocksDB-8.10.0*|
|+[Fix clang13 build error 
(#9374)|https://github.com/ververica/frocksdb/commit/a20fb9fa96af7b18015754cf44463e22fc123222]+|a20fb9fa96af7b18015754cf44463e22fc123222|#9374|Already
 in *FrocksDB-8.10.0*|
|+[[hotfix] Resolve brken make 
format|https://github.com/ververica/frocksdb/commit/cf0acdc08fb1b8397ef29f3b7dc7e0400107555e]+|7a87e0bf4d59cc48f40ce69cf7b82237c5e8170c|
 |Already in *FrocksDB-8.10.0*|
|+[Update circleci xcode version 
(#9405)|https://github.com/ververica/frocksdb/commit/f24393bdc8d44b79a9be7a58044e5fd01cf50df7]+|cf0acdc08fb1b8397ef29f3b7dc7e0400107555e|#9405|Already
 in *FrocksDB-8.10.0*|
|+[Upgrade to Ubuntu 20.04 in our CircleCI 
config|https://github.com/ververica/frocksdb/commit/1fecfda040745fc508a0ea0bcbb98c970f89ee3e]+|1fecfda040745fc508a0ea0bcbb98c970f89ee3e|
 |Fix in FLINK-35577|
|[Disable useless broken tests due to ci-image 
upgraded|https://github.com/ververica/frocksdb/commit/9fef987e988c53a33b7807b85a56305bd9dede81]|9fef987e988c53a33b7807b85a56305bd9dede81|
 |Fix in FLINK-35577|
|[[hotfix] Use zlib's fossils page to replace 
web.archive|https://github.com/ververica/frocksdb/commit/cbc35db93f312f54b49804177ca11dea44b4d98e]|cbc35db93f312f54b49804177ca11dea44b4d98e|8fff7bb9947f9036021f99e3463c9657e80b71ae|Already
 in *FrocksDB-8.10.0*|
|+[[hotfix] Change the resource request when running 
CI|https://github.com/ververica/frocksdb/commit/2ec1019fd0433cb8ea5365b58faa2262ea0014e9]+|2ec1019fd0433cb8ea5365b58faa2262ea0014e9|174639cf1e6080a8f8f37aec132b3a500428f913|Already
 in *FrocksDB-8.10.0*|
|{+}[[FLINK-30321] Upgrade ZLIB of FRocksDB to 1.2.13 
(|https://github.com/ververica/frocksdb/commit/3eac409606fcd9ce44a4bf7686db29c06c205039]{+}[#56|https://github.com/ververica/frocksdb/pull/56]
 
[)|https://github.com/ververica/frocksdb/commit/3eac409606fcd9ce44a4bf7686db29c06c205039]|3eac409606fcd9ce44a4bf7686db29c06c205039|
 |Fix in FLINK-35574|
|[fix(CompactionFilter): avoid expensive ToString call when not in 
Debug`|https://github.com/ververica/frocksdb/commit/698c9ca2c419c72145a2e6f5282a7860225b27a0]|698c9ca2c419c72145a2e6f5282a7860225b27a0|927b17e10d2112270ac30c4566238950baba4b7b|Already
 in *FrocksDB-8.10.0*|
|[[FLINK-30457] Add periodic_compaction_seconds option to 
RocksJava|https://github.com/ververica/frocksdb/commit/ebed4b1326ca4c5c684b46813bdcb1164a669da1]|ebed4b1326ca4c5c684b46813bdcb1164a669da1|#8579|Already
 in *FrocksDB-8.10.0*|
|[[hotfix] Add docs of how to upload ppc64le artifacts to 
s3|https://github.com/ververica/frocksdb/commit/de2ffe6ef0a11f856b89fb69a34bcdb4782130eb]|de2ffe6ef0a11f856b89fb69a34bcdb4782130eb|174639cf1e6080a8f8f37aec132b3a500428f913|Already
 in *FrocksDB-8.10.0*|
|[[FLINK-33811] Fix the 

Re: [PR] [FLINK-35378] [FLIP-453] Promote Unified Sink API V2 to Public and Deprecate SinkFunction [flink]

2024-06-12 Thread via GitHub


snuyanzin commented on code in PR #24805:
URL: https://github.com/apache/flink/pull/24805#discussion_r1636475673


##
pom.xml:
##
@@ -2371,6 +2371,11 @@ under the License.

org.apache.flink.util.function.SerializableFunction

org.apache.flink.util.function.SupplierWithException

org.apache.flink.util.function.ThrowingConsumer
+   

Review Comment:
   ```suggestion

   ```
   i think one two is enough 🤔 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Update flink-operations-playground.md for a typo [flink]

2024-06-12 Thread via GitHub


rohimsh opened a new pull request, #24927:
URL: https://github.com/apache/flink/pull/24927

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow [the 
conventions for tests defined in our code quality 
guide](https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing).
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35570] Consider PlaceholderStreamStateHandle in checkpoint file merging [flink]

2024-06-12 Thread via GitHub


Zakelly closed pull request #24924: [FLINK-35570] Consider 
PlaceholderStreamStateHandle in checkpoint file merging
URL: https://github.com/apache/flink/pull/24924


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28915) Make application mode could support remote DFS schema(e.g. S3, OSS, HDFS, etc.)

2024-06-12 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854429#comment-17854429
 ] 

Robert Metzger commented on FLINK-28915:


Quick question on this feature: Is this supposed to work from the
FlinkDeployment {{spec.job.jarURI}} field as well?

For me it doesn't seem to be working:
{code}
org.apache.flink.container.entrypoint.StandaloneApplicationClusterEntryPoint.main(StandaloneApplicationClusterEntryPoint.java:89)
 [flink-dist-1.19-x.jar:1.19-x]
Caused by: java.net.MalformedURLException: unknown protocol: s3
at java.net.URL.(URL.java:652) ~[?:?]
at java.net.URL.(URL.java:541) ~[?:?]
at java.net.URL.(URL.java:488) ~[?:?]
at 
org.apache.flink.configuration.ConfigUtils.decodeListFromConfig(ConfigUtils.java:133)
 ~[flink-dist-1.19-x.jar:1.19-x]
at 
org.apache.flink.client.program.DefaultPackagedProgramRetriever.getClasspathsFromConfiguration(DefaultPackagedProgramRetriever.java:273)
 ~[flink-dist-1.19-x.jar:1.19-x]
at 
org.apache.flink.client.program.DefaultPackagedProgramRetriever.create(DefaultPackagedProgramRetriever.java:121)
 ~[flink-dist-1.19-x.jar:1.19-x]
... 4 more
{code}
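
For context, the "unknown protocol: s3" error above is stock JDK behaviour
rather than anything Flink-specific: {{java.net.URL}} rejects any scheme that
has no registered URLStreamHandler. A minimal sketch reproducing just that JDK
behaviour:

{code}
public class UrlSchemeDemo {
    public static void main(String[] args) throws Exception {
        // "file" has a built-in URLStreamHandler, so this parses fine:
        new java.net.URL("file:///tmp/job.jar");
        // "s3" has no registered handler, so this throws
        // java.net.MalformedURLException: unknown protocol: s3
        new java.net.URL("s3://bucket/path/job.jar");
    }
}
{code}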

> Make application mode could support remote DFS schema(e.g. S3, OSS, HDFS, 
> etc.)
> ---
>
> Key: FLINK-28915
> URL: https://issues.apache.org/jira/browse/FLINK-28915
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, flink-contrib
>Reporter: hjw
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> As the Flink documentation shows, local is the only supported scheme in native
> k8s deployment.
> Is there a plan to support the s3 filesystem? Thanks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix] Improve execution.checkpointing.unaligned.interruptible-timers.enabled documentation [flink]

2024-06-12 Thread via GitHub


flinkbot commented on PR #24926:
URL: https://github.com/apache/flink/pull/24926#issuecomment-2162942962

   
   ## CI report:
   
   * b88ad00e20b5f2e76c3d7eb02c7adc3239faed8a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-20217) More fine-grained timer processing

2024-06-12 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-20217.
--
Release Note: 
Firing timers can now be interrupted to speed up checkpointing. Timers that
were interrupted by a checkpoint will be fired shortly after the checkpoint
completes. By default this feature is disabled. To enable it, please set:

execution.checkpointing.unaligned.interruptible-timers.enabled

to true. Currently this is supported only by all TableStreamOperators and the
CepOperator.
  Resolution: Fixed
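
For illustration, the flag can also be enabled programmatically. This is a
minimal sketch, assuming the generic string-based {{Configuration}} setter;
setting the same key in flink-conf.yaml works equally well:

{code}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class InterruptibleTimersSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Opt in to interruptible timer firing (disabled by default):
        conf.setString(
                "execution.checkpointing.unaligned.interruptible-timers.enabled",
                "true");
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        // ... build and execute the job as usual ...
    }
}
{code}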

> More fine-grained timer processing
> --
>
> Key: FLINK-20217
> URL: https://issues.apache.org/jira/browse/FLINK-20217
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Task
>Affects Versions: 1.10.2, 1.11.2, 1.12.0
>Reporter: Nico Kruber
>Assignee: Piotr Nowojski
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.20.0
>
>
> Timers are currently processed in one big block under the checkpoint lock 
> (under {{InternalTimerServiceImpl#advanceWatermark}}. This can be problematic 
> in a number of scenarios while doing checkpointing which would lead to 
> checkpoints timing out (and even unaligned checkpoints would not help).
> If you have a huge number of timers to process when advancing the watermark 
> and the task is also back-pressured, the situation may actually be worse 
> since you would block on the checkpoint lock and also wait for 
> buffers/credits from the receiver.
> I propose to make this loop more fine-grained so that it is interruptible by 
> checkpoints, but maybe there is also some other way to improve here.
> This issue has been for example observed here: 
> https://lists.apache.org/thread/f6ffk9912fg5j1rfkxbzrh0qmp4w6qry



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-20217) More fine-grained timer processing

2024-06-12 Thread Piotr Nowojski (Jira)


[ https://issues.apache.org/jira/browse/FLINK-20217 ]


Piotr Nowojski deleted comment on FLINK-20217:


was (Author: pnowojski):
Firing timers can now be interrupted to speed up checkpointing. Timers that
were interrupted by a checkpoint will be fired shortly after the checkpoint
completes. By default this feature is disabled. To enable it, please set:

execution.checkpointing.unaligned.interruptible-timers.enabled

to true.

> More fine-grained timer processing
> --
>
> Key: FLINK-20217
> URL: https://issues.apache.org/jira/browse/FLINK-20217
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Task
>Affects Versions: 1.10.2, 1.11.2, 1.12.0
>Reporter: Nico Kruber
>Assignee: Piotr Nowojski
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.20.0
>
>
> Timers are currently processed in one big block under the checkpoint lock 
> (under {{InternalTimerServiceImpl#advanceWatermark}}. This can be problematic 
> in a number of scenarios while doing checkpointing which would lead to 
> checkpoints timing out (and even unaligned checkpoints would not help).
> If you have a huge number of timers to process when advancing the watermark 
> and the task is also back-pressured, the situation may actually be worse 
> since you would block on the checkpoint lock and also wait for 
> buffers/credits from the receiver.
> I propose to make this loop more fine-grained so that it is interruptible by 
> checkpoints, but maybe there is also some other way to improve here.
> This issue has been for example observed here: 
> https://lists.apache.org/thread/f6ffk9912fg5j1rfkxbzrh0qmp4w6qry



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (FLINK-20217) More fine-grained timer processing

2024-06-12 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reopened FLINK-20217:


> More fine-grained timer processing
> --
>
> Key: FLINK-20217
> URL: https://issues.apache.org/jira/browse/FLINK-20217
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Task
>Affects Versions: 1.10.2, 1.11.2, 1.12.0
>Reporter: Nico Kruber
>Assignee: Piotr Nowojski
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.20.0
>
>
> Timers are currently processed in one big block under the checkpoint lock 
> (under {{InternalTimerServiceImpl#advanceWatermark}}. This can be problematic 
> in a number of scenarios while doing checkpointing which would lead to 
> checkpoints timing out (and even unaligned checkpoints would not help).
> If you have a huge number of timers to process when advancing the watermark 
> and the task is also back-pressured, the situation may actually be worse 
> since you would block on the checkpoint lock and also wait for 
> buffers/credits from the receiver.
> I propose to make this loop more fine-grained so that it is interruptible by 
> checkpoints, but maybe there is also some other way to improve here.
> This issue has been for example observed here: 
> https://lists.apache.org/thread/f6ffk9912fg5j1rfkxbzrh0qmp4w6qry



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [hotfix] Improve execution.checkpointing.unaligned.interruptible-timers.enabled documentation [flink]

2024-06-12 Thread via GitHub


pnowojski opened a new pull request, #24926:
URL: https://github.com/apache/flink/pull/24926

   Simple documentation improvement.
   
   ## Verifying this change
   
   N/A
   
   ## Does this pull request potentially affect one of the following parts:
   
   N/A
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / N/A)
 - If yes, how is the feature documented? (not applicable / N/A / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35584) Support Java 21 in flink-docker

2024-06-12 Thread Josh England (Jira)
Josh England created FLINK-35584:


 Summary: Support Java 21 in flink-docker
 Key: FLINK-35584
 URL: https://issues.apache.org/jira/browse/FLINK-35584
 Project: Flink
  Issue Type: Improvement
  Components: flink-docker
Reporter: Josh England


Support Java 21. Base images are available for 8, 11, and 17, but since Apache
Flink now supports Java 21 (albeit in beta) it would be good to have a base
image for that too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-20217) More fine-grained timer processing

2024-06-12 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-20217.
--
Fix Version/s: 1.20.0
   Resolution: Fixed

Firing timers can now be interrupted to speed up checkpointing. Timers that 
were interrupted by a checkpoint will be fired shortly after the checkpoint 
completes. By default this feature is disabled. To enable it, please set:

execution.checkpointing.unaligned.interruptible-timers.enabled

to true.
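
For illustration, a minimal sketch of enabling the option when building a job 
(the option key is taken from the resolution above; the surrounding setup is 
generic and not specific to this change):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class InterruptibleTimersExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Disabled by default; opt in explicitly.
        conf.setString(
                "execution.checkpointing.unaligned.interruptible-timers.enabled", "true");
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        // ... build and execute the job as usual.
    }
}
```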

> More fine-grained timer processing
> --
>
> Key: FLINK-20217
> URL: https://issues.apache.org/jira/browse/FLINK-20217
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Task
>Affects Versions: 1.10.2, 1.11.2, 1.12.0
>Reporter: Nico Kruber
>Assignee: Piotr Nowojski
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.20.0
>
>
> Timers are currently processed in one big block under the checkpoint lock 
> (under {{InternalTimerServiceImpl#advanceWatermark}}). This can be problematic 
> in a number of scenarios while doing checkpointing which would lead to 
> checkpoints timing out (and even unaligned checkpoints would not help).
> If you have a huge number of timers to process when advancing the watermark 
> and the task is also back-pressured, the situation may actually be worse 
> since you would block on the checkpoint lock and also wait for 
> buffers/credits from the receiver.
> I propose to make this loop more fine-grained so that it is interruptible by 
> checkpoints, but maybe there is also some other way to improve here.
> This issue has been for example observed here: 
> https://lists.apache.org/thread/f6ffk9912fg5j1rfkxbzrh0qmp4w6qry



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35528) Skip execution of interruptible mails when yielding

2024-06-12 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-35528.
--
Fix Version/s: 1.20.0
   Resolution: Fixed

merged commit 77cd1cd into apache:master

> Skip execution of interruptible mails when yielding
> ---
>
> Key: FLINK-35528
> URL: https://issues.apache.org/jira/browse/FLINK-35528
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / Task
>Affects Versions: 1.20.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> When operators are yielding, for example waiting for async state access to 
> complete before a checkpoint, it would be beneficial to not execute 
> interruptible mails. Otherwise the continuation mail for firing timers would 
> be continuously re-enqueued. To achieve that, the MailboxExecutor must be 
> aware of which mails are interruptible.
> The easiest way to achieve this is to set MIN_PRIORITY for interruptible 
> mails.
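
To make the MIN_PRIORITY idea concrete, here is a minimal, self-contained 
sketch of a mailbox that skips low-priority mails while yielding. This is an 
illustrative model only; `Mail`, `SimpleMailbox` and `tryYield` are 
hypothetical names, not Flink's actual classes:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

class Mail {
    final Runnable action;
    final int priority;

    Mail(Runnable action, int priority) {
        this.action = action;
        this.priority = priority;
    }
}

class SimpleMailbox {
    // Interruptible mails are enqueued with the lowest possible priority...
    static final int MIN_PRIORITY = Integer.MIN_VALUE;

    private final Deque<Mail> queue = new ArrayDeque<>();

    void put(Mail mail) {
        queue.addLast(mail);
    }

    // ...so a yielding operator, which only runs mails with a priority above
    // MIN_PRIORITY, never executes them; they stay queued until the default
    // mailbox loop resumes after the checkpoint.
    boolean tryYield() {
        for (Iterator<Mail> it = queue.iterator(); it.hasNext(); ) {
            Mail mail = it.next();
            if (mail.priority > MIN_PRIORITY) {
                it.remove();
                mail.action.run();
                return true;
            }
        }
        return false; // only interruptible mails left; nothing to run while yielding
    }
}
```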



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35140][docs] Update Opensearch connector docs for 1.2.0 and 2.0.0 releases [flink]

2024-06-12 Thread via GitHub


snuyanzin commented on code in PR #24921:
URL: https://github.com/apache/flink/pull/24921#discussion_r1636362092


##
docs/setup_docs.sh:
##
@@ -59,7 +59,7 @@ if [ "$SKIP_INTEGRATE_CONNECTOR_DOCS" = false ]; then
   integrate_connector_docs rabbitmq v3.0
   integrate_connector_docs gcp-pubsub v3.0
   integrate_connector_docs mongodb v1.2
-  integrate_connector_docs opensearch v1.1
+  integrate_connector_docs opensearch v1.2

Review Comment:
   docs are the same for both
   
   also a question: will the docs integration support multiple branches per 
connector name?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35528][task] Skip execution of interruptible mails when yielding [flink]

2024-06-12 Thread via GitHub


pnowojski merged PR #24904:
URL: https://github.com/apache/flink/pull/24904


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-20217) More fine-grained timer processing

2024-06-12 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854413#comment-17854413
 ] 

Piotr Nowojski commented on FLINK-20217:


Merged to master as 503593ab2d4..986d06d2cd7

> More fine-grained timer processing
> --
>
> Key: FLINK-20217
> URL: https://issues.apache.org/jira/browse/FLINK-20217
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Task
>Affects Versions: 1.10.2, 1.11.2, 1.12.0
>Reporter: Nico Kruber
>Assignee: Piotr Nowojski
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> Timers are currently processed in one big block under the checkpoint lock 
> (under {{InternalTimerServiceImpl#advanceWatermark}}). This can be problematic 
> in a number of scenarios while doing checkpointing which would lead to 
> checkpoints timing out (and even unaligned checkpoints would not help).
> If you have a huge number of timers to process when advancing the watermark 
> and the task is also back-pressured, the situation may actually be worse 
> since you would block on the checkpoint lock and also wait for 
> buffers/credits from the receiver.
> I propose to make this loop more fine-grained so that it is interruptible by 
> checkpoints, but maybe there is also some other way to improve here.
> This issue has been for example observed here: 
> https://lists.apache.org/thread/f6ffk9912fg5j1rfkxbzrh0qmp4w6qry



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-20217][task] Allow certains operators to yield to unaligned checkpoint in case timers are firing [flink]

2024-06-12 Thread via GitHub


pnowojski merged PR #24895:
URL: https://github.com/apache/flink/pull/24895


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35140][docs] Update Opensearch connector docs for 1.2.0 and 2.0.0 releases [flink]

2024-06-12 Thread via GitHub


dannycranmer commented on code in PR #24921:
URL: https://github.com/apache/flink/pull/24921#discussion_r1636357425


##
docs/setup_docs.sh:
##
@@ -59,7 +59,7 @@ if [ "$SKIP_INTEGRATE_CONNECTOR_DOCS" = false ]; then
   integrate_connector_docs rabbitmq v3.0
   integrate_connector_docs gcp-pubsub v3.0
   integrate_connector_docs mongodb v1.2
-  integrate_connector_docs opensearch v1.1
+  integrate_connector_docs opensearch v1.2

Review Comment:
   The PR says 1.2.0 and 2.0.0 but I only see changes for 1.2.0. How about 
2.0.0?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35569) SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging failed

2024-06-12 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854409#comment-17854409
 ] 

Jane Chan commented on FLINK-35569:
---

https://github.com/apache/flink/actions/runs/9480683299/job/26122328886

> SnapshotFileMergingCompatibilityITCase#testSwitchFromEnablingToDisablingFileMerging
>  failed
> --
>
> Key: FLINK-35569
> URL: https://issues.apache.org/jira/browse/FLINK-35569
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Build System / CI
>Affects Versions: 1.20.0
>Reporter: Jane Chan
>Assignee: Zakelly Lan
>Priority: Major
>
> [https://github.com/apache/flink/actions/runs/9467135511/job/26081097181]
> The parameterized test is failed when RestoreMode is "CLAIM" and 
> fileMergingAcrossBoundary is false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35509) Slack community invite link has expired

2024-06-12 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-35509.

Resolution: Fixed

> Slack community invite link has expired
> ---
>
> Key: FLINK-35509
> URL: https://issues.apache.org/jira/browse/FLINK-35509
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Ufuk Celebi
>Assignee: Santwana Verma
>Priority: Major
>  Labels: pull-request-available
>
> The Slack invite link on the website has expired.
> I've generated a new invite link without expiration here: 
> [https://join.slack.com/t/apache-flink/shared_invite/zt-2k0fdioxx-D0kTYYLh3pPjMu5IItqx3Q]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34111][table] Add JSON_QUOTE and JSON_UNQUOTE function [flink]

2024-06-12 Thread via GitHub


anupamaggarwal commented on PR #24156:
URL: https://github.com/apache/flink/pull/24156#issuecomment-2162795228

   Hi @jeyhunkarimov, thank you for contributing this PR! Just wanted to check 
whether you are planning to keep working on this; otherwise I'd be happy to 
continue and address some of the feedback. Please let me know.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35371][security] Add configuration for SSL keystore and truststore type [flink]

2024-06-12 Thread via GitHub


gaborgsomogyi commented on PR #24919:
URL: https://github.com/apache/flink/pull/24919#issuecomment-2162792331

   Unless there are further comments, I intend to merge this at the beginning 
of next week.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35157][runtime] Sources with watermark alignment get stuck once some subtasks finish [flink]

2024-06-12 Thread via GitHub


elon-X commented on code in PR #24757:
URL: https://github.com/apache/flink/pull/24757#discussion_r1636303927


##
flink-runtime/src/test/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorAlignmentTest.java:
##
@@ -310,6 +310,53 @@ void testWatermarkAggregatorRandomly() {
 testWatermarkAggregatorRandomly(10, 1, true, false);
 }
 
+@Test
+void testWatermarkAlignmentWhileSubtaskFinished() throws Exception {
+long maxDrift = 1000L;
+WatermarkAlignmentParams params =
+new WatermarkAlignmentParams(maxDrift, "group1", maxDrift);
+
+final Source> 
mockSource =
+createMockSource();
+
+sourceCoordinator =
+new SourceCoordinator>(
+OPERATOR_NAME,
+mockSource,
+getNewSourceCoordinatorContext(),
+new CoordinatorStoreImpl(),
+params,
+null) {
+@Override
+void announceCombinedWatermark() {
+super.announceCombinedWatermark();
+}
+};
+
+sourceCoordinator.start();
+
+int subtask0 = 0;
+int subtask1 = 1;
+
+setReaderTaskReady(sourceCoordinator, subtask0, 0);
+setReaderTaskReady(sourceCoordinator, subtask1, 0);
+registerReader(subtask0);
+registerReader(subtask1);
+
+reportWatermarkEvent(sourceCoordinator, subtask0, 42);
+assertLatestWatermarkAlignmentEvent(subtask0, 1042);
+
+reportWatermarkEvent(sourceCoordinator, subtask1, 44);
+assertLatestWatermarkAlignmentEvent(subtask1, 1042);
+
+// mock noMoreSplits event
+assertHasNoMoreSplits(subtask0, true);

Review Comment:
   My intention is to add a test case that simulates the sending of the 
noMoreSplits event and verifies that the `SourceCoordinator` handles it 
correctly. If we have `WatermarkAlignmentITCase`, perhaps we can remove 
`testWatermarkAlignmentWhileSubtaskFinished`. What do you think?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34990][cdc-connector][oracle] Oracle cdc support newly add table [flink-cdc]

2024-06-12 Thread via GitHub


gong commented on code in PR #3203:
URL: https://github.com/apache/flink-cdc/pull/3203#discussion_r1636249435


##
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-oracle-cdc/src/test/java/org/apache/flink/cdc/connectors/oracle/source/NewlyAddedTableITCase.java:
##
@@ -0,0 +1,915 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.connectors.oracle.source;
+
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import 
org.apache.flink.cdc.connectors.oracle.testutils.OracleTestUtils.FailoverPhase;
+import 
org.apache.flink.cdc.connectors.oracle.testutils.OracleTestUtils.FailoverType;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.checkpoint.CheckpointException;
+import org.apache.flink.runtime.jobgraph.SavepointConfigOptions;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.TableResult;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.planner.factories.TestValuesTableFactory;
+import org.apache.flink.util.ExceptionUtils;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.junit.rules.Timeout;
+
+import java.lang.reflect.Field;
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import static java.lang.String.format;
+import static 
org.apache.flink.cdc.connectors.oracle.testutils.OracleTestUtils.getTableNameRegex;
+import static 
org.apache.flink.cdc.connectors.oracle.testutils.OracleTestUtils.triggerFailover;
+import static 
org.apache.flink.cdc.connectors.oracle.testutils.OracleTestUtils.waitForSinkSize;
+import static 
org.apache.flink.cdc.connectors.oracle.testutils.OracleTestUtils.waitForUpsertSinkSize;
+
+/** IT tests to cover various newly added tables during capture process. */
+public class NewlyAddedTableITCase extends OracleSourceTestBase {
+@Rule public final Timeout timeoutPerTest = Timeout.seconds(600);
+
+private final ScheduledExecutorService mockRedoLogExecutor =
+Executors.newScheduledThreadPool(1);
+
+@BeforeClass
+public static void beforeClass() throws SQLException {
+try (Connection dbaConnection = getJdbcConnectionAsDBA();
+Statement dbaStatement = dbaConnection.createStatement()) {
+dbaStatement.execute("ALTER DATABASE ADD SUPPLEMENTAL LOG DATA 
(ALL) COLUMNS");
+}
+}
+
+@Before
+public void before() throws Exception {
+TestValuesTableFactory.clearAllData();
+createAndInitialize("customer.sql");
+try (Connection connection = getJdbcConnection()) {
+Statement statement = connection.createStatement();
+connection.setAutoCommit(false);
+// prepare initial data for given table
+String tableId = ORACLE_SCHEMA + ".PRODUCE_LOG_TABLE";
+statement.execute(
+format(
+"CREATE TABLE %s ( ID NUMBER(19), CNT NUMBER(19), 
PRIMARY KEY(ID))",
+tableId));
+statement.execute(format("INSERT INTO  %s VALUES (0, 100)", 
tableId));
+statement.execute(format("INSERT INTO  %s VALUES (1, 101)", 
tableId));
+statement.execute(format("INSERT INTO  %s VALUES (2, 102)", 
tableId));
+connection.commit();
+
+// mock continuous redo log during the newly added table capturing 
process
+mockRedoLogExecutor.schedule(
+() -> {
+try {
+executeSql(format("UPDATE 

[jira] [Closed] (FLINK-35533) FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn

2024-06-12 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-35533.
--
Fix Version/s: 1.20.0
   Resolution: Done

master (1.20) via 23fa2aeb01861a6f8b85f911c5a3f989a1f75898.

> FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn
> ---
>
> Key: FLINK-35533
> URL: https://issues.apache.org/jira/browse/FLINK-35533
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.20.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> This is the jira for 
> [FLIP-459|https://cwiki.apache.org/confluence/display/FLINK/FLIP-459%3A+Support+Flink+hybrid+shuffle+integration+with+Apache+Celeborn].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35533][runtime] Support Flink hybrid shuffle integration with Apache Celeborn [flink]

2024-06-12 Thread via GitHub


reswqa merged PR #24900:
URL: https://github.com/apache/flink/pull/24900


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-25537] [JUnit5 Migration] Module: flink-core with,Package: core [flink]

2024-06-12 Thread via GitHub


1996fanrui commented on code in PR #24881:
URL: https://github.com/apache/flink/pull/24881#discussion_r1636152962


##
flink-core/src/test/java/org/apache/flink/core/fs/InitOutputPathTest.java:
##
@@ -23,59 +23,52 @@
 import org.apache.flink.core.fs.local.LocalFileSystem;
 import org.apache.flink.core.testutils.CheckedThread;
 import org.apache.flink.core.testutils.OneShotLatch;
+import org.apache.flink.testutils.junit.utils.TempDirUtils;
 
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TemporaryFolder;
-import org.junit.runner.RunWith;
-import org.mockito.invocation.InvocationOnMock;
-import org.mockito.stubbing.Answer;
-import org.powermock.core.classloader.annotations.PrepareForTest;
-import org.powermock.modules.junit4.PowerMockRunner;
+import lombok.SneakyThrows;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.io.TempDir;
 
 import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.lang.reflect.Field;
+import java.lang.reflect.Modifier;
+import java.nio.file.FileAlreadyExistsException;
 import java.util.concurrent.locks.ReentrantLock;
 
-import static org.junit.Assert.fail;
-import static org.powermock.api.mockito.PowerMockito.whenNew;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.junit.Assert.assertThrows;
 
 /** A test validating that the initialization of local output paths is 
properly synchronized. */
-@RunWith(PowerMockRunner.class)
-@PrepareForTest(LocalFileSystem.class)
-public class InitOutputPathTest {
+class InitOutputPathTest {
 
-@Rule public final TemporaryFolder tempDir = new TemporaryFolder();
+@TempDir private static java.nio.file.Path tempFolder;
 
 /**
  * This test validates that this test case makes sense - that the error 
can be produced in the
  * absence of synchronization, if the threads make progress in a certain 
way, here enforced by
  * latches.
  */
 @Test
-public void testErrorOccursUnSynchronized() throws Exception {
+void testErrorOccursUnSynchronized() throws Exception {
 // deactivate the lock to produce the original un-synchronized state
 Field lock = 
FileSystem.class.getDeclaredField("OUTPUT_DIRECTORY_INIT_LOCK");
 lock.setAccessible(true);
-lock.set(null, new NoOpLock());
 
-try {
-// in the original un-synchronized state, we can force the race to 
occur by using
-// the proper latch order to control the process of the concurrent 
threads
-runTest(true);
-fail("should fail with an exception");
-} catch (FileNotFoundException e) {
-// expected
-} finally {
-// reset the proper value
-lock.set(null, new ReentrantLock(true));
-}
+Field modifiers = Field.class.getDeclaredField("modifiers");
+modifiers.setAccessible(true);
+modifiers.setInt(lock, lock.getModifiers() & ~Modifier.FINAL);
+
+lock.set(null, new NoOpLock());
+// in the original un-synchronized state, we can force the race to 
occur by using
+// the proper latch order to control the process of the concurrent 
threads
+assertThrows(FileNotFoundException.class, () -> runTest(true));

Review Comment:
   using assertThatThrownBy instead.
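   
   For reference, the suggested AssertJ pattern would look roughly like this 
(a sketch; the lambda and exception type are taken from the diff above):
   
   ```java
   import static org.assertj.core.api.Assertions.assertThatThrownBy;
   
   // AssertJ equivalent of JUnit's assertThrows, with a fluent assertion chain:
   assertThatThrownBy(() -> runTest(true))
           .isInstanceOf(FileNotFoundException.class);
   ```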



##
flink-core/src/test/java/org/apache/flink/core/fs/SafetyNetCloseableRegistryTest.java:
##
@@ -20,26 +20,27 @@
 
 import org.apache.flink.core.testutils.CheckedThread;
 import org.apache.flink.util.AbstractAutoCloseableRegistry;
-import org.apache.flink.util.ExceptionUtils;
 
-import org.junit.After;
-import org.junit.Assert;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TemporaryFolder;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.io.TempDir;
 
 import java.io.Closeable;
+import java.io.File;
 import java.io.IOException;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
 /** Tests for the {@link SafetyNetCloseableRegistry}. */
 public class SafetyNetCloseableRegistryTest

Review Comment:
   ```suggestion
   class SafetyNetCloseableRegistryTest
   ```



##
flink-core/src/test/java/org/apache/flink/core/fs/local/LocalFileSystemRecoverableWriterTest.java:
##
@@ -21,18 +21,18 @@
 import org.apache.flink.core.fs.AbstractRecoverableWriterTest;
 import org.apache.flink.core.fs.FileSystem;
 import org.apache.flink.core.fs.Path;
+import org.apache.flink.testutils.junit.utils.TempDirUtils;
 
-import org.junit.ClassRule;
-import org.junit.rules.TemporaryFolder;
+import org.junit.jupiter.api.io.TempDir;
 
 /** Tests for the {@link LocalRecoverableWriter}. */
 public class LocalFileSystemRecoverableWriterTest extends 
AbstractRecoverableWriterTest {

Review 

[jira] [Assigned] (FLINK-35573) [FLIP-447] Upgrade FRocksDB from 6.20.3 to 8.10.0

2024-06-12 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan reassigned FLINK-35573:
---

Assignee: Yue Ma

> [FLIP-447] Upgrade FRocksDB from 6.20.3 to 8.10.0
> -
>
> Key: FLINK-35573
> URL: https://issues.apache.org/jira/browse/FLINK-35573
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 2.0.0
>Reporter: Yue Ma
>Assignee: Yue Ma
>Priority: Major
> Fix For: 2.0.0
>
>
> The FLIP: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-447%3A+Upgrade+FRocksDB+from+6.20.3++to+8.10.0]
>  
> _RocksDBStateBackend is widely used by Flink users in large state 
> scenarios. The last upgrade of FRocksDB was in Flink 1.14, which mainly 
> added features such as ARM platform support, the deleteRange API, periodic 
> compaction, etc. It has been a long time since then, and RocksDB has now 
> been released up to version 8.x. The main motivation for this upgrade is to 
> leverage the features of higher versions of RocksDB to make the Flink 
> RocksDBStateBackend more powerful. Since RocksDB is also continuously being 
> optimized and bug-fixed, we hope to keep FRocksDB more or less in sync with 
> RocksDB and upgrade it periodically._



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34123) Introduce built-in serialization support for Map and List

2024-06-12 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854356#comment-17854356
 ] 

Weijie Guo commented on FLINK-34123:


Reverted in 894feed5f3eb4fb38c53001a7cc8727d7fe46bb5.

> Introduce built-in serialization support for Map and List
> -
>
> Key: FLINK-34123
> URL: https://issues.apache.org/jira/browse/FLINK-34123
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Affects Versions: 1.20.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Introduce built-in serialization support for Map and List, two common 
> collection types for which Flink already has custom serializers implemented.
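
A short usage sketch for context: Flink's existing public Types factory 
already lets users declare this type information explicitly (the ticket is 
about having the type extraction produce it automatically):

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

import java.util.List;
import java.util.Map;

public class CollectionTypeInfoExample {
    public static void main(String[] args) {
        // Explicit TypeInformation for List<String> and Map<String, Long>,
        // backed by Flink's existing ListSerializer / MapSerializer.
        TypeInformation<List<String>> listInfo = Types.LIST(Types.STRING);
        TypeInformation<Map<String, Long>> mapInfo = Types.MAP(Types.STRING, Types.LONG);
        System.out.println(listInfo + " / " + mapInfo);
    }
}
```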



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

