Re: [PR] [FLINK-33853][runtime][JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module [flink]

2023-12-14 Thread via GitHub


flinkbot commented on PR #23932:
URL: https://github.com/apache/flink/pull/23932#issuecomment-1857412172

   
   ## CI report:
   
   * bd328df2d2f1dea0cbe121e16a83734451e19306 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-27876][table-planner] Choose the right side as build side when using default shuffle hash strategy if left size is equal with right [flink]

2023-12-14 Thread via GitHub


xuyangzhong commented on PR #19866:
URL: https://github.com/apache/flink/pull/19866#issuecomment-1857405822

   Hi, @lsyldliu. I rebased onto master. Could you take another look?





[jira] [Updated] (FLINK-33853) [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module

2023-12-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33853:
---
Labels: pull-request-available  (was: )

> [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes 
> of runtime module
> --
>
> Key: FLINK-33853
> URL: https://issues.apache.org/jira/browse/FLINK-33853
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: RocMarshal
>Priority: Minor
>  Labels: pull-request-available
>






[PR] [FLINK-33853][runtime][JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module [flink]

2023-12-14 Thread via GitHub


RocMarshal opened a new pull request, #23932:
URL: https://github.com/apache/flink/pull/23932

   
   
   
   ## What is the purpose of the change
   
   [FLINK-33853][runtime][JUnit5 Migration] Migrate Junit5 for 
DeclarativeSlotPoolBridge test classes of runtime module
   
   
   ## Brief change log
   
   - DeclarativeSlotPoolBridgeRequestCompletionTest
   - DeclarativeSlotPoolBridgeResourceDeclarationTest
   - DeclarativeSlotPoolBridgePreferredAllocationsTest
   - DeclarativeSlotPoolBridgeTest
   
   ## Verifying this change
   
   N/A
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[jira] [Updated] (FLINK-33853) [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes of runtime module

2023-12-14 Thread RocMarshal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RocMarshal updated FLINK-33853:
---
Summary: [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge 
test classes of runtime module  (was: [JUnit5 Migration] Migrate Junit5 for 
DefaultDeclarativeSlotPool test classes of runtime module)

> [JUnit5 Migration] Migrate Junit5 for DeclarativeSlotPoolBridge test classes 
> of runtime module
> --
>
> Key: FLINK-33853
> URL: https://issues.apache.org/jira/browse/FLINK-33853
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: RocMarshal
>Priority: Minor
>






[jira] [Created] (FLINK-33853) Migrate Junit5 for DefaultDeclarativeSlotPool test classes

2023-12-14 Thread RocMarshal (Jira)
RocMarshal created FLINK-33853:
--

 Summary: Migrate Junit5 for DefaultDeclarativeSlotPool test classes
 Key: FLINK-33853
 URL: https://issues.apache.org/jira/browse/FLINK-33853
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Reporter: RocMarshal








[jira] [Updated] (FLINK-33853) [JUnit5 Migration] Migrate Junit5 for DefaultDeclarativeSlotPool test classes of runtime module

2023-12-14 Thread RocMarshal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RocMarshal updated FLINK-33853:
---
Summary: [JUnit5 Migration] Migrate Junit5 for DefaultDeclarativeSlotPool 
test classes of runtime module  (was: Migrate Junit5 for 
DefaultDeclarativeSlotPool test classes)

> [JUnit5 Migration] Migrate Junit5 for DefaultDeclarativeSlotPool test classes 
> of runtime module
> ---
>
> Key: FLINK-33853
> URL: https://issues.apache.org/jira/browse/FLINK-33853
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: RocMarshal
>Priority: Minor
>






[jira] [Updated] (FLINK-33450) Implement JdbcAutoScalerStateStore

2023-12-14 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-33450:

Description: Design doc: 
https://docs.google.com/document/d/1lE4s3ZAyCfzYT4dNOVarz5dIctQ8UUAVqjPKlQ1XU1c/edit?usp=sharing

> Implement JdbcAutoScalerStateStore
> --
>
> Key: FLINK-33450
> URL: https://issues.apache.org/jira/browse/FLINK-33450
> Project: Flink
>  Issue Type: Sub-task
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>
> Design doc: 
> https://docs.google.com/document/d/1lE4s3ZAyCfzYT4dNOVarz5dIctQ8UUAVqjPKlQ1XU1c/edit?usp=sharing





Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-12-14 Thread via GitHub


1996fanrui commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1427526869


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/DefaultDeclarativeSlotPool.java:
##
@@ -103,12 +106,35 @@ public class DefaultDeclarativeSlotPool implements DeclarativeSlotPool {
 
     private final RequirementMatcher requirementMatcher = new DefaultRequirementMatcher();
 
+    // For batch slots requests
+    @Nullable private final Time slotRequestMaxInterval;
+    @Nullable private final ComponentMainThreadExecutor componentMainThreadExecutor;

Review Comment:
   If we introduce `ComponentMainThreadExecutor`, I'd prefer it to be `@Nonnull` even if `slotRequestMaxInterval` is null.
   
   This thread executor may be used for other scenarios, so it would be better not to bind it to `slotRequestMaxInterval`.



##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/DefaultDeclarativeSlotPool.java:
##
@@ -103,12 +106,35 @@ public class DefaultDeclarativeSlotPool implements DeclarativeSlotPool {
 
     private final RequirementMatcher requirementMatcher = new DefaultRequirementMatcher();
 
+    // For batch slots requests
+    @Nullable private final Time slotRequestMaxInterval;
+    @Nullable private final ComponentMainThreadExecutor componentMainThreadExecutor;
+    @Nullable private ScheduledFuture<?> slotRequestMaxIntervalTimeoutFuture;
+
     public DefaultDeclarativeSlotPool(
             JobID jobId,
             AllocatedSlotPool slotPool,
             Consumer<? super Collection<ResourceRequirement>> notifyNewResourceRequirements,
             Time idleSlotTimeout,
             Time rpcTimeout) {
+        this(
+                jobId,
+                slotPool,
+                notifyNewResourceRequirements,
+                idleSlotTimeout,
+                rpcTimeout,
+                null,
+                null);
+    }
+
+    public DefaultDeclarativeSlotPool(
+            JobID jobId,
+            AllocatedSlotPool slotPool,
+            Consumer<? super Collection<ResourceRequirement>> notifyNewResourceRequirements,
+            Time idleSlotTimeout,
+            Time rpcTimeout,
+            @Nullable Time slotRequestMaxInterval,
+            @Nullable ComponentMainThreadExecutor componentMainThreadExecutor) {

Review Comment:
   They have been marked `@Nullable`; do we still need two constructors?
   
   Maybe one constructor with full parameters is enough.
   
   Note: `BlocklistDeclarativeSlotPool` is the same.
   
   
   Also, this commit is part of FLINK-33388, right? Why doesn't it introduce `slotRequestMaxInterval` in this commit? I didn't see any caller pass `slotRequestMaxInterval` in this PR.
   
   It would also be better for the commit message to include the corresponding JIRA id.






[jira] [Comment Edited] (FLINK-32850) [JUnit5 Migration] The io package of flink-runtime module

2023-12-14 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17779841#comment-17779841
 ] 

Leonard Xu edited comment on FLINK-32850 at 12/15/23 6:58 AM:
--

Merged master (1.19) via:

0ccd95ef48bcd7246f8c88c9aa7b69ffa268c865
1c884ab48372f7a66f86c28aeaf9518000c7f357


was (Author: fanrui):
Merged master (1.19) via:

0ccd95ef48bcd7246f8c88c9aa7b69ffa268c865

> [JUnit5 Migration] The io package of flink-runtime module
> -
>
> Key: FLINK-32850
> URL: https://issues.apache.org/jira/browse/FLINK-32850
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Rui Fan
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
>






Re: [PR] [FLINK-32850][flink-runtime][JUnit5 Migration] The io.network.netty package of flink-runtime module [flink]

2023-12-14 Thread via GitHub


leonardBang merged PR #23607:
URL: https://github.com/apache/flink/pull/23607





Re: [PR] [FLINK-33798][statebackend/rocksdb] automatically clean up rocksdb logs when the task exited. [flink]

2023-12-14 Thread via GitHub


1996fanrui commented on code in PR #23922:
URL: https://github.com/apache/flink/pull/23922#discussion_r1427612626


##
flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBStateBackendConfigTest.java:
##
@@ -359,6 +361,50 @@ public void testDbPathRelativePaths() throws Exception {
         rocksDbBackend.setDbStoragePath("relative/path");
     }
 
+    @Test
+    public void testCleanRelocatedDbLogs() throws Exception {
+        final File folder = tempFolder.newFolder();
+        final File relocatedDBLogDir = tempFolder.newFolder("db_logs");
+        final File logFile = new File(relocatedDBLogDir, "taskManager.log");
+        Files.createFile(logFile.toPath());
+        System.setProperty("log.file", logFile.getAbsolutePath());
+
+        final String dbStoragePath = new Path(folder.toURI().toString()).toString();
+        final EmbeddedRocksDBStateBackend rocksDbBackend = new EmbeddedRocksDBStateBackend();
+        rocksDbBackend.setDbStoragePath(dbStoragePath);
+
+        final MockEnvironment env = getMockEnvironment(tempFolder.newFolder());
+        RocksDBKeyedStateBackend<Integer> keyedBackend =
+                createKeyedStateBackend(rocksDbBackend, env, IntSerializer.INSTANCE);
+        // clear unused file
+        FileUtils.deleteFileOrDirectory(logFile);
+
+        File instanceBasePath = keyedBackend.getInstanceBasePath();
+        File instanceRocksDBPath =
+                RocksDBKeyedStateBackendBuilder.getInstanceRocksDBPath(instanceBasePath);
+
+        // avoid tests without relocate.
+        Assume.assumeTrue(instanceRocksDBPath.getAbsolutePath().length() <= 255 - "_LOG".length());
+
+        String relocatedDbLogPrefix =
+                RocksDBResourceContainer.resolveRelocatedDbLogPrefix(
+                        instanceRocksDBPath.getAbsolutePath());
+        java.nio.file.Path[] relocatedDbLogs;
+        try {
+            relocatedDbLogs = FileUtils.listDirectory(relocatedDBLogDir.toPath());
+            assertTrue(relocatedDbLogs[0].getFileName().startsWith(relocatedDbLogPrefix));
+            // add a rolled log file
+            Files.createTempFile(relocatedDBLogDir.toPath(), relocatedDbLogPrefix, ".suffix");

Review Comment:
   Could we let RocksDB create the LOG file and then call close to clean it up?
   
   If so, we wouldn't need to call `resolveRelocatedDbLogPrefix`. The current test just creates a file based on `resolveRelocatedDbLogPrefix` and then cleans it up based on the same method.
   
   If `resolveRelocatedDbLogPrefix` has a bug, or its rule doesn't match the RocksDB side, this test cannot catch it.
   
   Also, if we let RocksDB create the LOG file, the current test will fail if the RocksDB log file naming rule changes in the future.
   
   WDYT?



##
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBResourceContainer.java:
##
@@ -438,4 +451,44 @@ private File resolveFileLocation(String logFilePath) {
         File logFile = new File(logFilePath);
         return (logFile.exists() && logFile.canRead()) ? logFile : null;
     }
+
+    /** Clean all relocated rocksdb logs. */
+    private void cleanRelocatedDbLogs() {
+        if (instanceRocksDBPath != null && relocatedDbLogBaseDir != null) {
+            LOG.info("Cleaning up relocated RocksDB logs: {}.", relocatedDbLogBaseDir);
+
+            String relocatedDbLogPrefix =
+                    resolveRelocatedDbLogPrefix(instanceRocksDBPath.getAbsolutePath());
+            try {
+                Arrays.stream(FileUtils.listDirectory(relocatedDbLogBaseDir))
+                        .filter(
+                                path ->
+                                        !Files.isDirectory(path)
+                                                && path.toFile()
+                                                        .getName()
+                                                        .startsWith(relocatedDbLogPrefix))
+                        .forEach(IOUtils::deleteFileQuietly);
+            } catch (IOException e) {
+                LOG.warn(
+                        "Could not list relocated RocksDB log directory: {}",
+                        relocatedDbLogBaseDir);
+            }
+        }
+    }
+
+    /**
+     * Resolve the prefix of rocksdb's log file name according to rocksdb's log file name rules.
+     * See https://github.com/ververica/frocksdb/blob/FRocksDB-6.20.3/file/filename.cc#L30.
+     *
+     * @param instanceRocksDBAbsolutePath The path where the rocksdb directory is located.
+     * @return Resolved rocksdb log name prefix.
+     */
+    public static String resolveRelocatedDbLogPrefix(String instanceRocksDBAbsolutePath) {

Review Comment:
   Based on the last comment, I'd prefer this method to be private.
   
   Also, if it ends up being a public method, please add the `@VisibleForTesting` annotation.




Re: [PR] [FLINK-32850][flink-runtime][JUnit5 Migration] The io.network.api package of flink-runtime module [flink]

2023-12-14 Thread via GitHub


leonardBang commented on PR #23590:
URL: https://github.com/apache/flink/pull/23590#issuecomment-1857368418

   @dawidwys Would you like to help review this PR?





Re: [PR] [FLINK-32850][flink-runtime][JUnit5 Migration] The io.network.buffer package of flink-runtime module [flink]

2023-12-14 Thread via GitHub


leonardBang commented on PR #23604:
URL: https://github.com/apache/flink/pull/23604#issuecomment-1857368007

   @reswqa Could you help review this PR?





Re: [PR] [ FLINK-33603][FileSystems] shade guava in gs-fs filesystem [flink]

2023-12-14 Thread via GitHub


singhravidutt commented on PR #23489:
URL: https://github.com/apache/flink/pull/23489#issuecomment-1857360296

   @flinkbot run azure





[jira] [Commented] (FLINK-33588) Fix Flink Checkpointing Statistics Bug

2023-12-14 Thread Tongtong Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17797041#comment-17797041
 ] 

Tongtong Zhu commented on FLINK-33588:
--

[~jingge] The CI report shows no problems; it is Success.

!image-2023-12-15-13-59-28-201.png!

> Fix Flink Checkpointing Statistics Bug
> --
>
> Key: FLINK-33588
> URL: https://issues.apache.org/jira/browse/FLINK-33588
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.5, 1.16.0, 1.17.0, 1.15.2, 1.14.6, 1.18.0, 1.17.1
>Reporter: Tongtong Zhu
>Assignee: Tongtong Zhu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1
>
> Attachments: FLINK-33588.patch, image-2023-12-11-17-35-23-391.png, 
> image-2023-12-13-11-35-43-780.png, image-2023-12-15-13-59-28-201.png, 
> newCommit-FLINK-33688.patch
>
>
> When the Flink task is first started, the checkpoint data is null due to the 
> lack of data, and Percentile throws a null pointer exception when calculating 
> the percentage. After multiple tests, I found that it is necessary to set an 
> initial value for the statistical data value of the checkpoint when the 
> checkpoint data is null (i.e. at the beginning of the task) to solve this 
> problem.
> The following is an abnormal description of the bug:
> 2023-09-13 15:02:54,608 ERROR 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler
>  [] - Unhandled exception.
> org.apache.commons.math3.exception.NullArgumentException: input array
>     at 
> org.apache.commons.math3.util.MathArrays.verifyValues(MathArrays.java:1650) 
> ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.AbstractUnivariateStatistic.test(AbstractUnivariateStatistic.java:158)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:272)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:241)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics$CommonMetricsSnapshot.getPercentile(DescriptiveStatisticsHistogramStatistics.java:159)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics.getQuantile(DescriptiveStatisticsHistogramStatistics.java:53)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.checkpoint.StatsSummarySnapshot.getQuantile(StatsSummarySnapshot.java:108)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.messages.checkpoints.StatsSummaryDto.valueOf(StatsSummaryDto.java:81)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.createCheckpointingStatistics(CheckpointingStatisticsHandler.java:129)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.handleRequest(CheckpointingStatisticsHandler.java:84)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.handleRequest(CheckpointingStatisticsHandler.java:58)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.AbstractAccessExecutionGraphHandler.handleRequest(AbstractAccessExecutionGraphHandler.java:68)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.AbstractExecutionGraphHandler.lambda$handleRequest$0(AbstractExecutionGraphHandler.java:87)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) 
> [?:1.8.0_151]
>     at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_151]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  
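To make the failure mode above concrete: commons-math3's `Percentile.evaluate` rejects a null or empty sample array, so a guard of roughly the following shape avoids the exception. This is only an illustrative sketch — the class and method names here are made up, and the actual fix lives in `DescriptiveStatisticsHistogramStatistics`:

```
import org.apache.commons.math3.stat.descriptive.rank.Percentile;

public class QuantileGuardSketch {
    // Guard against the null/empty sample array that exists before the first
    // checkpoint completes; Percentile.evaluate() throws NullArgumentException
    // on null input, as shown in the stack trace above.
    static double safeQuantile(double[] values, double quantile) {
        if (values == null || values.length == 0) {
            return 0.0; // default value while no checkpoint statistics exist yet
        }
        // Percentile.evaluate expects p in (0, 100]
        return new Percentile().evaluate(values, quantile * 100.0);
    }

    public static void main(String[] args) {
        System.out.println(safeQuantile(null, 0.5));                   // 0.0, no exception
        System.out.println(safeQuantile(new double[] {1, 2, 3}, 0.5)); // 2.0
    }
}
```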

[jira] [Updated] (FLINK-33588) Fix Flink Checkpointing Statistics Bug

2023-12-14 Thread Tongtong Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tongtong Zhu updated FLINK-33588:
-
Attachment: image-2023-12-15-13-59-28-201.png

> Fix Flink Checkpointing Statistics Bug
> --
>
> Key: FLINK-33588
> URL: https://issues.apache.org/jira/browse/FLINK-33588
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.5, 1.16.0, 1.17.0, 1.15.2, 1.14.6, 1.18.0, 1.17.1
>Reporter: Tongtong Zhu
>Assignee: Tongtong Zhu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1
>
> Attachments: FLINK-33588.patch, image-2023-12-11-17-35-23-391.png, 
> image-2023-12-13-11-35-43-780.png, image-2023-12-15-13-59-28-201.png, 
> newCommit-FLINK-33688.patch
>
>
> When the Flink task is first started, the checkpoint data is null due to the 
> lack of data, and Percentile throws a null pointer exception when calculating 
> the percentage. After multiple tests, I found that it is necessary to set an 
> initial value for the statistical data value of the checkpoint when the 
> checkpoint data is null (i.e. at the beginning of the task) to solve this 
> problem.
> The following is an abnormal description of the bug:
> 2023-09-13 15:02:54,608 ERROR 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler
>  [] - Unhandled exception.
> org.apache.commons.math3.exception.NullArgumentException: input array
>     at 
> org.apache.commons.math3.util.MathArrays.verifyValues(MathArrays.java:1650) 
> ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.AbstractUnivariateStatistic.test(AbstractUnivariateStatistic.java:158)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:272)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:241)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics$CommonMetricsSnapshot.getPercentile(DescriptiveStatisticsHistogramStatistics.java:159)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics.getQuantile(DescriptiveStatisticsHistogramStatistics.java:53)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.checkpoint.StatsSummarySnapshot.getQuantile(StatsSummarySnapshot.java:108)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.messages.checkpoints.StatsSummaryDto.valueOf(StatsSummaryDto.java:81)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.createCheckpointingStatistics(CheckpointingStatisticsHandler.java:129)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.handleRequest(CheckpointingStatisticsHandler.java:84)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.handleRequest(CheckpointingStatisticsHandler.java:58)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.AbstractAccessExecutionGraphHandler.handleRequest(AbstractAccessExecutionGraphHandler.java:68)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.AbstractExecutionGraphHandler.lambda$handleRequest$0(AbstractExecutionGraphHandler.java:87)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) 
> [?:1.8.0_151]
>     at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_151]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_151]
>     at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]




Re: [PR] [FLINK-33795] Add a new config to forbid autoscale execution in the configured excluded periods [flink-kubernetes-operator]

2023-12-14 Thread via GitHub


flashJd commented on code in PR #728:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/728#discussion_r1427581625


##
flink-autoscaler/pom.xml:
##
@@ -66,6 +66,12 @@ under the License.
             <artifactId>jackson-databind</artifactId>
         </dependency>
 
+        <dependency>
+            <groupId>org.quartz-scheduler</groupId>
+            <artifactId>quartz</artifactId>
+            <version>2.3.2</version>

Review Comment:
   sure, already extracted






Re: [PR] [FLINK-33509] Fix flaky test testNodeAffinity() in InitTaskManagerDecoratorTest.java [flink]

2023-12-14 Thread via GitHub


wanglijie95 commented on code in PR #23694:
URL: https://github.com/apache/flink/pull/23694#discussion_r1427580681


##
flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/decorators/InitTaskManagerDecoratorTest.java:
##
@@ -282,12 +282,22 @@ void testNodeAffinity() {
         assertThat(nodeSelectorTerms.size()).isEqualTo(1);
 
         List<NodeSelectorRequirement> requirements = nodeSelectorTerms.get(0).getMatchExpressions();
-        assertThat(requirements)
-                .containsExactlyInAnyOrder(
+        for (int i = 0; i < requirements.size(); i++) {

Review Comment:
   Hi @yijut2, I think we can simply change the logic as follows:
   ```
   assertThat(requirements).hasSize(1);
   NodeSelectorRequirement requirement = requirements.get(0);
   assertThat(requirement.getKey())
           .isEqualTo(flinkConfig.getString(KubernetesConfigOptions.KUBERNETES_NODE_NAME_LABEL));
   assertThat(requirement.getOperator()).isEqualTo("NotIn");
   assertThat(requirement.getValues())
           .containsExactlyInAnyOrderElementsOf(Arrays.asList("blockedNode2", "blockedNode1"));
   ```
   
   WDYT?






Re: [PR] [FLINK-27876][table-planner] Choose the right side as build side when using default shuffle hash strategy if left size is equal with right [flink]

2023-12-14 Thread via GitHub


xuyangzhong commented on code in PR #19866:
URL: https://github.com/apache/flink/pull/19866#discussion_r1427570136


##
flink-table/flink-table-planner/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/MultipleInputCreationTest.xml:
##
@@ -109,39 +109,6 @@ MultipleInput(readOrder=[0,1,0], 
members=[\nNestedLoopJoin(joinType=[LeftOuterJo
 :  +- LegacyTableSourceScan(table=[[default_catalog, default_database, x, 
source: [TestTableSource(a, b, c, nx)]]], fields=[a, b, c, nx])
 +- Exchange(distribution=[broadcast])
+- LegacyTableSourceScan(table=[[default_catalog, default_database, y, 
source: [TestTableSource(d, e, f, ny)]]], fields=[d, e, f, ny])
-]]>
-
-  
-  

Review Comment:
   Actually, this test is not deleted. I just deleted this XML file and let Flink generate it again, so the order of the tests in this file has changed.






Re: [PR] [FLINK-33541][table-planner] function RAND and RAND_INTEGER should return type nullable if the arguments are nullable [flink]

2023-12-14 Thread via GitHub


xuyangzhong commented on PR #23779:
URL: https://github.com/apache/flink/pull/23779#issuecomment-1857309968

   Hi @libenchao, sorry for the noise. I have updated this PR to test the non-deterministic results of RAND and RAND_INTEGER. Could you take another look?





[jira] [Created] (FLINK-33852) FLIP-403 High Availability Services for OLAP Scenarios

2023-12-14 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-33852:
--

 Summary: FLIP-403 High Availability Services for OLAP Scenarios
 Key: FLINK-33852
 URL: https://issues.apache.org/jira/browse/FLINK-33852
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Coordination
Reporter: Yangze Guo
Assignee: Yangze Guo








[jira] [Commented] (FLINK-32895) Introduce the max attempts for Exponential Delay Restart Strategy

2023-12-14 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17797017#comment-17797017
 ] 

Rui Fan commented on FLINK-32895:
-

Merged to master (1.19) via:

1c7b10873be475b73d78a083eda6be71fbb13c2b

80e71a47662c70e5cc0d96bfa3962bd37a6d020d

3d4d396e68e6cf7f49b7cc8d94b2c9516ffc2b96

> Introduce the max attempts for Exponential Delay Restart Strategy
> -
>
> Key: FLINK-32895
> URL: https://issues.apache.org/jira/browse/FLINK-32895
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
>
> Currently, Flink has 3 restart strategies: fixed-delay, failure-rate and exponential-delay.
> The exponential-delay strategy is suitable if a job continues to fail for a period of time. The fixed-delay and failure-rate strategies have a max-attempts mechanism; that means the job won't restart, and instead goes to failed, once the number of attempts exceeds the max-attempts threshold.
> The max-attempts mechanism is reasonable; Flink should not, and need not, restart the job infinitely if it keeps failing. However, exponential-delay doesn't have a max-attempts mechanism.
> I propose introducing `restart-strategy.exponential-delay.max-attempts-before-reset` to support the max-attempts mechanism for exponential-delay. It means Flink won't restart the job if the number of job failures before reset exceeds max-attempts-before-reset when exponential-delay is enabled.
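For reference, the proposed option would sit alongside the existing exponential-delay settings; an illustrative `flink-conf.yaml` fragment (values are examples only):

```
restart-strategy: exponential-delay
restart-strategy.exponential-delay.initial-backoff: 1 s
restart-strategy.exponential-delay.max-backoff: 1 min
restart-strategy.exponential-delay.backoff-multiplier: 2.0
# the new option proposed in this ticket
restart-strategy.exponential-delay.max-attempts-before-reset: 10
```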





[jira] [Resolved] (FLINK-32895) Introduce the max attempts for Exponential Delay Restart Strategy

2023-12-14 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-32895.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> Introduce the max attempts for Exponential Delay Restart Strategy
> -
>
> Key: FLINK-32895
> URL: https://issues.apache.org/jira/browse/FLINK-32895
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently, Flink has 3 restart strategies: fixed-delay, failure-rate and exponential-delay.
> The exponential-delay strategy is suitable if a job continues to fail for a period of time. The fixed-delay and failure-rate strategies have a max-attempts mechanism; that means the job won't restart, and instead goes to failed, once the number of attempts exceeds the max-attempts threshold.
> The max-attempts mechanism is reasonable; Flink should not, and need not, restart the job infinitely if it keeps failing. However, exponential-delay doesn't have a max-attempts mechanism.
> I propose introducing `restart-strategy.exponential-delay.max-attempts-before-reset` to support the max-attempts mechanism for exponential-delay. It means Flink won't restart the job if the number of job failures before reset exceeds max-attempts-before-reset when exponential-delay is enabled.





Re: [PR] [FLINK-20772][State] Make TtlValueState#update follow the description of interface about null values [flink]

2023-12-14 Thread via GitHub


Zakelly commented on PR #23928:
URL: https://github.com/apache/flink/pull/23928#issuecomment-1857273442

   @masteryhx Thanks so much for your review! I have updated my PR.





Re: [PR] [FLINK-32895][Scheduler] Introduce the max attempts for Exponential Delay Restart Strategy [flink]

2023-12-14 Thread via GitHub


1996fanrui merged PR #23247:
URL: https://github.com/apache/flink/pull/23247





Re: [PR] [FLINK-32895][Scheduler] Introduce the max attempts for Exponential Delay Restart Strategy [flink]

2023-12-14 Thread via GitHub


1996fanrui commented on PR #23247:
URL: https://github.com/apache/flink/pull/23247#issuecomment-1857272908

   Thanks everyone for the review, merging~





Re: [PR] [FLINK-20772][State] Make TtlValueState#update follow the description of interface about null values [flink]

2023-12-14 Thread via GitHub


Zakelly commented on code in PR #23928:
URL: https://github.com/apache/flink/pull/23928#discussion_r1427540869


##
flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/TtlValueStateTestContext.java:
##
@@ -21,14 +21,14 @@
 import org.apache.flink.api.common.state.State;
 import org.apache.flink.api.common.state.StateDescriptor;
 import org.apache.flink.api.common.state.ValueStateDescriptor;
-import org.apache.flink.api.common.typeutils.base.StringSerializer;
+import org.apache.flink.api.common.typeutils.base.LongSerializer;
 
 /** Test suite for {@link TtlValueState}. */
 class TtlValueStateTestContext

Review Comment:
   Yes, we need to modify this test, since `StringSerializer` supports serializing null values, which hides the original problem.






Re: [PR] [FLINK-20772][State] Make TtlValueState#update follow the description of interface about null values [flink]

2023-12-14 Thread via GitHub


masteryhx commented on code in PR #23928:
URL: https://github.com/apache/flink/pull/23928#discussion_r1427533090


##
flink-runtime/src/main/java/org/apache/flink/runtime/state/ttl/TtlValueState.java:
##
@@ -48,7 +48,11 @@ public T value() throws IOException {
     @Override
     public void update(T value) throws IOException {
         accessCallback.run();
-        original.update(wrapWithTs(value));
+        if (value == null) {

Review Comment:
   How about just making it: `original.update(value == null ? null : wrapWithTs(value));`?
   Just let the original state process the logic of updating null, which should also follow the contract.
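A minimal sketch of what that suggestion looks like in context (method body assembled from the surrounding diff; not the final patch):

```
@Override
public void update(T value) throws IOException {
    accessCallback.run();
    // Pass null straight through so the wrapped state's own null-handling
    // contract applies; only non-null values get wrapped with the TTL timestamp.
    original.update(value == null ? null : wrapWithTs(value));
}
```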



##
flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/TtlValueStateTestContext.java:
##
@@ -21,14 +21,14 @@
 import org.apache.flink.api.common.state.State;
 import org.apache.flink.api.common.state.StateDescriptor;
 import org.apache.flink.api.common.state.ValueStateDescriptor;
-import org.apache.flink.api.common.typeutils.base.StringSerializer;
+import org.apache.flink.api.common.typeutils.base.LongSerializer;
 
 /** Test suite for {@link TtlValueState}. */
 class TtlValueStateTestContext

Review Comment:
   Maybe I missed something.
   Why do we have to modify this class?






Re: [PR] [hotfix] [tests] Per-class the lifecycle of mini-cluster in `ClientHeartbeatTest`. [flink]

2023-12-14 Thread via GitHub


reswqa commented on PR #23910:
URL: https://github.com/apache/flink/pull/23910#issuecomment-1857249121

   To be honest, I prefer to leave it as it is. I don't see much benefit: this `1s` speed-up doesn't necessarily show up in the CI environment, but it does increase the instability of the test class (especially since this class already shows the characteristics of a flaky test to some extent).





[jira] [Commented] (FLINK-33819) Support setting CompressType in RocksDBStateBackend

2023-12-14 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796992#comment-17796992
 ] 

Hangxiang Yu commented on FLINK-33819:
--

Linked FLINK-20684, as it has been discussed before.

Linked FLINK-11313, which talks about LZ4 compression, which should be more usable than Snappy.

IMO, it should improve performance if we disable the compression of L0/L1 in some scenarios.

[~mayuehappy] Do you have some test results on it?

BTW, if we'd like to introduce such an option, it's better to guarantee compatibility.
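For illustration, per-level compression can be tuned through RocksDB's Java API roughly as follows — a sketch of the idea under discussion, not the Flink configuration surface this ticket proposes to add:

```
import java.util.Arrays;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompressionType;
import org.rocksdb.RocksDB;

public class CompressionPerLevelSketch {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // Disable compression on the lower levels (rewritten most often) and
        // keep Snappy on the deeper levels; the level count here is illustrative.
        try (ColumnFamilyOptions options = new ColumnFamilyOptions()) {
            options.setCompressionPerLevel(
                    Arrays.asList(
                            CompressionType.NO_COMPRESSION,       // L0
                            CompressionType.NO_COMPRESSION,       // L1
                            CompressionType.SNAPPY_COMPRESSION,   // L2
                            CompressionType.SNAPPY_COMPRESSION,   // L3
                            CompressionType.SNAPPY_COMPRESSION,   // L4
                            CompressionType.SNAPPY_COMPRESSION,   // L5
                            CompressionType.SNAPPY_COMPRESSION)); // L6
        }
    }
}
```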

> Support setting CompressType in RocksDBStateBackend
> ---
>
> Key: FLINK-33819
> URL: https://issues.apache.org/jira/browse/FLINK-33819
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Yue Ma
>Priority: Major
> Fix For: 1.19.0
>
> Attachments: image-2023-12-14-11-32-32-968.png, 
> image-2023-12-14-11-35-22-306.png
>
>
> Currently, RocksDBStateBackend does not support setting the compression 
> level, and Snappy is used for compression by default. But we have some 
> scenarios where compression will use a lot of CPU resources. Turning off 
> compression can significantly reduce CPU overhead. So we may need to support 
> a parameter for users to set the CompressType of Rocksdb.
>   !image-2023-12-14-11-35-22-306.png!





Re: [PR] [hotfix] Run push_pr.yml against multiple Flink versions [flink-connector-gcp-pubsub]

2023-12-14 Thread via GitHub


boring-cyborg[bot] commented on PR #21:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/21#issuecomment-1857232953

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





Re: [PR] [FLINK-33588][Flink-Runtime] Fix NullArgumentException of getQuantile method in DescriptiveStatisticsHistogramStatistics [flink]

2023-12-14 Thread via GitHub


zhutong6688 commented on PR #23931:
URL: https://github.com/apache/flink/pull/23931#issuecomment-1857225538

   @flinkbot run azure





Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-12-14 Thread via GitHub


KarmaGYZ commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1427515808


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/DefaultDeclarativeSlotPool.java:
##
@@ -127,7 +155,28 @@ public void increaseResourceRequirementsBy(ResourceCounter increment) {
         }
         totalResourceRequirements = totalResourceRequirements.add(increment);
 
+        if (slotRequestMaxInterval == null) {
+            declareResourceRequirements();
+            return;
+        }
+
+        Preconditions.checkNotNull(componentMainThreadExecutor);
+        if (slotRequestMaxIntervalTimeoutCheckFuture != null) {
+            slotRequestMaxIntervalTimeoutCheckFuture.cancel(true);
+        }
+        slotRequestMaxIntervalTimeoutCheckFuture =
+                componentMainThreadExecutor.schedule(
+                        this::checkSlotRequestMaxIntervalTimeout,
+                        slotRequestMaxInterval.toMilliseconds(),
+                        TimeUnit.MILLISECONDS);
+    }
+
+    private void checkSlotRequestMaxIntervalTimeout() {
+        if (componentMainThreadExecutor == null || slotRequestMaxInterval == null) {

Review Comment:
   `Preconditions.checkNotNull` on the two variables would be good enough from my side.






Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-12-14 Thread via GitHub


KarmaGYZ commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1427515551


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/DefaultDeclarativeSlotPool.java:
##
@@ -127,7 +155,28 @@ public void increaseResourceRequirementsBy(ResourceCounter increment) {
         }
         totalResourceRequirements = totalResourceRequirements.add(increment);
 
+        if (slotRequestMaxInterval == null) {
+            declareResourceRequirements();
+            return;
+        }
+
+        Preconditions.checkNotNull(componentMainThreadExecutor);
+        if (slotRequestMaxIntervalTimeoutCheckFuture != null) {
+            slotRequestMaxIntervalTimeoutCheckFuture.cancel(true);

Review Comment:
   With the current approach, we minimize the RPC calls. The other approach may benefit the end-to-end latency.
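The scheduling code in the diff above is essentially a trailing-edge debounce of the requirement declaration. A minimal generic sketch of that pattern (deliberately independent of Flink's classes) makes the trade-off concrete — fewer downstream calls, at the cost of up to the max interval of added end-to-end latency:

```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

final class Debouncer {
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    // Each call cancels the pending action and reschedules it, so the action
    // runs exactly once, maxIntervalMillis after the last call in a burst.
    synchronized void call(Runnable action, long maxIntervalMillis) {
        if (pending != null) {
            pending.cancel(true);
        }
        pending = executor.schedule(action, maxIntervalMillis, TimeUnit.MILLISECONDS);
    }
}
```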






Re: [PR] [FLINK-32949][core]collect tm port binding with TaskManagerOptions [flink]

2023-12-14 Thread via GitHub


xintongsong commented on code in PR #23870:
URL: https://github.com/apache/flink/pull/23870#discussion_r1427504078


##
flink-core/src/main/java/org/apache/flink/configuration/TaskManagerOptions.java:
##
@@ -173,6 +173,18 @@ public class TaskManagerOptions {
                             + RPC_PORT.key()
                             + "') will be used.");
 
+    /** The default port that CollectSinkFunction$ServerThread is using. */
+    @Documentation.Section({
+        Documentation.Sections.COMMON_HOST_PORT,
+        Documentation.Sections.ALL_TASK_MANAGER
+    })
+    public static final ConfigOption<Integer> COLLECT_PORT =
+            key("taskmanager.collect.port")

Review Comment:
   I'd suggest the key `taskmanager.collect-sink.port`.



##
flink-core/src/main/java/org/apache/flink/configuration/TaskManagerOptions.java:
##
@@ -173,6 +173,18 @@ public class TaskManagerOptions {
                             + RPC_PORT.key()
                             + "') will be used.");
 
+    /** The default port that CollectSinkFunction$ServerThread is using. */
+    @Documentation.Section({
+        Documentation.Sections.COMMON_HOST_PORT,
+        Documentation.Sections.ALL_TASK_MANAGER
+    })
+    public static final ConfigOption<Integer> COLLECT_PORT =
+            key("taskmanager.collect.port")
+                    .intType()
+                    .defaultValue(0)
+                    .withDescription(
+                            "The local port that the CollectSinkFunction$ServerThread binds to. The default is 0, which corresponds to a random port assignment.");

Review Comment:
   I think the current description is kind of developer oriented, rather than 
user oriented. `CollectSinkFunction` is an `@Internal` class. Ideally, this 
should only be used for Flink's client, and users should not be aware of it. 
From a user's perspective, I think they only need to know that this port is for 
the client to retrieve query results from the TaskManager.






[PR] [hotfix] Use flink SNAPSHOT version in weekly.yml to check changes quickly [flink-connector-kafka]

2023-12-14 Thread via GitHub


ruanhang1993 opened a new pull request, #75:
URL: https://github.com/apache/flink-connector-kafka/pull/75

   I noticed that some connectors (e.g. ES, MongoDB, JDBC) are using SNAPSHOT Flink versions for the main branch in `weekly.yml`. I think this will help to catch changes in these versions quickly.
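For illustration only, a hypothetical `weekly.yml` fragment of the kind being described — the exact workflow layout differs per connector; the point is including a SNAPSHOT version in the build matrix:

```
# Hypothetical excerpt: test the main branch against a Flink SNAPSHOT so that
# breaking changes in Flink surface in the weekly build quickly.
jobs:
  compile_and_test:
    strategy:
      matrix:
        flink_version: [1.18.0, 1.19-SNAPSHOT]
```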





Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-12-14 Thread via GitHub


KarmaGYZ commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1427513961


##
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/DeclarativeSlotPoolBridge.java:
##
@@ -200,10 +226,45 @@ protected void onReleaseTaskManager(ResourceCounter previouslyFulfilledRequireme
 
     @VisibleForTesting
     void newSlotsAreAvailable(Collection<? extends PhysicalSlot> newSlots) {
+        if (!globalSlotsViewable) {
+            final Collection<RequestSlotMatchingStrategy.RequestSlotMatch> requestSlotMatches =
+                    requestSlotMatchingStrategy.matchRequestsAndSlots(
+                            newSlots, pendingRequests.values());
+            reserveMatchedFreeSlots(requestSlotMatches);
+            fulfillMatchedSlots(requestSlotMatches);
+            return;
+        }
+
+        receivedSlots.addAll(newSlots);
+        if (receivedSlots.size() < pendingRequests.size()) {
+            return;
+        }
         final Collection<RequestSlotMatchingStrategy.RequestSlotMatch> requestSlotMatches =
                 requestSlotMatchingStrategy.matchRequestsAndSlots(
                         newSlots, pendingRequests.values());
+        if (requestSlotMatches.size() == pendingRequests.size()) {
+            reserveMatchedFreeSlots(requestSlotMatches);
+            fulfillMatchedSlots(requestSlotMatches);
+        }
+        receivedSlots.clear();

Review Comment:
   The `receivedSlots` will be cleared no matter whether we got enough matching 
slots.






Re: [PR] [hotfix] Test connector against Flink 1.16.2, 1.17.1, 1.18.0 instead of Flink SNAPSHOT version [flink-connector-mongodb]

2023-12-14 Thread via GitHub


ruanhang1993 commented on PR #20:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/20#issuecomment-1857208606

   Hi, @Jiabao-Sun.
   I think using all SNAPSHOT versions could help us catch the changes in Flink. Let's keep the current settings. Closing this PR.





Re: [PR] [hotfix] Test connector against Flink 1.16.2, 1.17.1, 1.18.0 instead of Flink SNAPSHOT version [flink-connector-mongodb]

2023-12-14 Thread via GitHub


ruanhang1993 closed pull request #20: [hotfix] Test connector against Flink 
1.16.2, 1.17.1, 1.18.0 instead of Flink SNAPSHOT version
URL: https://github.com/apache/flink-connector-mongodb/pull/20





[PR] Add missing word [flink-web]

2023-12-14 Thread via GitHub


tombentley opened a new pull request, #705:
URL: https://github.com/apache/flink-web/pull/705

   (no comment)





[jira] [Commented] (FLINK-32573) Translate "Custom Serialization for Managed State" page into Chinese

2023-12-14 Thread Jinsui Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796980#comment-17796980
 ] 

Jinsui Chen commented on FLINK-32573:
-

Hi, [~Wencong Liu]. Would you please assign this to me?

> Translate "Custom Serialization for Managed State" page into Chinese
> 
>
> Key: FLINK-32573
> URL: https://issues.apache.org/jira/browse/FLINK-32573
> Project: Flink
>  Issue Type: New Feature
>  Components: chinese-translation, Documentation
>Affects Versions: 1.18.0
>Reporter: Yongping Li
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> The page url is 
> https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/custom_serialization/
> The markdown file is located in 
> docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/custom_serialization.md





Re: [PR] [FLINK-30613] Improve resolving schema compatibility -- Milestone one [flink]

2023-12-14 Thread via GitHub


masteryhx commented on PR #21635:
URL: https://github.com/apache/flink/pull/21635#issuecomment-1857189693

   > Hi @masteryhx , sorry for the late reply!
   > 
   > After reading your PR, I finally figured out why the default 
implementation of old method and new method should call each other. I think we 
could do something tricky to check whether one of the old and new methods has 
been implemented in specific `TypeSerializerSnapshot` before it is called. 
https://stackoverflow.com/a/2315467 this may provide some ideas. WDYT?
   
   Thanks for the advice.
   It's a good idea for avoiding an infinite loop when users implement neither method.
   I will update the PR to support this.
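For reference, a sketch of the override-detection trick from the linked Stack Overflow answer, applied to a generic interface with two mutually delegating default methods (all names here are illustrative, not Flink's final design):

```
import java.lang.reflect.Method;

public class OverrideDetectionSketch {
    interface Snapshot {
        default String resolveOld(String s) { return resolveNew(s); }
        default String resolveNew(String s) { return resolveOld(s); }
    }

    // A legacy implementation that only overrides the old method.
    static class LegacySnapshot implements Snapshot {
        @Override
        public String resolveOld(String s) { return "legacy:" + s; }
    }

    // True if the instance overrides the named method rather than inheriting
    // the interface default — the check that breaks the delegation loop.
    static boolean overrides(Object o, String name, Class<?>... params)
            throws NoSuchMethodException {
        Method m = o.getClass().getMethod(name, params);
        return !m.getDeclaringClass().isInterface();
    }

    public static void main(String[] args) throws Exception {
        Snapshot s = new LegacySnapshot();
        System.out.println(overrides(s, "resolveOld", String.class)); // true
        System.out.println(overrides(s, "resolveNew", String.class)); // false
    }
}
```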





Re: [PR] [hotfix] Test connector against Flink 1.16.2, 1.17.1, 1.18.0 instead of Flink SNAPSHOT version [flink-connector-mongodb]

2023-12-14 Thread via GitHub


ruanhang1993 commented on PR #20:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/20#issuecomment-1857189111

   @Jiabao-Sun Thanks for the quick reply.
   I am not sure whether this PR is correct. It seems like every connector has different settings for the CI to use the SNAPSHOT version.
   
   - The JDBC connector only uses the SNAPSHOT versions for the main branch in push_pr.yml and weekly.yml.
   - The ES connector uses the SNAPSHOT and release versions for the main branch in push_pr.yml, and only uses the SNAPSHOT versions for the main branch in weekly.yml.
   - The Kafka connector uses the SNAPSHOT and release versions for the main branch in push_pr.yml and weekly.yml.
   
   I will look into this more and maybe discuss it on the dev mailing list.
   





Re: [PR] [FLINK-33500][Runtime] Run storing the JobGraph an asynchronous operation [flink]

2023-12-14 Thread via GitHub


zhengzhili333 commented on PR #23880:
URL: https://github.com/apache/flink/pull/23880#issuecomment-1857188917

   Thank you for your suggestion. I have modified that interface and also 
corrected the point you mentioned earlier.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-30613] Improve resolving schema compatibility -- Milestone one [flink]

2023-12-14 Thread via GitHub


masteryhx commented on code in PR #21635:
URL: https://github.com/apache/flink/pull/21635#discussion_r1427497404


##
flink-core/src/main/java/org/apache/flink/api/common/typeutils/TypeSerializerSnapshot.java:
##
@@ -124,11 +124,40 @@ void readSnapshot(int readVersion, DataInputView in, 
ClassLoader userCodeClassLo
  * program's serializer re-serializes the data, thus converting the format 
during the restore
  * operation.
  *
+ * @deprecated This method has been replaced by {@link 
TypeSerializerSnapshot
+ * #resolveSchemaCompatibility(TypeSerializerSnapshot)}.
  * @param newSerializer the new serializer to check.
  * @return the serializer compatibility result.
  */
-TypeSerializerSchemaCompatibility resolveSchemaCompatibility(
-TypeSerializer newSerializer);
+@Deprecated
+default TypeSerializerSchemaCompatibility resolveSchemaCompatibility(
+TypeSerializer newSerializer) {
+return 
newSerializer.snapshotConfiguration().resolveSchemaCompatibility(this);
+}
+
+/**
+ * Checks current serializer's compatibility to read data written by the 
prior serializer.
+ *
+ * When a checkpoint/savepoint is restored, this method checks whether 
the serialization
+ * format of the data in the checkpoint/savepoint is compatible for the 
format of the serializer
+ * used by the program that restores the checkpoint/savepoint. The outcome 
can be that the
+ * serialization format is compatible, that the program's serializer needs 
to reconfigure itself
+ * (meaning to incorporate some information from the 
TypeSerializerSnapshot to be compatible),
+ * that the format is outright incompatible, or that a migration is needed. 
In the latter case, the
+ * TypeSerializerSnapshot produces a serializer to deserialize the data, 
and the restoring
+ * program's serializer re-serializes the data, thus converting the format 
during the restore
+ * operation.
+ *
+ * This method must be implemented to clarify the compatibility. See 
FLIP-263 for more
+ * details.
+ *
+ * @param oldSerializerSnapshot the old serializer snapshot to check.
+ * @return the serializer compatibility result.
+ */
+default TypeSerializerSchemaCompatibility resolveSchemaCompatibility(
+TypeSerializerSnapshot oldSerializerSnapshot) {
+return 
oldSerializerSnapshot.resolveSchemaCompatibility(restoreSerializer());

Review Comment:
   > IIUC, Flink just calls the new 
resolveSchemaCompatibility(TypeSerializerSnapshot) method in the future, right?
   
   Yes, Flink can make sure of this.
   But IIUC, `TypeSerializerSnapshot` and `TypeSerializer` are public APIs, which 
means they may also be used in user code. Then users would still have to modify 
their code, which would be a breaking change.
   
   > BTW, the code comment should guide users or developers to implement the 
new resolveSchemaCompatibility(TypeSerializerSnapshot) method in the future.
   
   Thanks for the advice. That's a good idea.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33449) Support array_contains_seq function

2023-12-14 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796978#comment-17796978
 ] 

xuyang commented on FLINK-33449:


Hi [~leoyy], thanks for your explanation.

Given your description, I think the name ARRAY_CONTAINS_SEQ is OK. From a more 
common perspective, ARRAY_CONTAINS_ALL may be more general. But without this 
background, it would be difficult to distinguish the two functions by name and 
correctly recognize their respective abilities, which is not user-friendly.

I'm neutral on starting a short discussion on the dev mailing list to see if 
others have better ideas. WDYT?

> Support array_contains_seq function
> ---
>
> Key: FLINK-33449
> URL: https://issues.apache.org/jira/browse/FLINK-33449
> Project: Flink
>  Issue Type: New Feature
>Reporter: leo cheng
>Priority: Minor
>  Labels: pull-request-available
>
> support function array_contains_seq like trino contains_sequence
> trino: 
> https://trino.io/docs/current/functions/array.html?highlight=contains_sequence#contains_sequence



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-32881][checkpoint] Support triggering savepoint in detach mode for CLI and dumping all pending savepoint ids by rest api [flink]

2023-12-14 Thread via GitHub


xiangforever2014 commented on code in PR #23253:
URL: https://github.com/apache/flink/pull/23253#discussion_r1427489342


##
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java:
##
@@ -164,6 +164,14 @@ public class CliFrontendParser {
 false,
 "Defines whether to trigger this checkpoint as a full 
one.");
 
+static final Option SAVEPOINT_DETACH_OPTION =

Review Comment:
   1. Good advice. I think using the 'detached' option here makes it easier for 
users to understand; thanks for the comment.
   2. Yes, I have noticed this point too; maybe we could finish this ticket 
first ~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32028][connectors/elasticsearch] Allow customising bulk failure handling [flink-connector-elasticsearch]

2023-12-14 Thread via GitHub


reswqa commented on code in PR #83:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/83#discussion_r1427487497


##
flink-connector-elasticsearch-base/src/test/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchSinkBuilderBaseTest.java:
##
@@ -99,7 +119,120 @@ void testThrowIfSetInvalidTimeouts() {
 .isInstanceOf(IllegalStateException.class);
 }
 
+@Test
+void testOverrideFailureHandler() {
+final FailureHandler failureHandler = (failure) -> {};
+final ElasticsearchSink sink =
+
createMinimalBuilder().setFailureHandler(failureHandler).build();
+
+final InitContext sinkInitContext = new MockInitContext();
+final BulkResponseInspector bulkResponseInspector =
+
sink.getBulkResponseInspectorFactory().apply(sinkInitContext::metricGroup);
+assertThat(bulkResponseInspector)
+.isInstanceOf(DefaultBulkResponseInspector.class)
+.extracting(
+(inspector) -> ((DefaultBulkResponseInspector) 
inspector).failureHandler)
+.isEqualTo(failureHandler);
+}
+
+@Test
+void testOverrideBulkResponseInspectorFactory() {
+final AtomicBoolean called = new AtomicBoolean();
+final BulkResponseInspectorFactory bulkResponseInspectorFactory =
+initContext -> {
+final MetricGroup metricGroup = initContext.metricGroup();
+metricGroup.addGroup("bulk").addGroup("result", 
"failed").counter("actions");
+called.set(true);
+return (BulkResponseInspector) (request, response) -> {};
+};
+final ElasticsearchSink sink =
+createMinimalBuilder()
+
.setBulkResponseInspectorFactory(bulkResponseInspectorFactory)
+.build();
+
+final InitContext sinkInitContext = new MockInitContext();
+
+assertThatCode(() -> 
sink.createWriter(sinkInitContext)).doesNotThrowAnyException();
+assertThat(called).isTrue();
+}
+
 abstract B createEmptyBuilder();
 
 abstract B createMinimalBuilder();
+
+private static class DummyMailboxExecutor implements MailboxExecutor {
+private DummyMailboxExecutor() {}
+
+public void execute(
+ThrowingRunnable command,
+String descriptionFormat,
+Object... descriptionArgs) {}
+
+public void yield() throws InterruptedException, FlinkRuntimeException 
{}
+
+public boolean tryYield() throws FlinkRuntimeException {
+return false;
+}
+}
+
+private static class MockInitContext
+implements Sink.InitContext, 
SerializationSchema.InitializationContext {
+
+public UserCodeClassLoader getUserCodeClassLoader() {
+return SimpleUserCodeClassLoader.create(
+ElasticsearchSinkBuilderBaseTest.class.getClassLoader());
+}
+
+public MailboxExecutor getMailboxExecutor() {
+return new ElasticsearchSinkBuilderBaseTest.DummyMailboxExecutor();
+}
+
+public ProcessingTimeService getProcessingTimeService() {
+return new TestProcessingTimeService();
+}
+
+public int getSubtaskId() {
+return 0;
+}
+
+public int getNumberOfParallelSubtasks() {
+return 0;
+}
+
+public int getAttemptNumber() {
+return 0;
+}
+
+public SinkWriterMetricGroup metricGroup() {

Review Comment:
   We should register `IOMetricGroup` also, it will throw NPE otherwise.
   
   ```java
   public SinkWriterMetricGroup metricGroup() {
   final OperatorIOMetricGroup operatorIOMetricGroup =
   
UnregisteredMetricGroups.createUnregisteredOperatorMetricGroup()
   .getIOMetricGroup();
   return InternalSinkWriterMetricGroup.wrap(
   new TestingSinkWriterMetricGroup.Builder()
   .setParentMetricGroup(new 
UnregisteredMetricsGroup())
   .setIoMetricGroupSupplier(() -> 
operatorIOMetricGroup)
   .build());
   }
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32881][checkpoint] Support triggering savepoint in detach mode for CLI and dumping all pending savepoint ids by rest api [flink]

2023-12-14 Thread via GitHub


xiangforever2014 commented on code in PR #23253:
URL: https://github.com/apache/flink/pull/23253#discussion_r1427486664


##
flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java:
##
@@ -188,6 +207,21 @@ CompletableFuture triggerSavepoint(
  */
 CompletableFuture triggerCheckpoint(JobID jobId, CheckpointType 
checkpointType);
 
+/**
+ * Triggers a detached savepoint for the job identified by the job id. The 
savepoint will be
+ * written to the given savepoint directory, or {@link
+ * 
org.apache.flink.configuration.CheckpointingOptions#SAVEPOINT_DIRECTORY} if it 
is null.
+ * Notice that: detach savepoint will return with a savepoint trigger id 
instead of the path
+ * future, that means the client will return very quickly.
+ *
+ * @param jobId job id
+ * @param savepointDirectory directory the savepoint should be written to
+ * @param formatType a binary format of the savepoint
+ * @return The savepoint trigger id
+ */
+CompletableFuture triggerDetachSavepoint(

Review Comment:
   Thanks for your careful comments. I have renamed it to "detached" and checked 
all the other places~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33795] Add a new config to forbid autoscale execution in the configured excluded periods [flink-kubernetes-operator]

2023-12-14 Thread via GitHub


1996fanrui commented on code in PR #728:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/728#discussion_r1427486350


##
flink-autoscaler/pom.xml:
##
@@ -66,6 +66,12 @@ under the License.
 jackson-databind
 
 
+
+org.quartz-scheduler
+quartz
+2.3.2

Review Comment:
   Would you mind extracting a property for this version, such as 
`quartz.version` (i.e. `<quartz.version>2.3.2</quartz.version>` under 
`<properties>`, referenced as `${quartz.version}` in the dependency)?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32881][checkpoint] Support triggering savepoint in detach mode for CLI and dumping all pending savepoint ids by rest api [flink]

2023-12-14 Thread via GitHub


xiangforever2014 commented on code in PR #23253:
URL: https://github.com/apache/flink/pull/23253#discussion_r1427483790


##
docs/content.zh/docs/deployment/cli.md:
##
@@ -125,6 +125,43 @@ Lastly, you can optionally provide what should be the 
[binary format]({{< ref "d
 
 The path to the savepoint can be used later on to [restart the Flink 
job](#starting-a-job-from-a-savepoint).
 
+If the state of the job is quite big, the client will get a timeout exception 
since it has to wait for the savepoint to finish.
+```
+Triggering savepoint for job bec5244e09634ad71a80785937a9732d.
+Waiting for response...
+
+--
+The program finished with the following exception:
+
+org.apache.flink.util.FlinkException: Triggering a savepoint for the job 
bec5244e09634ad71a80785937a9732d failed.
+at 
org.apache.flink.client.cli.CliFrontend.triggerSavepoint(CliFrontend.java:828)
+at 
org.apache.flink.client.cli.CliFrontend.lambda$savepoint$8(CliFrontend.java:794)
+at 
org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:1078)
+at 
org.apache.flink.client.cli.CliFrontend.savepoint(CliFrontend.java:779)
+at 
org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1150)
+at 
org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1226)
+at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
+at 
org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1226)
+at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1194)
+Caused by: java.util.concurrent.TimeoutException
+at 
java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
+at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
+at 
org.apache.flink.client.cli.CliFrontend.triggerSavepoint(CliFrontend.java:822)
+... 8 more
+```
+In this case, we could use the "-dsp" option to trigger a detached savepoint; the 
client will return as soon as the trigger id is available.
+```bash
+$ ./bin/flink savepoint \
+  $JOB_ID \
+  /tmp/flink-savepoints \
+  -dsp
+```
+```
+Triggering savepoint in detached mode for job bec5244e09634ad71a80785937a9732d.
+Successfully trigger manual savepoint, triggerId: 
2505bbd12c5b58fd997d0f193db44b97
+```
+We can get the status of the detached savepoint via the REST API 
"/jobs/:jobid/savepoints/:triggerid".

Review Comment:
   done~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33819) Support setting CompressType in RocksDBStateBackend

2023-12-14 Thread xiangyu feng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796977#comment-17796977
 ] 

xiangyu feng commented on FLINK-33819:
--

+1 for this. This will save a lot of CPU for jobs with a small state size but 
a high state access frequency.
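
Until such an option exists, this can already be approximated with a custom 
options factory. A rough sketch (using the RocksDBOptionsFactory interface from 
flink-statebackend-rocksdb and RocksDB's Java API; treat it as illustrative, 
not as the proposed change):

{code:java}
import java.util.Collection;

import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;

import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompressionType;
import org.rocksdb.DBOptions;

public class NoCompressionOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(
            DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        // Compression is a column-family setting, nothing to change here.
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnFamilyOptions(
            ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        // Replaces the default SNAPPY_COMPRESSION, trading larger files on
        // disk for lower CPU usage.
        return currentOptions.setCompressionType(CompressionType.NO_COMPRESSION);
    }
}
{code}

It can then be registered via {{EmbeddedRocksDBStateBackend#setRocksDBOptions}}.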

> Support setting CompressType in RocksDBStateBackend
> ---
>
> Key: FLINK-33819
> URL: https://issues.apache.org/jira/browse/FLINK-33819
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Yue Ma
>Priority: Major
> Fix For: 1.19.0
>
> Attachments: image-2023-12-14-11-32-32-968.png, 
> image-2023-12-14-11-35-22-306.png
>
>
> Currently, RocksDBStateBackend does not support setting the compression 
> level, and Snappy is used for compression by default. But we have some 
> scenarios where compression will use a lot of CPU resources. Turning off 
> compression can significantly reduce CPU overhead. So we may need to support 
> a parameter for users to set the CompressType of Rocksdb.
>   !image-2023-12-14-11-35-22-306.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-30613] Improve resolving schema compatibility -- Milestone one [flink]

2023-12-14 Thread via GitHub


1996fanrui commented on code in PR #21635:
URL: https://github.com/apache/flink/pull/21635#discussion_r1427479694


##
flink-core/src/main/java/org/apache/flink/api/common/typeutils/TypeSerializerSnapshot.java:
##
@@ -124,11 +124,40 @@ void readSnapshot(int readVersion, DataInputView in, 
ClassLoader userCodeClassLo
  * program's serializer re-serializes the data, thus converting the format 
during the restore
  * operation.
  *
+ * @deprecated This method has been replaced by {@link 
TypeSerializerSnapshot
+ * #resolveSchemaCompatibility(TypeSerializerSnapshot)}.
  * @param newSerializer the new serializer to check.
  * @return the serializer compatibility result.
  */
-TypeSerializerSchemaCompatibility resolveSchemaCompatibility(
-TypeSerializer newSerializer);
+@Deprecated
+default TypeSerializerSchemaCompatibility resolveSchemaCompatibility(
+TypeSerializer newSerializer) {
+return 
newSerializer.snapshotConfiguration().resolveSchemaCompatibility(this);
+}
+
+/**
+ * Checks current serializer's compatibility to read data written by the 
prior serializer.
+ *
+ * When a checkpoint/savepoint is restored, this method checks whether 
the serialization
+ * format of the data in the checkpoint/savepoint is compatible for the 
format of the serializer
+ * used by the program that restores the checkpoint/savepoint. The outcome 
can be that the
+ * serialization format is compatible, that the program's serializer needs 
to reconfigure itself
+ * (meaning to incorporate some information from the 
TypeSerializerSnapshot to be compatible),
+ * that the format is outright incompatible, or that a migration is needed. 
In the latter case, the
+ * TypeSerializerSnapshot produces a serializer to deserialize the data, 
and the restoring
+ * program's serializer re-serializes the data, thus converting the format 
during the restore
+ * operation.
+ *
+ * This method must be implemented to clarify the compatibility. See 
FLIP-263 for more
+ * details.
+ *
+ * @param oldSerializerSnapshot the old serializer snapshot to check.
+ * @return the serializer compatibility result.
+ */
+default TypeSerializerSchemaCompatibility resolveSchemaCompatibility(
+TypeSerializerSnapshot oldSerializerSnapshot) {
+return 
oldSerializerSnapshot.resolveSchemaCompatibility(restoreSerializer());

Review Comment:
   After thinking more about it, I'm wondering whether we need a compatibility 
implementation for the old `resolveSchemaCompatibility(TypeSerializer)` method at all.
   
   IIUC, Flink just calls the new 
`resolveSchemaCompatibility(TypeSerializerSnapshot)` method in the future, 
right?
   
   If so, we can add a default implementation for the old 
`resolveSchemaCompatibility(TypeSerializer)` method that throws 
`UnsupportedOperationException`. We need a default implementation so that 
users or developers don't have to implement the old method in the future.
   
   For the new `resolveSchemaCompatibility(TypeSerializerSnapshot)` method, your 
current implementation is fine. All old implementation classes can be supported 
if `resolveSchemaCompatibility(TypeSerializerSnapshot)` calls the old one.
   
   BTW, the code comment should guide users or developers to implement the new 
`resolveSchemaCompatibility(TypeSerializerSnapshot)` method in the future.
   
   WDYT?
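   
   A rough sketch of what I mean (not the final API; generics as in the diff 
above):
   
   ```java
   @Deprecated
   default TypeSerializerSchemaCompatibility<T> resolveSchemaCompatibility(
           TypeSerializer<T> newSerializer) {
       // Fail fast: the old method should no longer be called or implemented,
       // and throwing here breaks the mutual-default infinite loop.
       throw new UnsupportedOperationException(
               "Implement resolveSchemaCompatibility(TypeSerializerSnapshot) instead.");
   }
   
   default TypeSerializerSchemaCompatibility<T> resolveSchemaCompatibility(
           TypeSerializerSnapshot<T> oldSerializerSnapshot) {
       // Backwards compatible: old snapshot classes that only override the
       // deprecated method keep working through this delegation.
       return oldSerializerSnapshot.resolveSchemaCompatibility(restoreSerializer());
   }
   ```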



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33764] Track Heap usage and GC pressure to avoid unnecessary scaling [flink-kubernetes-operator]

2023-12-14 Thread via GitHub


1996fanrui commented on code in PR #726:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/726#discussion_r1427474826


##
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/config/AutoScalerOptions.java:
##
@@ -219,6 +219,22 @@ private static ConfigOptions.OptionBuilder 
autoScalerConfig(String key) {
 .withDescription(
 "Processing rate increase threshold for detecting 
ineffective scaling threshold. 0.1 means if we do not accomplish at least 10% 
of the desired capacity increase with scaling, the action is marked 
ineffective.");
 
+public static final ConfigOption GC_PRESSURE_THRESHOLD =
+autoScalerConfig("memory.gc-pressure.threshold")
+.doubleType()
+.defaultValue(0.3)
+
.withFallbackKeys(oldOperatorConfigKey("memory.gc-pressure.threshold"))
+.withDescription(
+"Max allowed GC pressure (percentage spent garbage 
collecting) during scaling operations. Autoscaling will be paused if the GC 
pressure exceeds this limit.");
+
+public static final ConfigOption HEAP_USAGE_THRESHOLD =
+autoScalerConfig("memory.heap-usage.threshold")
+.doubleType()
+.defaultValue(1.)
+
.withFallbackKeys(oldOperatorConfigKey("memory.heap-usage.threshold"))

Review Comment:
   Thanks for the clarification!
   
   Let's respect this rule in the future~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-32862) Support INIT operation type to be compatible with DTS on Alibaba Cloud

2023-12-14 Thread Hang Ruan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hang Ruan closed FLINK-32862.
-
Resolution: Won't Fix

> Support INIT operation type to be compatible with DTS on Alibaba Cloud
> --
>
> Key: FLINK-32862
> URL: https://issues.apache.org/jira/browse/FLINK-32862
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Hang Ruan
>Assignee: Hang Ruan
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>
> The operation type of canal json messages from DTS on Alibaba Cloud may 
> contain a new type `INIT`. We cannot handle these messages.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33809] HiveCatalog load hivemetastore-site.xml [flink]

2023-12-14 Thread via GitHub


cuibo01 commented on PR #23924:
URL: https://github.com/apache/flink/pull/23924#issuecomment-1857131919

Hi @luoyuxia, can you review this PR? Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33756) Missing record with CUMULATE/HOP windows using an optimization

2023-12-14 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796949#comment-17796949
 ] 

Jim Hughes commented on FLINK-33756:


Hi [~jeyhunkarimov], I am not actively working on it; I'll assign it to you for 
now.

I think there are two (likely related) things going on:  First, some watermark 
is getting miscomputed, and second, the hashing happening in the exchange step 
is allowing things to happen in either order.

From the initial time that I looked into this, I also ran across 
`TimeWindow.getWindowStartWithOffset`. I noticed that this method is being 
called with an offset of 0L in `TimeWindowUtil.getNextTriggerWatermark`. I 
cannot be 100% sure that's the problem, but that's the next place I'd be 
checking if I were to continue looking!
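
For reference, a paraphrase of the window-start computation in question 
(illustrative, not copied verbatim from the current code base):

{code:java}
// Paraphrased sketch of TimeWindow#getWindowStartWithOffset. Calling it with
// offset = 0L, as described above, would ignore any window offset the user
// configured on the HOP/CUMULATE window when aligning the trigger watermark.
public static long getWindowStartWithOffset(long timestamp, long offset, long windowSize) {
    final long remainder = (timestamp - offset) % windowSize;
    // Handle both positive and negative remainders relative to the offset.
    if (remainder < 0) {
        return timestamp - (remainder + windowSize);
    } else {
        return timestamp - remainder;
    }
}
{code}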

 

> Missing record with CUMULATE/HOP windows using an optimization
> --
>
> Key: FLINK-33756
> URL: https://issues.apache.org/jira/browse/FLINK-33756
> Project: Flink
>  Issue Type: Bug
>Reporter: Jim Hughes
>Priority: Major
>
> I have seen an optimization cause a window fail to emit a record.
> With the optimization `TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED` set to 
> true, 
> the configuration AggregatePhaseStrategy.TWO_PHASE set, using a HOP or 
> CUMULATE window with an offset, a record can be sent which causes one of the 
> multiple active windows to fail to emit a record.
> The linked code 
> (https://github.com/jnh5y/flink/commit/ec90aa501d86f95559f8b22b0610e9fb786f05d4)
>  modifies the `WindowAggregateJsonITCase` to demonstrate the case.  
>  
> The test `testDistinctSplitDisabled` shows the expected behavior.  The test 
> `testDistinctSplitEnabled` tests the above configurations and shows that one 
> record is missing from the output.  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33756) Missing record with CUMULATE/HOP windows using an optimization

2023-12-14 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796939#comment-17796939
 ] 

Jeyhun Karimov commented on FLINK-33756:


[~jhughes] Yes, it is just printing/logging in various places of the codebase.
If you are not working on this issue (or if it is not something urgent), you can 
assign it to me; I will try to come up with a deterministic solution to avoid the 
flakiness. 

> Missing record with CUMULATE/HOP windows using an optimization
> --
>
> Key: FLINK-33756
> URL: https://issues.apache.org/jira/browse/FLINK-33756
> Project: Flink
>  Issue Type: Bug
>Reporter: Jim Hughes
>Priority: Major
>
> I have seen an optimization cause a window fail to emit a record.
> With the optimization `TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED` set to 
> true, 
> the configuration AggregatePhaseStrategy.TWO_PHASE set, using a HOP or 
> CUMULATE window with an offset, a record can be sent which causes one of the 
> multiple active windows to fail to emit a record.
> The linked code 
> (https://github.com/jnh5y/flink/commit/ec90aa501d86f95559f8b22b0610e9fb786f05d4)
>  modifies the `WindowAggregateJsonITCase` to demonstrate the case.  
>  
> The test `testDistinctSplitDisabled` shows the expected behavior.  The test 
> `testDistinctSplitEnabled` tests the above configurations and shows that one 
> record is missing from the output.  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33836) Propose a pull request for website updates

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33836:

Description: 
The final step of building the candidate is to propose a website pull request 
containing the following changes:
 * update docs/data/flink.yml

 ** Add a new major version or update minor version as required
 * update docs/data/release_archive.yml
 * update version references in quickstarts ({{{}q/{}}} directory) as required 
(outdated?)
 * add a blog post announcing the release in {{docs/content/posts}}

(!) Don’t merge the PRs before finalizing the release.

 

h3. Expectations
 * Website pull request proposed to list the 
[release|http://flink.apache.org/downloads.html]

  was:
The final step of building the candidate is to propose a website pull request 
containing the following changes:
 # update 
[apache/flink-web:_config.yml|https://github.com/apache/flink-web/blob/asf-site/_config.yml]
 ## update {{FLINK_VERSION_STABLE}} and {{FLINK_VERSION_STABLE_SHORT}} as 
required
 ## update version references in quickstarts ({{{}q/{}}} directory) as required
 ## (major only) add a new entry to {{flink_releases}} for the release binaries 
and sources
 ## (minor only) update the entry for the previous release in the series in 
{{flink_releases}}
 ### Please pay notice to the ids assigned to the download entries. They should 
be unique and reflect their corresponding version number.
 ## add a new entry to {{release_archive.flink}}
 # add a blog post announcing the release in _posts
 # add a organized release notes page under docs/content/release-notes and 
docs/content.zh/release-notes (like 
[https://nightlies.apache.org/flink/flink-docs-release-1.15/release-notes/flink-1.15/]).
 The page is based on the non-empty release notes collected from the issues, 
and only the issues that affect existing users should be included (e.g., 
instead of new functionality). It should be in a separate PR since it would be 
merged to the flink project.

(!) Don’t merge the PRs before finalizing the release.

 

h3. Expectations
 * Website pull request proposed to list the 
[release|http://flink.apache.org/downloads.html]
 * (major only) Check {{docs/config.toml}} to ensure that
 ** the version constants refer to the new version
 ** the {{baseurl}} does not point to {{flink-docs-master}}  but 
{{flink-docs-release-X.Y}} instead


> Propose a pull request for website updates
> --
>
> Key: FLINK-33836
> URL: https://issues.apache.org/jira/browse/FLINK-33836
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.18.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>
> The final step of building the candidate is to propose a website pull request 
> containing the following changes:
>  * update docs/data/flink.yml
>  ** Add a new major version or update minor version as required
>  * update docs/data/release_archive.yml
>  * update version references in quickstarts ({{{}q/{}}} directory) as 
> required (outdated?)
>  * add a blog post announcing the release in {{docs/content/posts}}
> (!) Don’t merge the PRs before finalizing the release.
>  
> 
> h3. Expectations
>  * Website pull request proposed to list the 
> [release|http://flink.apache.org/downloads.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33835) Stage source and binary releases on dist.apache.org

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-33835.
-
Resolution: Fixed

> Stage source and binary releases on dist.apache.org
> ---
>
> Key: FLINK-33835
> URL: https://issues.apache.org/jira/browse/FLINK-33835
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> Copy the source release to the dev repository of dist.apache.org:
> # If you have not already, check out the Flink section of the dev repository 
> on dist.apache.org via Subversion. In a fresh directory:
> {code:bash}
> $ svn checkout https://dist.apache.org/repos/dist/dev/flink --depth=immediates
> {code}
> # Make a directory for the new release and copy all the artifacts (Flink 
> source/binary distributions, hashes, GPG signatures and the python 
> subdirectory) into that newly created directory:
> {code:bash}
> $ mkdir flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
> $ mv /tools/releasing/release/* 
> flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
> # Add and commit all the files.
> {code:bash}
> $ cd flink
> flink $ svn add flink-${RELEASE_VERSION}-rc${RC_NUM}
> flink $ svn commit -m "Add flink-${RELEASE_VERSION}-rc${RC_NUM}"
> {code}
> # Verify that files are present under 
> [https://dist.apache.org/repos/dist/dev/flink|https://dist.apache.org/repos/dist/dev/flink].
> # Push the release tag if not done already (the following command assumes to 
> be called from within the apache/flink checkout):
> {code:bash}
> $ git push  refs/tags/release-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
>  
> 
> h3. Expectations
>  * Maven artifacts deployed to the staging repository of 
> [repository.apache.org|https://repository.apache.org/content/repositories/]
>  * Source distribution deployed to the dev repository of 
> [dist.apache.org|https://dist.apache.org/repos/dist/dev/flink/]
>  * Check hashes (e.g. shasum -c *.sha512)
>  * Check signatures (e.g. {{{}gpg --verify 
> flink-1.2.3-source-release.tar.gz.asc flink-1.2.3-source-release.tar.gz{}}})
>  * {{grep}} for legal headers in each file.
>  * If time allows check the NOTICE files of the modules whose dependencies 
> have been changed in this release in advance, since the license issues from 
> time to time pop up during voting. See [Verifying a Flink 
> Release|https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release]
>  "Checking License" section.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33834) Build and stage Java and Python artifacts

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-33834.
-
Resolution: Fixed

> Build and stage Java and Python artifacts
> -
>
> Key: FLINK-33834
> URL: https://issues.apache.org/jira/browse/FLINK-33834
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> # Create a local release branch ((!) this step can not be skipped for minor 
> releases):
> {code:bash}
> $ cd ./tools
> tools/ $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION NEW_VERSION=$RELEASE_VERSION 
> RELEASE_CANDIDATE=$RC_NUM releasing/create_release_branch.sh
> {code}
>  # Tag the release commit:
> {code:bash}
> $ git tag -s ${TAG} -m "${TAG}"
> {code}
>  # We now need to do several things:
>  ## Create the source release archive
>  ## Deploy jar artefacts to the [Apache Nexus 
> Repository|https://repository.apache.org/], which is the staging area for 
> deploying the jars to Maven Central
>  ## Build PyFlink wheel packages
> You might want to create a directory on your local machine for collecting the 
> various source and binary releases before uploading them. Creating the binary 
> releases is a lengthy process but you can do this on another machine (for 
> example, in the "cloud"). When doing this, you can skip signing the release 
> files on the remote machine, download them to your local machine and sign 
> them there.
>  # Build the source release:
> {code:bash}
> tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_source_release.sh
> {code}
>  # Stage the maven artifacts:
> {code:bash}
> tools $ releasing/deploy_staging_jars.sh
> {code}
> Review all staged artifacts ([https://repository.apache.org/]). They should 
> contain all relevant parts for each module, including pom.xml, jar, test jar, 
> source, test source, javadoc, etc. Carefully review any new artifacts.
>  # Close the staging repository on Apache Nexus. When prompted for a 
> description, enter “Apache Flink, version X, release candidate Y”.
> Then, you need to build the PyFlink wheel packages (since 1.11):
>  # Set up an azure pipeline in your own Azure account. You can refer to 
> [Azure 
> Pipelines|https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository]
>  for more details on how to set up azure pipeline for a fork of the Flink 
> repository. Note that a google cloud mirror in Europe is used for downloading 
> maven artifacts, therefore it is recommended to set your [Azure organization 
> region|https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-organization-location]
>  to Europe to speed up the downloads.
>  # Push the release candidate branch to your forked personal Flink 
> repository, e.g.
> {code:bash}
> tools $ git push  
> refs/heads/release-${RELEASE_VERSION}-rc${RC_NUM}:release-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
>  # Trigger the Azure Pipelines manually to build the PyFlink wheel packages
>  ## Go to your Azure Pipelines Flink project → Pipelines
>  ## Click the "New pipeline" button on the top right
>  ## Select "GitHub" → your GitHub Flink repository → "Existing Azure 
> Pipelines YAML file"
>  ## Select your branch → Set path to "/azure-pipelines.yaml" → click on 
> "Continue" → click on "Variables"
>  ## Then click "New Variable" button, fill the name with "MODE", and the 
> value with "release". Click "OK" to set the variable and the "Save" button to 
> save the variables, then back on the "Review your pipeline" screen click 
> "Run" to trigger the build.
>  ## You should now see a build where only the "CI build (release)" is running
>  # Download the PyFlink wheel packages from the build result page after the 
> jobs of "build_wheels mac" and "build_wheels linux" have finished.
>  ## Download the PyFlink wheel packages
>  ### Open the build result page of the pipeline
>  ### Go to the {{Artifacts}} page (build_wheels linux -> 1 artifact)
>  ### Click {{wheel_Darwin_build_wheels mac}} and {{wheel_Linux_build_wheels 
> linux}} separately to download the zip files
>  ## Unzip these two zip files
> {code:bash}
> $ cd /path/to/downloaded_wheel_packages
> $ unzip wheel_Linux_build_wheels\ linux.zip
> $ unzip wheel_Darwin_build_wheels\ mac.zip{code}
>  ## Create directory {{./dist}} under the directory of {{{}flink-python{}}}:
> {code:bash}
> $ cd 
> $ mkdir flink-python/dist{code}
>  ## Move the unzipped wheel packages to the directory of 
> {{{}flink-python/dist{}}}:
> {code:java}
> $ mv /path/to/wheel_Darwin_build_wheels\ mac/* flink-python/dist/
> $ mv /path/to/wheel_Linux_build_wheels\ linux/* flink-python/dist/
> $ cd tools{code}
> Finally, we create the binary convenience release files:
> {code:bash}
> tools $ 

Re: [PR] [hotfix] [tests] Per-class the lifecycle of mini-cluster in `ClientHeartbeatTest`. [flink]

2023-12-14 Thread via GitHub


TestBoost commented on PR #23910:
URL: https://github.com/apache/flink/pull/23910#issuecomment-1856543512

   Hi,
   
   So to clarify, would the preferred change be to start a cluster 
per test class and share it between test methods? Looking at the test class, 
perhaps this could be accomplished by sharing the `JobClient` object that each 
test method creates (though perhaps `testJobRunningIfDisableClientHeartbeat` 
cannot easily share it due to a difference in configuration). Or perhaps you 
have another kind of change in mind?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33666) MergeTableLikeUtil uses different constraint name than Schema

2023-12-14 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796882#comment-17796882
 ] 

Jim Hughes commented on FLINK-33666:


[~twalthr] Yes, the issue has been resolved.

The issue was a change in expectations around PK constraint naming. From offline 
conversations, you assured me that the change is OK and will not impact folks 
using compiled plans. I think we are good.

An example stack trace is:

```
Dec 07 13:49:11 13:49:11.037 [ERROR] 
org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest.testRestore(TableTestProgram,
 ExecNodeMetadata)[1] Time elapsed: 0.17 s <<< ERROR! 
Dec 07 13:49:11 org.apache.flink.table.api.TableException: Cannot load Plan 
from file 
'/__w/1/s/flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-group-aggregate_1/group-aggregate-simple/plan/group-aggregate-simple.json'.
 
Dec 07 13:49:11 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.loadPlan(TableEnvironmentImpl.java:760)
 
Dec 07 13:49:11 at 
org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:279)


Dec 07 13:49:11 Caused by: 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
 The schema of table 'default_catalog.default_database.sink_t' from the 
persisted plan does not match the schema loaded from the catalog: '( 
Dec 07 13:49:11 `b` BIGINT NOT NULL, 
Dec 07 13:49:11 `cnt` BIGINT, 
Dec 07 13:49:11 `avg_a` DOUBLE, 
Dec 07 13:49:11 `min_c` STRING, 
Dec 07 13:49:11 CONSTRAINT `PK_129` PRIMARY KEY (`b`) NOT ENFORCED 
Dec 07 13:49:11 )' != '( 
Dec 07 13:49:11 `b` BIGINT NOT NULL, 
Dec 07 13:49:11 `cnt` BIGINT, 
Dec 07 13:49:11 `avg_a` DOUBLE, 
Dec 07 13:49:11 `min_c` STRING, 
Dec 07 13:49:11 CONSTRAINT `PK_b` PRIMARY KEY (`b`) NOT ENFORCED 
Dec 07 13:49:11 )'. Make sure the table schema in the catalog is still 
identical. (through reference chain: 
org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[5]->org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink["dynamicTableSink"]->org.apache.flink.table.planner.plan.nodes.exec.spec.DynamicTableSinkSpec["table"])
 
...
```
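
For illustration, the two naming schemes behind the `PK_129` vs `PK_b` mismatch 
above (paraphrased, not the exact planner code):

```java
// Schema-style naming includes the column names, so a primary key on column
// `b` becomes "PK_b". MergeTableLikeUtil previously used a hash-based suffix,
// which is where names like "PK_129" in old persisted plans came from.
static String schemaStylePrimaryKeyName(java.util.List<String> pkColumns) {
    return "PK_" + String.join("_", pkColumns); // ["b"] -> "PK_b"
}
```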

> MergeTableLikeUtil uses different constraint name than Schema
> -
>
> Key: FLINK-33666
> URL: https://issues.apache.org/jira/browse/FLINK-33666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Timo Walther
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> {{MergeTableLikeUtil}} uses a different algorithm to name constraints than 
> {{Schema}}. 
> {{Schema}} includes the column names.
> {{MergeTableLikeUtil}} uses a hashCode which means it might depend on JVM 
> specifics.
> For consistency we should use the same algorithm. I propose to use 
> {{Schema}}'s logic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33666) MergeTableLikeUtil uses different constraint name than Schema

2023-12-14 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796862#comment-17796862
 ] 

Timo Walther commented on FLINK-33666:
--

[~jhughes] was this issue resolved? Was it just a problem of the test base? 
What was the complete exception stack trace?

> MergeTableLikeUtil uses different constraint name than Schema
> -
>
> Key: FLINK-33666
> URL: https://issues.apache.org/jira/browse/FLINK-33666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Timo Walther
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> {{MergeTableLikeUtil}} uses a different algorithm to name constraints than 
> {{Schema}}. 
> {{Schema}} includes the column names.
> {{MergeTableLikeUtil}} uses a hashCode which means it might depend on JVM 
> specifics.
> For consistency we should use the same algorithm. I propose to use 
> {{Schema}}'s logic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27681) Improve the availability of Flink when the RocksDB file is corrupted.

2023-12-14 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796843#comment-17796843
 ] 

Piotr Nowojski commented on FLINK-27681:


I could help with reviewing and, generally speaking, support this effort, including 
discussions around the FLIP (this change would have to modify the `FileSystem` API 
- the DFS has to support checksum verification in some way). The general plan should 
be:
 # Someone would have to investigate what's possible without modifying RocksDB 
that works at least with S3. Some tests against S3 and raw benchmarks without 
using Flink would be needed - for example, a simple standalone Java app uploading 
a file while verifying the checksum at the same time (see the sketch after this 
list). Definitely people will be asking about other DFSs, so it would be nice to 
do some research on Azure/GCP as well (might be just looking into the docs 
without testing).
 # Create a FLIP. Ideally already with the zero-overhead solution. But if RocksDB 
turns out to be problematic, something that is designed with zero overhead in 
mind for the long term, plus some intermediate temporary solution, would 
probably also be fine.
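
A minimal standalone sketch of such a benchmark app (illustrative only - the 
local FileOutputStream below stands in for the real DFS upload):

{code:java}
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.util.Base64;

public class ChecksumUploadBenchmark {
    public static void main(String[] args) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        long start = System.nanoTime();
        // Wrapping the source in a DigestInputStream computes the checksum
        // while the bytes are being copied, i.e. without a second read pass.
        try (InputStream in = new DigestInputStream(
                        new BufferedInputStream(new FileInputStream(args[0])), digest);
                OutputStream out =
                        new BufferedOutputStream(new FileOutputStream(args[1]))) {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        long millis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("copied in " + millis + " ms, md5="
                + Base64.getEncoder().encodeToString(digest.digest()));
    }
}
{code}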

> Improve the availability of Flink when the RocksDB file is corrupted.
> -
>
> Key: FLINK-27681
> URL: https://issues.apache.org/jira/browse/FLINK-27681
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Ming Li
>Assignee: Yue Ma
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-08-23-15-06-16-717.png
>
>
> We have encountered several cases where the RocksDB checksum does not match or 
> the block verification fails when the job is restored. The reason for this 
> situation is generally that there is some problem with the machine where 
> the task is located, which causes the files uploaded to HDFS to be incorrect, 
> but it had been a long time (a dozen minutes to half an hour) before we found 
> this problem. I'm not sure if anyone else has had a similar problem.
> Since this file is referenced by incremental checkpoints for a long time, 
> when the maximum number of retained checkpoints is exceeded, we can only keep 
> using this file until it is no longer referenced. When the job fails, it 
> cannot be recovered.
> Therefore we consider:
> 1. Can RocksDB periodically check whether all files are correct and find the 
> problem in time?
> 2. Can Flink automatically roll back to the previous checkpoint when there is 
> a problem with the checkpoint data? Even with manual intervention, one can 
> only try to recover from an existing checkpoint or discard the entire 
> state.
> 3. Can we cap the number of references to a file based on the 
> maximum number of retained checkpoints? When the number of references exceeds 
> the maximum number of checkpoints - 1, the Task side is required to upload a 
> new file for this reference. Not sure whether this would ensure that the new 
> file we upload is correct.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33588) Fix Flink Checkpointing Statistics Bug

2023-12-14 Thread Tongtong Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796841#comment-17796841
 ] 

Tongtong Zhu commented on FLINK-33588:
--

[~jingge] The new PR is #23931: [https://github.com/apache/flink/pull/23931]

The CI report shows no problems, and I have already squashed the commits.

> Fix Flink Checkpointing Statistics Bug
> --
>
> Key: FLINK-33588
> URL: https://issues.apache.org/jira/browse/FLINK-33588
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.5, 1.16.0, 1.17.0, 1.15.2, 1.14.6, 1.18.0, 1.17.1
>Reporter: Tongtong Zhu
>Assignee: Tongtong Zhu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1
>
> Attachments: FLINK-33588.patch, image-2023-12-11-17-35-23-391.png, 
> image-2023-12-13-11-35-43-780.png, newCommit-FLINK-33688.patch
>
>
> When the Flink task is first started, the checkpoint data is null due to the 
> lack of data, and Percentile throws a null pointer exception when calculating 
> the percentiles. After multiple tests, I found that it is necessary to set an 
> initial value for the checkpoint statistics when the 
> checkpoint data is null (i.e. at the beginning of the task) to solve this 
> problem.
> The following is an abnormal description of the bug:
> 2023-09-13 15:02:54,608 ERROR 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler
>  [] - Unhandled exception.
> org.apache.commons.math3.exception.NullArgumentException: input array
>     at 
> org.apache.commons.math3.util.MathArrays.verifyValues(MathArrays.java:1650) 
> ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.AbstractUnivariateStatistic.test(AbstractUnivariateStatistic.java:158)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:272)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.commons.math3.stat.descriptive.rank.Percentile.evaluate(Percentile.java:241)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics$CommonMetricsSnapshot.getPercentile(DescriptiveStatisticsHistogramStatistics.java:159)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.metrics.DescriptiveStatisticsHistogramStatistics.getQuantile(DescriptiveStatisticsHistogramStatistics.java:53)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.checkpoint.StatsSummarySnapshot.getQuantile(StatsSummarySnapshot.java:108)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.messages.checkpoints.StatsSummaryDto.valueOf(StatsSummaryDto.java:81)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.createCheckpointingStatistics(CheckpointingStatisticsHandler.java:129)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.handleRequest(CheckpointingStatisticsHandler.java:84)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler.handleRequest(CheckpointingStatisticsHandler.java:58)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.AbstractAccessExecutionGraphHandler.handleRequest(AbstractAccessExecutionGraphHandler.java:68)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> org.apache.flink.runtime.rest.handler.job.AbstractExecutionGraphHandler.lambda$handleRequest$0(AbstractExecutionGraphHandler.java:87)
>  ~[flink-dist_2.12-1.14.5.jar:1.14.5]
>     at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) 
> [?:1.8.0_151]
>     at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_151]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_151]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_151]
>     at 
> 

Re: [PR] [FLINK-30613] Improve resolving schema compatibility -- Milestone one [flink]

2023-12-14 Thread via GitHub


Zakelly commented on PR #21635:
URL: https://github.com/apache/flink/pull/21635#issuecomment-1856242365

   Hi @masteryhx , sorry for the late reply!
   
   After reading your PR, I finally figured out why the default implementations 
of the old and new methods should call each other. I think we could do 
something tricky to check whether one of the old and new methods has been 
overridden in a specific `TypeSerializerSnapshot` before it is called. 
https://stackoverflow.com/a/2315467 may provide some ideas (a rough sketch 
follows below). WDYT?
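   
   A rough reflection-based sketch of that check (assuming 
`TypeSerializerSnapshot` is the interface declaring the default methods; the 
method name passed in is a placeholder):
   
   ```java
   // getMethod() resolves to the most-derived declaration; if that is not the
   // base interface itself, the concrete snapshot class overrides the method.
   static boolean isOverridden(
           Class<?> snapshotClass, String methodName, Class<?>... params) {
       try {
           return snapshotClass.getMethod(methodName, params)
                   .getDeclaringClass() != TypeSerializerSnapshot.class;
       } catch (NoSuchMethodException e) {
           return false;
       }
   }
   ```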


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33588][Flink-Runtime] Fix NullArgumentException of getQuantile method in DescriptiveStatisticsHistogramStatistics [flink]

2023-12-14 Thread via GitHub


zhutong6688 commented on PR #23931:
URL: https://github.com/apache/flink/pull/23931#issuecomment-1856196745

   The old PRs are #23749 and #23915. This PR is new, but the content is the same.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Fix NullArgumentException of getQuantile method in DescriptiveStatisticsHistogramStatistics [flink]

2023-12-14 Thread via GitHub


flinkbot commented on PR #23931:
URL: https://github.com/apache/flink/pull/23931#issuecomment-1856171367

   
   ## CI report:
   
   * ee5a7de3a7921a54c8561cb20ab7c56f2e6c7de7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [hotfix] Set version to 1.1.0 and not 1.0.1 [flink-connector-shared-utils]

2023-12-14 Thread via GitHub


echauchot opened a new pull request, #31:
URL: https://github.com/apache/flink-connector-shared-utils/pull/31

   @zentol the next-version Jira label for the new connector-parent already 
exists but contains version 1.1.0, not 1.0.1 (hence this change). Do you agree 
that the yet-to-be-released changes [1] are worth incrementing the minor 
version instead of the patch version?
   [1]: 
   
https://github.com/apache/flink-connector-shared-utils/commit/813ab45461ef29dd841d63995e97b40125a0d7e4
   
https://github.com/apache/flink-connector-shared-utils/commit/808184ab723d678a4966ae4c45096c4bd6d3e756
   
https://github.com/apache/flink-connector-shared-utils/commit/ac09d18d874691f00629c90726458011034ea106
   
https://github.com/apache/flink-connector-shared-utils/commit/7c2745af777c6681b9eb14f0b05b2136fb141784
   
   CC: @snuyanzin


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] Fix NullArgumentException of getQuantile method in DescriptiveStatisticsHistogramStatistics [flink]

2023-12-14 Thread via GitHub


zhutong6688 opened a new pull request, #23931:
URL: https://github.com/apache/flink/pull/23931

   
   
   ## What is the purpose of the change
   
   Fix NullArgumentException of getQuantile method in 
DescriptiveStatisticsHistogramStatistics
   
   ## Brief change log
   
   Added a null/empty-data guard with a default initial value to the 
DescriptiveStatisticsHistogramStatistics class (see the sketch below):
   1. DescriptiveStatisticsHistogramStatistics.java
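   
   A rough sketch of the guard idea (the method shape and the default value 
are my assumptions, not the exact patch):
   
   ```java
   // Return a defined default instead of handing a null/empty sample array to
   // commons-math3 Percentile, which otherwise throws NullArgumentException.
   double getQuantile(double quantile, double[] values) {
       if (values == null || values.length == 0) {
           return 0.0; // initial value while no checkpoint has completed yet
       }
       return new org.apache.commons.math3.stat.descriptive.rank.Percentile()
               .evaluate(values, quantile * 100.0);
   }
   ```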
   
   ## Verifying this change
   
   (screenshot: https://github.com/apache/flink/assets/28818582/f498c368-0766-4cb1-adff-bb77abf5cda9)
   
   During verification, the task was submitted to YARN and run in 
cluster mode. After multiple stops and restarts, the 
NullArgumentException no longer occurred. This verifies the 
correctness of the modification, improves the stability of the job, 
and improves the accuracy of the statistics shown in the web 
UI of the Flink task (see the screenshot above).
   The highlighted log lines in the screenshot were printed by me to 
assist troubleshooting and can be ignored.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (no)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-33831) Checkout the release branch

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-33831.
-
Resolution: Fixed

> Checkout the release branch
> ---
>
> Key: FLINK-33831
> URL: https://issues.apache.org/jira/browse/FLINK-33831
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.18.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require the these repositories to be touched. Simply 
> checkout the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33830) Select executing Release Manager

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33830:

Affects Version/s: 1.18.0
   (was: 1.17.0)

> Select executing Release Manager
> 
>
> Key: FLINK-33830
> URL: https://issues.apache.org/jira/browse/FLINK-33830
> Project: Flink
>  Issue Type: Sub-task
>  Components: Release System
>Affects Versions: 1.18.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> h4. GPG Key
> You need to have a GPG key to sign the release artifacts. Please be aware of 
> the ASF-wide [release signing 
> guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t 
> have a GPG key associated with your Apache account, please create one 
> according to the guidelines.
> Determine your Apache GPG Key and Key ID, as follows:
> {code:java}
> $ gpg --list-keys
> {code}
> This will list your GPG keys. One of these should reflect your Apache 
> account, for example:
> {code:java}
> --
> pub   2048R/845E6689 2016-02-23
> uid  Nomen Nescio 
> sub   2048R/BA4D50BE 2016-02-23
> {code}
> In the example above, the key ID is the 8-digit hex string in the {{pub}} 
> line: {{{}845E6689{}}}.
> Now, add your Apache GPG key to Flink’s {{KEYS}} file in the [Apache 
> Flink release KEYS 
> file|https://dist.apache.org/repos/dist/release/flink/KEYS] repository at 
> [dist.apache.org|http://dist.apache.org/]. Follow the instructions listed at 
> the top of these files. (Note: Only PMC members have write access to the 
> release repository. If you end up getting 403 errors ask on the mailing list 
> for assistance.)
> Configure {{git}} to use this key when signing code by giving it your key ID, 
> as follows:
> {code:java}
> $ git config --global user.signingkey 845E6689
> {code}
> You may drop the {{--global}} option if you’d prefer to use this key for the 
> current repository only.
> You may wish to start {{gpg-agent}} to unlock your GPG key only once using 
> your passphrase. Otherwise, you may need to enter this passphrase hundreds of 
> times. The setup for {{gpg-agent}} varies based on operating system, but may 
> be something like this:
> {code:bash}
> $ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
> $ export GPG_TTY=$(tty)
> $ export GPG_AGENT_INFO
> {code}
> h4. Access to Apache Nexus repository
> Configure access to the [Apache Nexus 
> repository|https://repository.apache.org/], which enables final deployment of 
> releases to the Maven Central Repository.
>  # You log in with your Apache account.
>  # Confirm you have appropriate access by finding {{org.apache.flink}} under 
> {{{}Staging Profiles{}}}.
>  # Navigate to your {{Profile}} (top right drop-down menu of the page).
>  # Choose {{User Token}} from the dropdown, then click {{{}Access User 
> Token{}}}. Copy a snippet of the Maven XML configuration block.
>  # Insert this snippet twice into your global Maven {{settings.xml}} file, 
> typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
> this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
> {code:xml}
> 
>
>  
>apache.releases.https
>TOKEN_NAME
>TOKEN_PASSWORD
>  
>  
>apache.snapshots.https
>TOKEN_NAME
>TOKEN_PASSWORD
>  
>
>  
> {code}
> h4. Website development setup
> Get ready for updating the Flink website by following the [website 
> development 
> instructions|https://flink.apache.org/contributing/improve-website.html].
> h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)
> The default tar application on Mac does not support GNU archive format and 
> defaults to Pax. This bloats the archive with unnecessary metadata that can 
> result in additional files when decompressing (see [1.15.2-RC2 vote 
> thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
> Install gnu-tar and create a symbolic link to use in preference of the 
> default tar program.
> {code:bash}
> $ brew install gnu-tar
> $ ln -s /usr/local/bin/gtar /usr/local/bin/tar
> $ which tar
> {code}
>  
> 
> h3. Expectations
>  * Release Manager’s GPG key is published to 
> [dist.apache.org|http://dist.apache.org/]
>  * Release Manager’s GPG key is configured in git configuration
>  * Release Manager's GPG key is configured as the default gpg key.
>  * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
> Nexus
>  * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33830) Select executing Release Manager

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-33830.
-
Fix Version/s: (was: 1.17.0)
   Resolution: Fixed

> Select executing Release Manager
> 
>
> Key: FLINK-33830
> URL: https://issues.apache.org/jira/browse/FLINK-33830
> Project: Flink
>  Issue Type: Sub-task
>  Components: Release System
>Affects Versions: 1.17.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> h4. GPG Key
> You need to have a GPG key to sign the release artifacts. Please be aware of 
> the ASF-wide [release signing 
> guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t 
> have a GPG key associated with your Apache account, please create one 
> according to the guidelines.
> Determine your Apache GPG Key and Key ID, as follows:
> {code:java}
> $ gpg --list-keys
> {code}
> This will list your GPG keys. One of these should reflect your Apache 
> account, for example:
> {code:java}
> --
> pub   2048R/845E6689 2016-02-23
> uid  Nomen Nescio 
> sub   2048R/BA4D50BE 2016-02-23
> {code}
> In the example above, the key ID is the 8-digit hex string in the {{pub}} 
> line: {{{}845E6689{}}}.
> Now, add your Apache GPG key to Flink’s {{KEYS}} file in the [Apache 
> Flink release KEYS 
> file|https://dist.apache.org/repos/dist/release/flink/KEYS] repository at 
> [dist.apache.org|http://dist.apache.org/]. Follow the instructions listed at 
> the top of these files. (Note: Only PMC members have write access to the 
> release repository. If you end up getting 403 errors ask on the mailing list 
> for assistance.)
> Configure {{git}} to use this key when signing code by giving it your key ID, 
> as follows:
> {code:java}
> $ git config --global user.signingkey 845E6689
> {code}
> You may drop the {{--global}} option if you’d prefer to use this key for the 
> current repository only.
> You may wish to start {{gpg-agent}} to unlock your GPG key only once using 
> your passphrase. Otherwise, you may need to enter this passphrase hundreds of 
> times. The setup for {{gpg-agent}} varies based on operating system, but may 
> be something like this:
> {code:bash}
> $ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
> $ export GPG_TTY=$(tty)
> $ export GPG_AGENT_INFO
> {code}
> h4. Access to Apache Nexus repository
> Configure access to the [Apache Nexus 
> repository|https://repository.apache.org/], which enables final deployment of 
> releases to the Maven Central Repository.
>  # You log in with your Apache account.
>  # Confirm you have appropriate access by finding {{org.apache.flink}} under 
> {{{}Staging Profiles{}}}.
>  # Navigate to your {{Profile}} (top right drop-down menu of the page).
>  # Choose {{User Token}} from the dropdown, then click {{{}Access User 
> Token{}}}. Copy a snippet of the Maven XML configuration block.
>  # Insert this snippet twice into your global Maven {{settings.xml}} file, 
> typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
> this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
> {code:xml}
> 
>
>  
>apache.releases.https
>TOKEN_NAME
>TOKEN_PASSWORD
>  
>  
>apache.snapshots.https
>TOKEN_NAME
>TOKEN_PASSWORD
>  
>
>  
> {code}
> h4. Website development setup
> Get ready for updating the Flink website by following the [website 
> development 
> instructions|https://flink.apache.org/contributing/improve-website.html].
> h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)
> The default tar application on Mac does not support GNU archive format and 
> defaults to Pax. This bloats the archive with unnecessary metadata that can 
> result in additional files when decompressing (see [1.15.2-RC2 vote 
> thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
> Install gnu-tar and create a symbolic link to use in preference of the 
> default tar program.
> {code:bash}
> $ brew install gnu-tar
> $ ln -s /usr/local/bin/gtar /usr/local/bin/tar
> $ which tar
> {code}
>  
> 
> h3. Expectations
>  * Release Manager’s GPG key is published to 
> [dist.apache.org|http://dist.apache.org/]
>  * Release Manager’s GPG key is configured in git configuration
>  * Release Manager's GPG key is configured as the default gpg key.
>  * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
> Nexus
>  * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33829) Review Release Notes in JIRA

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33829:

Description: 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353640
  (was: Example for patch release: 

https://flink.apache.org/2023/11/29/apache-flink-1.17.2-release-announcement/)

> Review Release Notes in JIRA
> 
>
> Key: FLINK-33829
> URL: https://issues.apache.org/jira/browse/FLINK-33829
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353640



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33827) Review and update documentation

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33827:

Description: (was: There are a few pages in the documentation that need 
to be reviewed and updated for each patch release.
h3. Expectations
 * (minor only) The documentation for the new major release is visible under 
[https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION] 
(after at least one [doc 
build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
 * (minor only) The documentation for the new major release does not contain 
"-SNAPSHOT" in its version title, and all links refer to the corresponding 
version docs instead of {{{}master{}}}.)

> Review and update documentation
> ---
>
> Key: FLINK-33827
> URL: https://issues.apache.org/jira/browse/FLINK-33827
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.17.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33827) No need for patch release - Review and update documentation

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33827:

Summary: No need for patch release - Review and update documentation  (was: 
Review and update documentation)

> No need for patch release - Review and update documentation
> ---
>
> Key: FLINK-33827
> URL: https://issues.apache.org/jira/browse/FLINK-33827
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.17.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33827) No need for patch release - Review and update documentation

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge closed FLINK-33827.
---
Resolution: Won't Fix

> No need for patch release - Review and update documentation
> ---
>
> Key: FLINK-33827
> URL: https://issues.apache.org/jira/browse/FLINK-33827
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.17.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33827) Review and update documentation

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33827:

Fix Version/s: (was: 1.17.0)

> Review and update documentation
> ---
>
> Key: FLINK-33827
> URL: https://issues.apache.org/jira/browse/FLINK-33827
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.17.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>
> There are a few pages in the documentation that need to be reviewed and 
> updated for each patch release.
> h3. Expectations
>  * (minor only) The documentation for the new major release is visible under 
> [https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION]
>  (after at least one [doc 
> build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
>  * (minor only) The documentation for the new major release does not contain 
> "-SNAPSHOT" in its version title, and all links refer to the corresponding 
> version docs instead of {{{}master{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix] Set version to 1.1.0 and not 1.0.1 [flink-connector-shared-utils]

2023-12-14 Thread via GitHub


echauchot commented on PR #28:
URL: 
https://github.com/apache/flink-connector-shared-utils/pull/28#issuecomment-1856141477

   I just realized that I submitted the PR against the main branch instead of 
the parent-pom branch, closing and reopening ...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Set version to 1.1.0 and not 1.0.1 [flink-connector-shared-utils]

2023-12-14 Thread via GitHub


echauchot closed pull request #28: [hotfix] Set version to 1.1.0 and not 1.0.1
URL: https://github.com/apache/flink-connector-shared-utils/pull/28


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33851) Start End of Life discussion thread for now outdated Flink minor version

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33851:

Summary: Start End of Life discussion thread for now outdated Flink minor 
version  (was: CLONE - Start End of Life discussion thread for now outdated 
Flink minor version)

> Start End of Life discussion thread for now outdated Flink minor version
> 
>
> Key: FLINK-33851
> URL: https://issues.apache.org/jira/browse/FLINK-33851
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Priority: Major
>
> The idea is to discuss in the community whether we should do a final release 
> for the now-unsupported minor version. Such a release shouldn't be 
> covered by the current minor version's release managers. Their only 
> responsibility is to trigger the discussion.
> The intention of a final patch release for the now unsupported Flink minor 
> version is to flush out all the fixes that didn't end up in the previous 
> release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33850) Updates the docs stable version

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33850:

Summary: Updates the docs stable version  (was: CLONE - Updates the docs 
stable version)

> Updates the docs stable version
> ---
>
> Key: FLINK-33850
> URL: https://issues.apache.org/jira/browse/FLINK-33850
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Priority: Major
>
> Update docs to "stable" in {{docs/config.toml}} in the branch of the 
> _just-released_ version:
>  * Change {{Version}} from {{x.y-SNAPSHOT}} to {{x.y.z}}, i.e. 
> {{1.6-SNAPSHOT}} to {{1.6.0}}
>  * Change {{VersionTitle}} from {{x.y-SNAPSHOT}} to {{x.y}}, i.e. 
> {{1.6-SNAPSHOT}} to {{1.6}}
>  * Change {{Branch}} from {{master}} to {{release-x.y}}, i.e. {{master}} to 
> {{release-1.6}}
>  * Change {{baseURL}} from 
> {{//ci.apache.org/projects/flink/flink-docs-master}} to 
> {{//ci.apache.org/projects/flink/flink-docs-release-x.y}}
>  * Change {{javadocs_baseurl}} from 
> {{//ci.apache.org/projects/flink/flink-docs-master}} to 
> {{//ci.apache.org/projects/flink/flink-docs-release-x.y}}
>  * Change {{IsStable}} to {{true}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] 4 lab exercises squashed [flink-training]

2023-12-14 Thread via GitHub


ness-RemusVitan closed pull request #67: 4 lab exercises squashed
URL: https://github.com/apache/flink-training/pull/67


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33848) Other announcements

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33848:

Summary: Other announcements  (was: CLONE - Other announcements)

> Other announcements
> ---
>
> Key: FLINK-33848
> URL: https://issues.apache.org/jira/browse/FLINK-33848
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Priority: Major
>
> h3. Recordkeeping
> Use [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink] 
> to seed the information about the release into future project reports.
> (Note: Only PMC members have access to report releases. If you do not have 
> access, ask on the mailing list for assistance.)
> h3. Flink blog
> Example:
> [https://flink.apache.org/2023/11/29/apache-flink-1.17.2-release-announcement/]
>  
> h3. Social media
> Tweet, post on Facebook, LinkedIn, and other platforms. Ask other 
> contributors to do the same.
> h3. Flink Release Wiki page
> Add a summary of things that went well or that went not so well during the 
> release process. This can include feedback from contributors but also more 
> generic things like the release having taken longer than initially anticipated 
> (and why) to give a bit of context to the release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33848) CLONE - Other announcements

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33848:

Description: 
h3. Recordkeeping

Use [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink] to 
seed the information about the release into future project reports.

(Note: Only PMC members have access to report releases. If you do not have access, 
ask on the mailing list for assistance.)
h3. Flink blog

Example:

[https://flink.apache.org/2023/11/29/apache-flink-1.17.2-release-announcement/]

 
h3. Social media

Tweet, post on Facebook, LinkedIn, and other platforms. Ask other contributors 
to do the same.
h3. Flink Release Wiki page

Add a summary of things that went well or that went not so well during the 
release process. This can include feedback from contributors but also more 
generic things like the release having taken longer than initially anticipated 
(and why) to give a bit of context to the release process.

  was:
h3. Recordkeeping

Use [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink] to 
seed the information about the release into future project reports.

(Note: Only PMC members have access to report releases. If you do not have access, 
ask on the mailing list for assistance.)
h3. Flink blog

Example:

[https://flink.apache.org/2023/11/29/apache-flink-1.17.2-release-announcement/]

 

Social media

Tweet, post on Facebook, LinkedIn, and other platforms. Ask other contributors 
to do the same.
h3. Flink Release Wiki page

Add a summary of things that went well or that went not so well during the 
release process. This can include feedback from contributors but also more 
generic things like the release having taken longer than initially anticipated 
(and why) to give a bit of context to the release process.


> CLONE - Other announcements
> ---
>
> Key: FLINK-33848
> URL: https://issues.apache.org/jira/browse/FLINK-33848
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Priority: Major
>
> h3. Recordkeeping
> Use [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink] 
> to seed the information about the release into future project reports.
> (Note: Only PMC members have access to report releases. If you do not have 
> access, ask on the mailing list for assistance.)
> h3. Flink blog
> Example:
> [https://flink.apache.org/2023/11/29/apache-flink-1.17.2-release-announcement/]
>  
> h3. Social media
> Tweet, post on Facebook, LinkedIn, and other platforms. Ask other 
> contributors to do the same.
> h3. Flink Release Wiki page
> Add a summary of things that went well or that went not so well during the 
> release process. This can include feedback from contributors but also more 
> generic things like the release having taken longer than initially anticipated 
> (and why) to give a bit of context to the release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33849) CLONE - Update reference data for Migration Tests

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33849:

Fix Version/s: (was: 1.19.0)
   (was: 1.18.1)

> CLONE - Update reference data for Migration Tests
> -
>
> Key: FLINK-33849
> URL: https://issues.apache.org/jira/browse/FLINK-33849
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> Update migration tests in master to cover migration from the new version. Since 
> 1.18, this step could be done automatically with the following steps. For 
> more information please refer to [this 
> page.|https://github.com/apache/flink/blob/master/flink-test-utils-parent/flink-migration-test-utils/README.md]
>  # {*}On the published release tag (e.g., release-1.16.0){*}, run 
> {code:bash}
> $ mvn clean package -Pgenerate-migration-test-data -Dgenerate.version=1.16 -nsu -Dfast -DskipTests
> {code}
> The version (1.16 in the command above) should be replaced with the target 
> one.
>  # Modify the content of the file 
> [apache/flink:flink-test-utils-parent/flink-migration-test-utils/src/main/resources/most_recently_published_version|https://github.com/apache/flink/blob/master/flink-test-utils-parent/flink-migration-test-utils/src/main/resources/most_recently_published_version]
>  to the latest version (it would be "v1_16" if sticking to the example where 
> 1.16.0 was released). 
> # Commit the modifications from steps 1 and 2 with "{_}[release] Generate 
> reference data for state migration tests based on release-1.xx.0{_}" to the 
> corresponding release branch (e.g. {{release-1.16}} in our example), replacing 
> "xx" with the actual version (in this example "16"). Use the Jira issue ID 
> instead of [release] as the commit message's prefix if you have a dedicated 
> Jira issue for this task.
>  # Cherry-pick the commit to the master branch. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33849) No need for patch release - Update reference data for Migration Tests

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33849:

Summary: No need for patch release - Update reference data for Migration 
Tests  (was: CLONE - Update reference data for Migration Tests)

> No need for patch release - Update reference data for Migration Tests
> -
>
> Key: FLINK-33849
> URL: https://issues.apache.org/jira/browse/FLINK-33849
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> Update migration tests in master to cover migration from the new version. Since 
> 1.18, this step could be done automatically with the following steps. For 
> more information please refer to [this 
> page.|https://github.com/apache/flink/blob/master/flink-test-utils-parent/flink-migration-test-utils/README.md]
>  # {*}On the published release tag (e.g., release-1.16.0){*}, run 
> {code:bash}
> $ mvn clean package -Pgenerate-migration-test-data -Dgenerate.version=1.16 -nsu -Dfast -DskipTests
> {code}
> The version (1.16 in the command above) should be replaced with the target 
> one.
>  # Modify the content of the file 
> [apache/flink:flink-test-utils-parent/flink-migration-test-utils/src/main/resources/most_recently_published_version|https://github.com/apache/flink/blob/master/flink-test-utils-parent/flink-migration-test-utils/src/main/resources/most_recently_published_version]
>  to the latest version (it would be "v1_16" if sticking to the example where 
> 1.16.0 was released). 
> # Commit the modifications from steps 1 and 2 with "{_}[release] Generate 
> reference data for state migration tests based on release-1.xx.0{_}" to the 
> corresponding release branch (e.g. {{release-1.16}} in our example), replacing 
> "xx" with the actual version (in this example "16"). Use the Jira issue ID 
> instead of [release] as the commit message's prefix if you have a dedicated 
> Jira issue for this task.
>  # Cherry-pick the commit to the master branch. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33848) CLONE - Other announcements

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33848:

Description: 
h3. Recordkeeping

Use [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink] to 
seed the information about the release into future project reports.

(Note: Only PMC members have access to report releases. If you do not have access, 
ask on the mailing list for assistance.)
h3. Flink blog

Example:

[https://flink.apache.org/2023/11/29/apache-flink-1.17.2-release-announcement/]

 

Social media

Tweet, post on Facebook, LinkedIn, and other platforms. Ask other contributors 
to do the same.
h3. Flink Release Wiki page

Add a summary of things that went well or that went not so well during the 
release process. This can include feedback from contributors but also more 
generic things like the release having taken longer than initially anticipated 
(and why) to give a bit of context to the release process.

  was:
h3. Recordkeeping

Use [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink] to 
seed the information about the release into future project reports.

(Note: Only PMC members have access to report releases. If you do not have access, 
ask on the mailing list for assistance.)
h3. Flink blog

Major or otherwise important releases should have a blog post. Write one if 
needed for this particular release. Minor releases that don’t introduce new 
major functionality don’t necessarily need to be blogged (see [flink-web PR 
#581 for Flink 1.15.3|https://github.com/apache/flink-web/pull/581] as an 
example for a minor release blog post).

Please make sure that the release notes of the documentation (see section 
"Review and update documentation") are linked from the blog post of a major 
release.
We usually include the names of all contributors in the announcement blog post. 
Use the following command to get the list of contributors:
{code}
# first line is required to make sort first with uppercase and then lower
export LC_ALL=C
export FLINK_PREVIOUS_RELEASE_BRANCH=
export FLINK_CURRENT_RELEASE_BRANCH=
# e.g.
# export FLINK_PREVIOUS_RELEASE_BRANCH=release-1.17
# export FLINK_CURRENT_RELEASE_BRANCH=release-1.18
git log $(git merge-base master $FLINK_PREVIOUS_RELEASE_BRANCH)..$(git show-ref 
--hash ${FLINK_CURRENT_RELEASE_BRANCH}) --pretty=format:"%an%n%cn" | sort  -u | 
paste -sd, | sed "s/\,/\, /g"
{code}
h3. Social media

Tweet, post on Facebook, LinkedIn, and other platforms. Ask other contributors 
to do the same.
h3. Flink Release Wiki page

Add a summary of things that went well or that went not so well during the 
release process. This can include feedback from contributors but also more 
generic things like the release having taken longer than initially anticipated 
(and why) to give a bit of context to the release process.


> CLONE - Other announcements
> ---
>
> Key: FLINK-33848
> URL: https://issues.apache.org/jira/browse/FLINK-33848
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Priority: Major
>
> h3. Recordkeeping
> Use [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink] 
> to seed the information about the release into future project reports.
> (Note: Only PMC members have access to report releases. If you do not have 
> access, ask on the mailing list for assistance.)
> h3. Flink blog
> Example:
> [https://flink.apache.org/2023/11/29/apache-flink-1.17.2-release-announcement/]
>  
> Social media
> Tweet, post on Facebook, LinkedIn, and other platforms. Ask other 
> contributors to do the same.
> h3. Flink Release Wiki page
> Add a summary of things that went well or that went not so well during the 
> release process. This can include feedback from contributors but also more 
> generic things like the release having taken longer than initially anticipated 
> (and why) to give a bit of context to the release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33844) Update japicmp configuration

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33844:

Fix Version/s: (was: 1.18.1)

> Update japicmp configuration
> 
>
> Key: FLINK-33844
> URL: https://issues.apache.org/jira/browse/FLINK-33844
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Update the japicmp reference version and wipe exclusions / enable API 
> compatibility checks for {{@PublicEvolving}} APIs on the corresponding 
> SNAPSHOT branch with the {{update_japicmp_configuration.sh}} script (see 
> below).
> For a new major release (x.y.0), run the same command also on the master 
> branch for updating the japicmp reference version and removing out-dated 
> exclusions in the japicmp configuration.
> Make sure that all Maven artifacts are already pushed to Maven Central. 
> Otherwise, there's a risk that CI fails due to missing reference artifacts.
> {code:bash}
> tools $ NEW_VERSION=$RELEASE_VERSION releasing/update_japicmp_configuration.sh
> tools $ cd ..
> $ git add *
> $ git commit -m "Update japicmp configuration for $RELEASE_VERSION"
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33846) Remove outdated versions

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33846:

Summary: Remove outdated versions  (was: CLONE - Remove outdated versions)

> Remove outdated versions
> 
>
> Key: FLINK-33846
> URL: https://issues.apache.org/jira/browse/FLINK-33846
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Priority: Major
>
> h4. dist.apache.org
> For a new major release remove all release files older than 2 versions, e.g., 
> when releasing 1.7, remove all releases <= 1.5.
> For a new bugfix version remove all release files for previous bugfix 
> releases in the same series, e.g., when releasing 1.7.1, remove the 1.7.0 
> release.
> # If you have not already, check out the Flink section of the {{release}} 
> repository on {{[dist.apache.org|http://dist.apache.org/]}} via Subversion. 
> In a fresh directory:
> {code}
> svn checkout https://dist.apache.org/repos/dist/release/flink 
> --depth=immediates
> cd flink
> {code}
> # Remove files for outdated releases and commit the changes.
> {code}
> svn remove flink-
> svn commit
> {code}
> # Verify that files  are 
> [removed|https://dist.apache.org/repos/dist/release/flink]
> (!) Remember to remove the corresponding download links from the website.
> h4. CI
> Disable the cron job for the now-unsupported version from 
> (tools/azure-pipelines/[build-apache-repo.yml|https://github.com/apache/flink/blob/master/tools/azure-pipelines/build-apache-repo.yml])
>  in the respective branch.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33844) Update japicmp configuration

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33844:

Fix Version/s: (was: 1.19.0)

> Update japicmp configuration
> 
>
> Key: FLINK-33844
> URL: https://issues.apache.org/jira/browse/FLINK-33844
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>
> Update the japicmp reference version and wipe exclusions / enable API 
> compatibility checks for {{@PublicEvolving}} APIs on the corresponding 
> SNAPSHOT branch with the {{update_japicmp_configuration.sh}} script (see 
> below).
> For a new major release (x.y.0), run the same command also on the master 
> branch for updating the japicmp reference version and removing out-dated 
> exclusions in the japicmp configuration.
> Make sure that all Maven artifacts are already pushed to Maven Central. 
> Otherwise, there's a risk that CI fails due to missing reference artifacts.
> {code:bash}
> tools $ NEW_VERSION=$RELEASE_VERSION releasing/update_japicmp_configuration.sh
> tools $ cd ..
> $ git add *
> $ git commit -m "Update japicmp configuration for $RELEASE_VERSION"
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33847) Apache mailing lists announcements

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33847:

Summary: Apache mailing lists announcements  (was: CLONE - Apache mailing 
lists announcements)

> Apache mailing lists announcements
> --
>
> Key: FLINK-33847
> URL: https://issues.apache.org/jira/browse/FLINK-33847
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Priority: Major
>
> Announce on the {{dev@}} mailing list that the release has been finished.
> Announce the release on the {{user@}} mailing list, listing major 
> improvements and contributions.
> Announce the release on the [annou...@apache.org|mailto:annou...@apache.org] 
> mailing list.
> {panel}
> {panel}
> |{{From: Release Manager}}
> {{To: d...@flink.apache.org, u...@flink.apache.org, user...@flink.apache.org, 
> annou...@apache.org}}
> {{Subject: [ANNOUNCE] Apache Flink 1.2.3 released}}
>  
> {{The Apache Flink community is very happy to announce the release of Apache 
> Flink 1.2.3, which is the third bugfix release for the Apache Flink 1.2 
> series.}}
>  
> {{Apache Flink® is an open-source stream processing framework for 
> distributed, high-performing, always-available, and accurate data streaming 
> applications.}}
>  
> {{The release is available for download at:}}
> {{[https://flink.apache.org/downloads.html]}}
>  
> {{Please check out the release blog post for an overview of the improvements 
> for this bugfix release:}}
> {{}}
>  
> {{The full release notes are available in Jira:}}
> {{}}
>  
> {{We would like to thank all contributors of the Apache Flink community who 
> made this release possible!}}
>  
> {{Feel free to reach out to the release managers (or respond to this thread) 
> with feedback on the release process. Our goal is to constantly improve the 
> release process. Feedback on what could be improved or things that didn't go 
> so well are appreciated.}}
>  
> {{Regards,}}
> {{Release Manager}}|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33845) Merge website pull request

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33845:

Summary: Merge website pull request  (was: CLONE - Merge website pull 
request)

> Merge website pull request
> --
>
> Key: FLINK-33845
> URL: https://issues.apache.org/jira/browse/FLINK-33845
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Priority: Major
>
> Merge the website pull request to [list the 
> release|http://flink.apache.org/downloads.html]. Make sure to regenerate the 
> website as well, as it isn't built automatically.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33844) Update japicmp configuration

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33844:

Summary: Update japicmp configuration  (was: CLONE - Update japicmp 
configuration)

> Update japicmp configuration
> 
>
> Key: FLINK-33844
> URL: https://issues.apache.org/jira/browse/FLINK-33844
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1
>
>
> Update the japicmp reference version and wipe exclusions / enable API 
> compatibility checks for {{@PublicEvolving}} APIs on the corresponding 
> SNAPSHOT branch with the {{update_japicmp_configuration.sh}} script (see 
> below).
> For a new major release (x.y.0), run the same command also on the master 
> branch for updating the japicmp reference version and removing out-dated 
> exclusions in the japicmp configuration.
> Make sure that all Maven artifacts are already pushed to Maven Central. 
> Otherwise, there's a risk that CI fails due to missing reference artifacts.
> {code:bash}
> tools $ NEW_VERSION=$RELEASE_VERSION releasing/update_japicmp_configuration.sh
> tools $ cd ..
> $ git add *
> $ git commit -m "Update japicmp configuration for $RELEASE_VERSION"
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33844) Update japicmp configuration

2023-12-14 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-33844:
---

Assignee: Jing Ge  (was: Sergey Nuyanzin)

> Update japicmp configuration
> 
>
> Key: FLINK-33844
> URL: https://issues.apache.org/jira/browse/FLINK-33844
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1
>
>
> Update the japicmp reference version and wipe exclusions / enable API 
> compatibility checks for {{@PublicEvolving}} APIs on the corresponding 
> SNAPSHOT branch with the {{update_japicmp_configuration.sh}} script (see 
> below).
> For a new major release (x.y.0), run the same command also on the master 
> branch for updating the japicmp reference version and removing out-dated 
> exclusions in the japicmp configuration.
> Make sure that all Maven artifacts are already pushed to Maven Central. 
> Otherwise, there's a risk that CI fails due to missing reference artifacts.
> {code:bash}
> tools $ NEW_VERSION=$RELEASE_VERSION releasing/update_japicmp_configuration.sh
> tools $ cd ..
> $ git add *
> $ git commit -m "Update japicmp configuration for $RELEASE_VERSION"
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

