Re: [PR] [FLINK-33171][table planner] Table SQL support Not Equal for TimePoint type and TimeString [flink]

2023-10-13 Thread via GitHub


fengjiajie commented on code in PR #23478:
URL: https://github.com/apache/flink/pull/23478#discussion_r1359102193


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala:
##
@@ -488,6 +488,18 @@ object ScalarOperatorGens {
 else if (isNumeric(left.resultType) && isNumeric(right.resultType)) {
   generateComparison(ctx, "!=", left, right, resultType)
 }
+// support date/time/timestamp not equalTo string.
+else if (
+  (isTimePoint(left.resultType) && isCharacterString(right.resultType)) ||

Review Comment:
   @lincoln-lil  This is a better solution! Should I include it in the code 
commit or assign the issue to you? I can also help add some corresponding unit 
tests if needed.
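   For context, a minimal sketch (not part of this PR) of the kind of query the
change is meant to support, assuming a table `t` with a TIMESTAMP(3) column
`ts`; the table name and connector options are illustrative only:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TimePointNotEqualSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Illustrative bounded table with a time-point column.
        tEnv.executeSql(
                "CREATE TEMPORARY TABLE t (ts TIMESTAMP(3)) "
                        + "WITH ('connector' = 'datagen', 'number-of-rows' = '10')");
        // Equality between a time point and a string literal is already handled;
        // per the issue title, FLINK-33171 extends the codegen so that <> (not equal)
        // between a TimePoint and a time string is handled the same way.
        tEnv.sqlQuery("SELECT * FROM t WHERE ts <> '2023-10-13 00:00:00'")
                .execute()
                .print();
    }
}
{code}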



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31981) Enabling waiting on the Flink Application to complete before returning to Flink client for batch jobs

2023-10-13 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-31981:

Fix Version/s: (was: 1.16.3)

> Enabling waiting on the Flink Application to complete before returning to 
> Flink client for batch jobs
> -
>
> Key: FLINK-31981
> URL: https://issues.apache.org/jira/browse/FLINK-31981
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / YARN
>Affects Versions: 1.16.1
>Reporter: Allison Chang
>Priority: Major
>
> Currently the Flink client by default completes immediately when the job hits 
> the RUNNING state. We want to make this configurable for batch jobs so that the 
> client only completes when the Flink application has fully finished running, 
> rather than returning upon submission of the job. 
> This gives us richer information about whether the underlying application 
> completed successfully or failed. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-13 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1358960432


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/inference/TypeInferenceOperandInference.java:
##
@@ -62,11 +62,31 @@ public TypeInferenceOperandInference(
 this.typeInference = typeInference;
 }
 
+    private void validateArgumentCount(CallContext callContext) {
+        try {
+            TypeInferenceUtil.validateArgumentCount(
+                    typeInference.getInputTypeStrategy().getArgumentCount(),
+                    callContext.getArgumentDataTypes().size(),
+                    true);
+        } catch (ValidationException e) {
+            final String msg =
+                    String.format(
+                            "%s\nExpected signatures are:\n%s",
+                            e.getMessage(),
+                            TypeInferenceUtil.generateSignature(
+                                    typeInference,
+                                    callContext.getName(),
+                                    callContext.getFunctionDefinition()));
+            throw new ValidationException(msg);
+        }
+    }
 
     @Override
     public void inferOperandTypes(
             SqlCallBinding callBinding, RelDataType returnType, RelDataType[] operandTypes) {
         final CallContext callContext =
                 new CallBindingCallContext(dataTypeFactory, definition, callBinding, returnType);
+        validateArgumentCount(callContext);
         try {
             inferOperandTypesOrError(unwrapTypeFactory(callBinding), callContext, operandTypes);
         } catch (ValidationException | CalciteContextException e) {

Review Comment:
   Applied the suggestion



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33255] [table] Validate argument count during type inference [flink]

2023-10-13 Thread via GitHub


bvarghese1 commented on code in PR #23520:
URL: https://github.com/apache/flink/pull/23520#discussion_r1358960352


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/TypeInferenceUtil.java:
##
@@ -385,7 +385,7 @@ private static String formatArgument(Signature.Argument 
arg) {
 return stringBuilder.toString();
 }
 
-private static boolean validateArgumentCount(
+public static boolean validateArgumentCount(

Review Comment:
   Done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Exclude http-only available Maven dependencies and include 1.18-SNAPSHOT in PR tests [flink-connector-hbase]

2023-10-13 Thread via GitHub


MartijnVisser merged PR #28:
URL: https://github.com/apache/flink-connector-hbase/pull/28


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33276) Reorganize CI stages

2023-10-13 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-33276:
-

 Summary: Reorganize CI stages
 Key: FLINK-33276
 URL: https://issues.apache.org/jira/browse/FLINK-33276
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System / CI
Affects Versions: 1.17.1, 1.16.2, 1.18.0, 1.19.0
Reporter: Matthias Pohl


The {{connect_2}} stage became obsolete due to the externalization of the 
connectors. We can merge {{connect_1}} and {{connect_2}} back into a single 
{{connect}} stage (and maybe rename it to something more meaningful?).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-13 Thread via GitHub


echauchot commented on PR #22985:
URL: https://github.com/apache/flink/pull/22985#issuecomment-1761625553

   @1996fanrui thanks for your review, I'm done addressing all your comments. I 
hope it's the last round of review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-13 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1358346133


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/ExecutingTest.java:
##
@@ -252,28 +257,96 @@ public void testTransitionToFinishedOnSuspend() throws 
Exception {
 }
 
 @Test
-public void 
testNotifyNewResourcesAvailableWithCanScaleUpTransitionsToRestarting()
+public void 
testNotifyNewResourcesAvailableBeforeCooldownIsOverScheduledStateChange()
+throws Exception {
+try (MockExecutingContext ctx = new MockExecutingContext()) {
+final Duration scalingIntervalMin =
+Duration.ofSeconds(1L); // do not wait too long in the test
+final ExecutingStateBuilder executingStateBuilder =
+new 
ExecutingStateBuilder().setScalingIntervalMin(scalingIntervalMin);
+Executing exec = executingStateBuilder.build(ctx);
+exec.setLastRescale(Instant.now());
+ctx.setCanScaleUp(true, null); // min met => rescale
+ctx.setExpectRestarting( // scheduled rescale should restart the 
job after cooldown
+restartingArguments -> {
+assertThat(restartingArguments.getBackoffTime(), 
is(Duration.ZERO));
+assertThat(ctx.actionWasScheduled, is(true));
+});
+exec.onNewResourcesAvailable();
+}
+}
+
+@Test
+public void 
testNotifyNewResourcesAvailableAfterCooldownIsOverStateChange() throws 
Exception {
+try (MockExecutingContext ctx = new MockExecutingContext()) {
+final ExecutingStateBuilder executingStateBuilder =
+new 
ExecutingStateBuilder().setScalingIntervalMin(Duration.ofSeconds(20L));
+Executing exec = executingStateBuilder.build(ctx);
+exec.setLastRescale(Instant.now().minus(Duration.ofSeconds(30L)));
+ctx.setCanScaleUp(true, null); // min met => rescale
+ctx.setExpectRestarting(
+restartingArguments -> { // immediate rescale
+assertThat(restartingArguments.getBackoffTime(), 
is(Duration.ZERO));
+assertThat(ctx.actionWasScheduled, is(false));
+});
+exec.onNewResourcesAvailable();
+}
+}
+
+@Test
+public void 
testNotifyNewResourcesAvailableWithMinMetTransitionsToRestarting()

Review Comment:
   done. I agree that keeping high-level semantics rather than implementation 
semantics is good, but I feel like with this renaming we lose the real use-case 
semantics that these tests are assessing: resources too small, resources lost, 
etc.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Rely on org.apache.flink's surefire configuration [flink-connector-hbase]

2023-10-13 Thread via GitHub


MartijnVisser merged PR #29:
URL: https://github.com/apache/flink-connector-hbase/pull/29


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-13 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1358336728


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/adaptive/ExecutingTest.java:
##
@@ -570,8 +659,9 @@ public void setHowToHandleFailure(Function function) {
 this.howToHandleFailure = function;
 }
 
-public void setCanScaleUp(Supplier supplier) {
-this.canScaleUp = supplier;
+public void setCanScaleUp(Boolean minIncreaseMet, Boolean 
parallelismChangeAfterTimeout) {
+this.minIncreaseMet = minIncreaseMet;
+this.parallelismChangeAfterTimeout = parallelismChangeAfterTimeout;

Review Comment:
   Agreed on the renaming. But regarding the setters I disagree, because the two 
parameters are always set together: it is their combination that determines the 
outcome of the test. I find it much clearer to set them together. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28074) show statistics details for DESCRIBE EXTENDED

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-28074:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> show statistics details for DESCRIBE EXTENDED
> -
>
> Key: FLINK-28074
> URL: https://issues.apache.org/jira/browse/FLINK-28074
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> Currently, the DESCRIBE command only shows the schema of a given table; EXTENDED 
> does not work. So for EXTENDED mode, the statistics details should also be shown.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23855) Table API & SQL Configuration Not displayed on flink dashboard

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-23855:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Table API & SQL Configuration Not displayed on flink dashboard
> --
>
> Key: FLINK-23855
> URL: https://issues.apache.org/jira/browse/FLINK-23855
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.2
>Reporter: simenliuxing
>Priority: Minor
> Fix For: 1.19.0
>
>
> branch: 1.13.2
> planner: blink
> Hi,
> When I run a Flink SQL task in standalone mode, I set some parameters starting 
> with table., but I can't find them on the dashboard, although I know these 
> parameters are effective. Can these parameters be displayed somewhere?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29863) Properly handle NaN/Infinity in OpenAPI spec

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-29863:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Properly handle NaN/Infinity in OpenAPI spec
> 
>
> Key: FLINK-29863
> URL: https://issues.apache.org/jira/browse/FLINK-29863
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Runtime / REST
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.19.0
>
>
> Our OpenAPI spec maps all float/double fields to float64, but we at times 
> also return NaN/infinity which can't be represented as such since the JSON 
> spec doesn't support it.
> One alternative could be to document it as an either type, returning either a 
> float64 or a string.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33275) CLONE - Vote on the release candidate

2023-10-13 Thread Jing Ge (Jira)
Jing Ge created FLINK-33275:
---

 Summary: CLONE - Vote on the release candidate
 Key: FLINK-33275
 URL: https://issues.apache.org/jira/browse/FLINK-33275
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.18.0
Reporter: Jing Ge


Once you have built and individually reviewed the release candidate, please 
share it for the community-wide review. Please review foundation-wide [voting 
guidelines|http://www.apache.org/foundation/voting.html] for more information.

Start the review-and-vote thread on the dev@ mailing list. Here’s an email 
template; please adjust as you see fit.
{quote}From: Release Manager
To: d...@flink.apache.org
Subject: [VOTE] Release 1.2.3, release candidate #3

Hi everyone,
Please review and vote on the release candidate #3 for the version 1.2.3, as 
follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
 * JIRA release notes [1],
 * the official Apache source release and binary convenience releases to be 
deployed to dist.apache.org [2], which are signed with the key with fingerprint 
 [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag "release-1.2.3-rc3" [5],
 * website pull request listing the new release and adding announcement blog 
post [6].

The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.

Thanks,
Release Manager

[1] link
[2] link
[3] [https://dist.apache.org/repos/dist/release/flink/KEYS]
[4] link
[5] link
[6] link
{quote}
*If there are any issues found in the release candidate, reply on the vote 
thread to cancel the vote.* There’s no need to wait 72 hours. Proceed to the 
Fix Issues step below and address the problem. However, some issues don’t 
require cancellation. For example, if an issue is found in the website pull 
request, just correct it on the spot and the vote can continue as-is.

For cancelling a release, the release manager needs to send an email to the 
release candidate thread, stating that the release candidate is officially 
cancelled. Next, all artifacts created specifically for the RC in the previous 
steps need to be removed:
 * Delete the staging repository in Nexus
 * Remove the source / binary RC files from dist.apache.org
 * Delete the source code tag in git

*If there are no issues, reply on the vote thread to close the voting.* Then, 
tally the votes in a separate email. Here’s an email template; please adjust as 
you see fit.
{quote}From: Release Manager
To: d...@flink.apache.org
Subject: [RESULT] [VOTE] Release 1.2.3, release candidate #3

I'm happy to announce that we have unanimously approved this release.

There are XXX approving votes, XXX of which are binding:
 * approver 1
 * approver 2
 * approver 3
 * approver 4

There are no disapproving votes.

Thanks everyone!
{quote}
 

h3. Expectations
 * Community votes to release the proposed candidate, with at least three 
approving PMC votes

Any issues that are raised till the vote is over should be either resolved or 
moved into the next release (if applicable).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19659) Array type supports equals and not_equals operator when element types are different but castable

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-19659:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Array type supports equals and not_equals operator when element types are 
> different but castable
> 
>
> Key: FLINK-19659
> URL: https://issues.apache.org/jira/browse/FLINK-19659
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.19.0
>
>
> Currently, the Array type supports `equals` and `not_equals` when element 
> types are the same or cannot be cast. For example,
> {code:java}
> Array[1] <> Array[1] -> false{code}
> {code:java}
> Array[1] <> Array[cast(1 as bigint)] -> false
> {code}
> But for the element types which are castable, it will throw error,
> {code:java}
> org.apache.flink.table.planner.codegen.CodeGenException: Unsupported cast 
> from 'ARRAY NOT NULL' to 'ARRAY NOT 
> NULL'.org.apache.flink.table.planner.codegen.CodeGenException: Unsupported 
> cast from 'ARRAY NOT NULL' to 'ARRAY NOT 
> NULL'. at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.generateCast(ScalarOperatorGens.scala:1295)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:703)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:498)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:55)
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:288){code}
> But the result should be false or true,  for example,
> {code:java}
> /Array[1] <> Array[cast(1 as bigint)] -> false
> {code}
>  
> BTW, the Map and MultiSet types have the same issue. If so, I would be glad to 
> open other issues to track those.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32641) json format supports pojo type

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-32641:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> json format supports pojo type
> --
>
> Key: FLINK-32641
> URL: https://issues.apache.org/jira/browse/FLINK-32641
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Jacky Lau
>Priority: Major
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33273) CLONE - Stage source and binary releases on dist.apache.org

2023-10-13 Thread Jing Ge (Jira)
Jing Ge created FLINK-33273:
---

 Summary: CLONE - Stage source and binary releases on 
dist.apache.org
 Key: FLINK-33273
 URL: https://issues.apache.org/jira/browse/FLINK-33273
 Project: Flink
  Issue Type: Sub-task
Reporter: Jing Ge


Copy the source release to the dev repository of dist.apache.org:
# If you have not already, check out the Flink section of the dev repository on 
dist.apache.org via Subversion. In a fresh directory:
{code:bash}
$ svn checkout https://dist.apache.org/repos/dist/dev/flink --depth=immediates
{code}
# Make a directory for the new release and copy all the artifacts (Flink 
source/binary distributions, hashes, GPG signatures and the python 
subdirectory) into that newly created directory:
{code:bash}
$ mkdir flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
$ mv /tools/releasing/release/* 
flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
{code}
# Add and commit all the files.
{code:bash}
$ cd flink
flink $ svn add flink-${RELEASE_VERSION}-rc${RC_NUM}
flink $ svn commit -m "Add flink-${RELEASE_VERSION}-rc${RC_NUM}"
{code}
# Verify that files are present under 
[https://dist.apache.org/repos/dist/dev/flink|https://dist.apache.org/repos/dist/dev/flink].
# Push the release tag if not done already (the following command assumes it is 
called from within the apache/flink checkout):
{code:bash}
$ git push  refs/tags/release-${RELEASE_VERSION}-rc${RC_NUM}
{code}

 

h3. Expectations
 * Maven artifacts deployed to the staging repository of 
[repository.apache.org|https://repository.apache.org/content/repositories/]
 * Source distribution deployed to the dev repository of 
[dist.apache.org|https://dist.apache.org/repos/dist/dev/flink/]
 * Check hashes (e.g. shasum -c *.sha512)
 * Check signatures (e.g. {{{}gpg --verify 
flink-1.2.3-source-release.tar.gz.asc flink-1.2.3-source-release.tar.gz{}}})
 * {{grep}} for legal headers in each file.
 * If time allows check the NOTICE files of the modules whose dependencies have 
been changed in this release in advance, since the license issues from time to 
time pop up during voting. See [Verifying a Flink 
Release|https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release]
 "Checking License" section.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28650) Flink SQL Parsing bug for METADATA

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-28650:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Flink SQL Parsing bug for METADATA
> --
>
> Key: FLINK-28650
> URL: https://issues.apache.org/jira/browse/FLINK-28650
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.4
>Reporter: Jun Qin
>Priority: Major
> Fix For: 1.19.0
>
>
> With the following source/sink tables:
> {code:sql}
> CREATE TABLE sourceTable ( 
> `key` INT, 
> `time` TIMESTAMP(3),
> `value` STRING NOT NULL, 
> id INT 
> ) 
> WITH ( 
> 'connector' = 'datagen', 
> 'rows-per-second'='10', 
> 'fields.id.kind'='sequence', 
> 'fields.id.start'='1', 
> 'fields.id.end'='100' 
> );
> CREATE TABLE sinkTable1 ( 
> `time` TIMESTAMP(3) METADATA FROM 'timestamp', 
> `value` STRING NOT NULL
> ) 
> WITH ( 
>   'connector' = 'kafka',
> ...
> )
> CREATE TABLE sinkTable2 ( 
> `time` TIMESTAMP(3),-- without METADATA
> `value` STRING NOT NULL
> ) 
> WITH ( 
>   'connector' = 'kafka',
> ...
> )
> {code}
> the following three pass the validation:
> {code:sql}
> INSERT INTO sinkTable1
> SELECT 
> `time`, 
> `value`
> FROM sourceTable;
> INSERT INTO sinkTable2
> SELECT 
> `time`, 
> `value`
> FROM sourceTable;
> INSERT INTO sinkTable2 (`time`,`value`)
> SELECT 
> `time`, 
> `value`
> FROM sourceTable;
> {code}
> but this one does not:
> {code:sql}
> INSERT INTO sinkTable1 (`time`,`value`)
> SELECT 
> `time`, 
> `value`
> FROM sourceTable;
> {code}
> It failed with 
> {code:java}
> Unknown target column 'time'
> {code}
> It seems that when column names are provided in INSERT, the METADATA columns 
> have an undesired effect. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29330) Provide better logs of MiniCluster shutdown procedure

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-29330:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Provide better logs of MiniCluster shutdown procedure
> -
>
> Key: FLINK-29330
> URL: https://issues.apache.org/jira/browse/FLINK-29330
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.19.0
>
>
> I recently ran into an issue where the shutdown of a MiniCluster timed out. 
> The logs weren't helpful at all and I had to go in and check every 
> asynchronous component for whether _that_ component was the cause.
> The main issues were that various components don't log anything at all, or 
> that when they do it isn't clear who owns that component.
> I'd like to add a util that makes it easier for us to log the start/stop of a 
> shutdown procedure,
> {code:java}
> public class ShutdownLog {
> /**
>  * Logs the beginning and end of the shutdown procedure for the given 
> component.
>  *
>  * This method accepts a {@link Supplier} instead of a {@link 
>  * CompletableFuture} because the latter usually implies that the shutdown 
>  * has already begun.
>  *
>  * @param log Logger of owning component
>  * @param component component that will be shut down
>  * @param shutdownTrigger component shutdown trigger
>  * @return termination future of the component
>  */
> public static  CompletableFuture logShutdown(
> Logger log, String component, Supplier> 
> shutdownTrigger) {
> log.debug("Starting shutdown of {}.", component);
> return FutureUtils.logCompletion(log, "shutdown of " + component, 
> shutdownTrigger.get());
> }
> }
> public class FutureUtils {
> public static  CompletableFuture logCompletion(
> Logger log, String action, CompletableFuture future) {
> future.handle(
> (t, throwable) -> {
> if (throwable == null) {
> log.debug("Completed {}.", action);
> } else {
> log.debug("Failed {}.", action, throwable);
> }
> return null;
> });
> return future;
> }
> ...
> {code}
> and extend the AutoCloseableAsync interface for an easy opt-in and customized 
> logging:
> {code:java}
> default CompletableFuture closeAsync(Logger log) {
> return ShutdownLog.logShutdown(log, getClass().getSimpleName(), 
> this::closeAsync);
> }
> {code}
> MiniCluster example usages:
> {code:java}
> -terminationFutures.add(dispatcherResourceManagerComponent.closeAsync())
> +terminationFutures.add(dispatcherResourceManagerComponent.closeAsync(LOG))
> {code}
> {code:java}
> -return ExecutorUtils.nonBlockingShutdown(
> -executorShutdownTimeoutMillis, TimeUnit.MILLISECONDS, ioExecutor);
> +return ShutdownLog.logShutdown(
> +LOG,
> +"ioExecutor",
> +() ->
> +ExecutorUtils.nonBlockingShutdown(
> +executorShutdownTimeoutMillis,
> +TimeUnit.MILLISECONDS,
> +ioExecutor));
> {code}
> [~mapohl] I'm interested what you think about this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24636) Move cluster deletion operation cache into ResourceManager

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24636:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Move cluster deletion operation cache into ResourceManager
> --
>
> Key: FLINK-24636
> URL: https://issues.apache.org/jira/browse/FLINK-24636
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / REST
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-12273) The default value of CheckpointRetentionPolicy should be RETAIN_ON_FAILURE, otherwise it cannot be recovered according to checkpoint after failure.

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-12273:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> The default value of CheckpointRetentionPolicy should be RETAIN_ON_FAILURE, 
> otherwise it cannot be recovered according to checkpoint after failure.
> ---
>
> Key: FLINK-12273
> URL: https://issues.apache.org/jira/browse/FLINK-12273
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0
>Reporter: Mr.Nineteen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The default value of CheckpointRetentionPolicy should be RETAIN_ON_FAILURE; 
> otherwise the job cannot be recovered from a checkpoint after a failure.
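
As a point of reference, a minimal sketch (not from the ticket) of the explicit
opt-in that is currently required to retain checkpoints; the mapping of this
public setting onto the internal CheckpointRetentionPolicy is my reading of the
code, and the method shown is deprecated in newer releases:

{code:java}
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetainCheckpointsSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);
        // Without an explicit opt-in like this, the retention policy defaults to
        // NEVER_RETAIN_AFTER_TERMINATION, which is what the ticket argues against.
        env.getCheckpointConfig()
                .enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
    }
}
{code}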



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-13 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1358281363


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java:
##
@@ -124,23 +157,74 @@ private void handleDeploymentFailure(ExecutionVertex 
executionVertex, JobExcepti
 
 @Override
 public void onNewResourcesAvailable() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
 }
 
 @Override
 public void onNewResourceRequirements() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
+}
+
+/** Force rescaling as long as the target parallelism is different from 
the current one. */
+private void forceRescale() {
+if (context.shouldRescale(getExecutionGraph(), true)) {
+getLogger()
+.info(
+"Added resources are still there after {} 
time({}), force a rescale.",
+
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX.key(),
+scalingIntervalMax);
+context.goToRestarting(
+getExecutionGraph(),
+getExecutionGraphHandler(),
+getOperatorCoordinatorHandler(),
+Duration.ofMillis(0L),
+getFailures());
+}
 }
 
+/**
+ * Rescale the job if added resource meets {@link 
JobManagerOptions#MIN_PARALLELISM_INCREASE}.
+ * Otherwise, force a rescale after {@link 
JobManagerOptions#SCHEDULER_SCALING_INTERVAL_MAX} if
+ * the resource is still there.
+ */
 private void maybeRescale() {
-if (context.shouldRescale(getExecutionGraph())) {
-getLogger().info("Can change the parallelism of job. Restarting 
job.");
+rescaleScheduled = false;
+if (context.shouldRescale(
+getExecutionGraph(), false)) { // 
JobManagerOptions#MIN_PARALLELISM_INCREASE met

Review Comment:
   Totally agree, thanks for pointing out 
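
   As a side note, a rough sketch of how the cooldown-related knobs referenced in
this diff would be configured; the option keys are assumptions inferred from the
constants named above (SCHEDULER_SCALING_INTERVAL_MIN/MAX, MIN_PARALLELISM_INCREASE)
and should be checked against the final JobManagerOptions:

{code:java}
import org.apache.flink.configuration.Configuration;

public class AdaptiveSchedulerCooldownSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The cooldown only applies to the adaptive scheduler.
        conf.setString("jobmanager.scheduler", "adaptive");
        // Assumed keys for the options referenced in the diff.
        conf.setString("jobmanager.adaptive-scheduler.scaling-interval.min", "30 s");
        conf.setString("jobmanager.adaptive-scheduler.scaling-interval.max", "2 min");
        conf.setString("jobmanager.adaptive-scheduler.min-parallelism-increase", "2");
        System.out.println(conf);
    }
}
{code}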



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-19034) Remove deprecated StreamExecutionEnvironment#set/getNumberOfExecutionRetries

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-19034:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Remove deprecated StreamExecutionEnvironment#set/getNumberOfExecutionRetries
> 
>
> Key: FLINK-19034
> URL: https://issues.apache.org/jira/browse/FLINK-19034
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Dawid Wysakowicz
>Assignee: Daisy Tsang
>Priority: Major
>  Labels: auto-unassigned, stale-assigned
> Fix For: 1.19.0
>
>
> Remove deprecated 
> {code}
> StreamExecutionEnvironment#setNumberOfExecutionRetries/getNumberOfExecutionRetries
> {code}
> The corresponding settings in {{ExecutionConfig}} will be removed in a 
> separate issue, as they are {{Public}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32523) NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted fails with timeout on AZP

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-32523:

Fix Version/s: 1.19.0
   (was: 1.18.0)
   (was: 1.17.2)

> NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted fails with timeout 
> on AZP
> ---
>
> Key: FLINK-32523
> URL: https://issues.apache.org/jira/browse/FLINK-32523
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.2, 1.18.0, 1.17.1
>Reporter: Sergey Nuyanzin
>Assignee: Hangxiang Yu
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.19.0
>
> Attachments: failure.log
>
>
> This build
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50795=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=8638
>  fails with timeout
> {noformat}
> Jul 03 01:26:35 org.junit.runners.model.TestTimedOutException: test timed out 
> after 10 milliseconds
> Jul 03 01:26:35   at java.lang.Object.wait(Native Method)
> Jul 03 01:26:35   at java.lang.Object.wait(Object.java:502)
> Jul 03 01:26:35   at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:61)
> Jul 03 01:26:35   at 
> org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.verifyAllOperatorsNotifyAborted(NotifyCheckpointAbortedITCase.java:198)
> Jul 03 01:26:35   at 
> org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted(NotifyCheckpointAbortedITCase.java:189)
> Jul 03 01:26:35   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 03 01:26:35   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 03 01:26:35   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 03 01:26:35   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 03 01:26:35   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 03 01:26:35   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 03 01:26:35   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jul 03 01:26:35   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30292) Better support for conversion between DataType and TypeInformation

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-30292:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Better support for conversion between DataType and TypeInformation
> --
>
> Key: FLINK-30292
> URL: https://issues.apache.org/jira/browse/FLINK-30292
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.3
>Reporter: Yunfeng Zhou
>Priority: Major
> Fix For: 1.19.0
>
>
> In Flink 1.15, we have the following ways to convert a DataType to a 
> TypeInformation. Each of them has some disadvantages.
> * `TypeConversions.fromDataTypeToLegacyInfo`
> It might lead to precision loss for some data types like timestamp.
> It has been deprecated.
> * `ExternalTypeInfo.of`
> It cannot be used to get detailed type information like `RowTypeInfo`.
> It might bring some serialization overhead.
> Given that neither of the ways mentioned above is perfect, Flink SQL should 
> provide a better API to support DataType-TypeInformation conversions, and 
> thus better support Table-DataStream conversions.
>  
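
For reference, a small sketch of the two conversion paths described above as
they look in user code (class locations per my understanding of Flink 1.15, not
taken from the ticket):

{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.runtime.typeutils.ExternalTypeInfo;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.utils.TypeConversions;
import org.apache.flink.types.Row;

public class DataTypeConversionSketch {
    public static void main(String[] args) {
        DataType rowType =
                DataTypes.ROW(
                        DataTypes.FIELD("id", DataTypes.BIGINT()),
                        DataTypes.FIELD("ts", DataTypes.TIMESTAMP(3)));

        // Option 1: deprecated, and may lose precision for types such as TIMESTAMP.
        @SuppressWarnings("deprecation")
        TypeInformation<?> legacyInfo = TypeConversions.fromDataTypeToLegacyInfo(rowType);

        // Option 2: keeps the logical type intact, but yields an ExternalTypeInfo
        // rather than a structured RowTypeInfo, and adds serialization overhead.
        TypeInformation<Row> externalInfo = ExternalTypeInfo.of(rowType);

        System.out.println(legacyInfo);
        System.out.println(externalInfo);
    }
}
{code}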



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19613) Create flink-connector-files-test-utils for formats testing

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-19613:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Create flink-connector-files-test-utils for formats testing
> ---
>
> Key: FLINK-19613
> URL: https://issues.apache.org/jira/browse/FLINK-19613
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Tests
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> Since flink-connector-files has some tests with Scala dependencies, we cannot 
> create a test-jar for it.
> We should create a new module {{flink-connector-files-test-utils}}; it should 
> be a Scala-free module that formats can rely on for complete testing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-21940) Rowtime/proctime should be obtained from getTimestamp instead of getLong

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-21940:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Rowtime/proctime should be obtained from getTimestamp instead of getLong
> 
>
> Key: FLINK-21940
> URL: https://issues.apache.org/jira/browse/FLINK-21940
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26664) Improve Python exception messages to be more Pythonic in PyFlink

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-26664:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Improve Python exception messages to be more Pythonic in PyFlink
> 
>
> Key: FLINK-26664
> URL: https://issues.apache.org/jira/browse/FLINK-26664
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Major
> Fix For: 1.19.0
>
>
> This is an umbrella ticket for improving PyFlink exceptions. Currently some 
> PyFlink exceptions are confusing, which makes it difficult for users to figure 
> out the underlying problem from the exception on their own.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-20190) A New Window Trigger that can trigger window operation both by event time interval、event count for DataStream API

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-20190:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> A New Window Trigger that can trigger window operation both by event time 
> interval、event count for DataStream API
> -
>
> Key: FLINK-20190
> URL: https://issues.apache.org/jira/browse/FLINK-20190
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: GaryGao
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> In production environments, when we do window operations such as window 
> aggregation with the DataStream API, developers are often asked not only to 
> trigger the window operation when the watermark passes the max timestamp of 
> the window, but also to trigger it at a fixed event-time interval and after a 
> fixed count of events. The reason we want to do this is that we would like 
> frequently updated window results, rather than waiting a long time until the 
> watermark passes the max timestamp of the window. This is very useful in 
> reporting and other BI applications.
> For now the default triggers provided by Flink cannot meet this requirement, 
> so I developed a new trigger, CountAndContinuousEventTimeTrigger, which 
> combines ContinuousEventTimeTrigger with CountTrigger to do the above.
>  
> To use CountAndContinuousEventTimeTrigger, you should specify two parameters 
> as revealed in its constructor:
> {code:java}
> private CountAndContinuousEventTimeTrigger(Time interval, long maxCount);{code}
>  * Time interval: this trigger fires continuously based on the given 
> event-time interval, the same as ContinuousEventTimeTrigger.
>  * long maxCount: this trigger fires once the count of elements in a pane 
> reaches the given count, the same as CountTrigger. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29368) Modify DESCRIBE statement docs for new syntax

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-29368:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Modify DESCRIBE statement docs for new syntax
> -
>
> Key: FLINK-29368
> URL: https://issues.apache.org/jira/browse/FLINK-29368
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0
>Reporter: Yunhong Zheng
>Priority: Major
> Fix For: 1.19.0
>
>
> In Flink 1.17.0, the DESCRIBE statement syntax will be changed to DESCRIBE/DESC 
> [EXTENDED] [catalog_name.][database_name.]table_name 
> [PARTITION(partition_spec)] [col_name]. So the docs for this statement need to 
> be updated accordingly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29001) Migrate datadog reporter to v2 metric submission API

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-29001:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Migrate datadog reporter to v2 metric submission API
> 
>
> Key: FLINK-29001
> URL: https://issues.apache.org/jira/browse/FLINK-29001
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: Chesnay Schepler
>Priority: Minor
> Fix For: 1.19.0
>
>
> The current datadog API that we're using to submit metrics is deprecated.
> https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
> Changes that I found so far:
> * the {{host}} is part of the {{resources}} field
> * metric types are now mapped to numbers (0-3)
> * values must be doubles
> We can optionally look into leveraging the datadog java client; shouldn't be 
> difficult but from a quick experiment I couldn't figure out how to replicate 
> the proxy host/port functionality.
> https://github.com/DataDog/datadog-api-client-java
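
For orientation, a sketch of the v2 series payload shape implied by the notes
above; the field names follow the linked v2 docs, while the metric name,
timestamp, and values are purely illustrative:

{code:java}
public class DatadogV2PayloadSketch {
    public static void main(String[] args) {
        // One gauge data point in the v2 format: the host moves into "resources",
        // the metric type becomes a number (3 = gauge), and the value is a double.
        String payload =
                "{\"series\":[{"
                        + "\"metric\":\"flink.jobmanager.numRegisteredTaskManagers\","
                        + "\"type\":3,"
                        + "\"points\":[{\"timestamp\":1697184000,\"value\":4.0}],"
                        + "\"resources\":[{\"name\":\"jobmanager-host\",\"type\":\"host\"}]"
                        + "}]}";
        System.out.println(payload);
    }
}
{code}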



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22334) Fail to translate the hive-sql in STREAMING mode

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-22334:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Fail to translate the hive-sql in STREAMING mode
> 
>
> Key: FLINK-22334
> URL: https://issues.apache.org/jira/browse/FLINK-22334
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> Please run it in streaming mode.
> The failing statement:
> {code:java}
> // Some comments here
> insert into dest(y,x) select x,y from foo cluster by x
> {code}
> Exception stack:
> {code:java}
> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not 
> enough rules to produce a node with desired properties: convention=LOGICAL, 
> FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, 
> ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE].
> Missing conversion is LogicalDistribution[convention: NONE -> LOGICAL]
> There is 1 empty subset: rel#5176:RelSubset#43.LOGICAL.any.None: 
> 0.[NONE].[NONE], the relevant part of the original plan is as follows
> 5174:LogicalDistribution(collation=[[0 ASC-nulls-first]], dist=[[]])
>   5172:LogicalProject(subset=[rel#5173:RelSubset#42.NONE.any.None: 
> 0.[NONE].[NONE]], x=[$0])
> 5106:LogicalTableScan(subset=[rel#5171:RelSubset#41.NONE.any.None: 
> 0.[NONE].[NONE]], table=[[myhive, default, foo]])
> Root: rel#5176:RelSubset#43.LOGICAL.any.None: 0.[NONE].[NONE]
> Original rel:
> FlinkLogicalLegacySink(subset=[rel#4254:RelSubset#8.LOGICAL.any.None: 
> 0.[NONE].[NONE]], name=[collect], fields=[_o__c0]): rowcount = 1.0E8, 
> cumulative cost = {1.0E8 rows, 1.0E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, 
> id = 4276
>   FlinkLogicalCalc(subset=[rel#4275:RelSubset#7.LOGICAL.any.None: 
> 0.[NONE].[NONE]], select=[CASE(IS NULL($f1), 0:BIGINT, $f1) AS _o__c0]): 
> rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 0.0 io, 0.0 
> network, 0.0 memory}, id = 4288
> FlinkLogicalJoin(subset=[rel#4272:RelSubset#6.LOGICAL.any.None: 
> 0.[NONE].[NONE]], condition=[=($0, $1)], joinType=[left]): rowcount = 1.0E8, 
> cumulative cost = {1.0E8 rows, 1.0856463237676364E8 cpu, 4.0856463237676364E8 
> io, 0.0 network, 0.0 memory}, id = 4271
>   
> FlinkLogicalTableSourceScan(subset=[rel#4270:RelSubset#1.LOGICAL.any.None: 
> 0.[NONE].[NONE]], table=[[myhive, default, bar, project=[i]]], fields=[i]): 
> rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 4.0E8 io, 0.0 
> network, 0.0 memory}, id = 4279
>   FlinkLogicalAggregate(subset=[rel#4268:RelSubset#5.LOGICAL.any.None: 
> 0.[NONE].[NONE]], group=[{1}], agg#0=[COUNT($0)]): rowcount = 
> 8564632.376763644, cumulative cost = {9.0E7 rows, 1.89E8 cpu, 7.2E8 io, 0.0 
> network, 0.0 memory}, id = 4286
> FlinkLogicalCalc(subset=[rel#4283:RelSubset#3.LOGICAL.any.None: 
> 0.[NONE].[NONE]], select=[x, y], where=[IS NOT NULL(y)]): rowcount = 9.0E7, 
> cumulative cost = {9.0E7 rows, 0.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id 
> = 4282
>   
> FlinkLogicalTableSourceScan(subset=[rel#4262:RelSubset#2.LOGICAL.any.None: 
> 0.[NONE].[NONE]], table=[[myhive, default, foo]], fields=[x, y]): rowcount = 
> 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 8.0E8 io, 0.0 network, 0.0 
> memory}, id = 4261
> Sets:
> Set#41, type: RecordType(INTEGER x, INTEGER y)
>   rel#5171:RelSubset#41.NONE.any.None: 0.[NONE].[NONE], best=null
>   rel#5106:LogicalTableScan.NONE.any.None: 
> 0.[NONE].[NONE](table=[myhive, default, foo]), rowcount=1.0E8, cumulative 
> cost={inf}
>   rel#5179:RelSubset#41.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#5178
>   rel#5178:FlinkLogicalTableSourceScan.LOGICAL.any.None: 
> 0.[NONE].[NONE](table=[myhive, default, foo],fields=x, y), rowcount=1.0E8, 
> cumulative cost={1.0E8 rows, 1.0E8 cpu, 8.0E8 io, 0.0 network, 0.0 memory}
> Set#42, type: RecordType(INTEGER x)
>   rel#5173:RelSubset#42.NONE.any.None: 0.[NONE].[NONE], best=null
>   rel#5172:LogicalProject.NONE.any.None: 
> 0.[NONE].[NONE](input=RelSubset#5171,inputs=0), rowcount=1.0E8, cumulative 
> cost={inf}
>   rel#5180:LogicalTableScan.NONE.any.None: 
> 0.[NONE].[NONE](table=[myhive, default, foo, project=[x]]), rowcount=1.0E8, 
> cumulative cost={inf}
>   rel#5182:LogicalCalc.NONE.any.None: 
> 0.[NONE].[NONE](input=RelSubset#5171,expr#0..1={inputs},0=$t0), 
> rowcount=1.0E8, cumulative cost={inf}
>   rel#5184:RelSubset#42.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#5183
>   rel#5183:FlinkLogicalTableSourceScan.LOGICAL.any.None: 
> 0.[NONE].[NONE](table=[myhive, default, foo, 

Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-13 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1358306727


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java:
##
@@ -124,23 +157,74 @@ private void handleDeploymentFailure(ExecutionVertex 
executionVertex, JobExcepti
 
 @Override
 public void onNewResourcesAvailable() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
 }
 
 @Override
 public void onNewResourceRequirements() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
+}
+
+/** Force rescaling as long as the target parallelism is different from 
the current one. */
+private void forceRescale() {
+if (context.shouldRescale(getExecutionGraph(), true)) {
+getLogger()
+.info(
+"Added resources are still there after {} 
time({}), force a rescale.",
+
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX.key(),
+scalingIntervalMax);
+context.goToRestarting(
+getExecutionGraph(),
+getExecutionGraphHandler(),
+getOperatorCoordinatorHandler(),
+Duration.ofMillis(0L),
+getFailures());
+}
 }
 
+/**
+ * Rescale the job if added resource meets {@link 
JobManagerOptions#MIN_PARALLELISM_INCREASE}.
+ * Otherwise, force a rescale after {@link 
JobManagerOptions#SCHEDULER_SCALING_INTERVAL_MAX} if
+ * the resource is still there.
+ */
 private void maybeRescale() {
-if (context.shouldRescale(getExecutionGraph())) {
-getLogger().info("Can change the parallelism of job. Restarting 
job.");
+rescaleScheduled = false;
+if (context.shouldRescale(
+getExecutionGraph(), false)) { // 
JobManagerOptions#MIN_PARALLELISM_INCREASE met
+getLogger().info("Can change the parallelism of the job. 
Restarting the job.");
 context.goToRestarting(
 getExecutionGraph(),
 getExecutionGraphHandler(),
 getOperatorCoordinatorHandler(),
 Duration.ofMillis(0L),
 getFailures());
+} else if (scalingIntervalMax
+!= null) { // JobManagerOptions#MIN_PARALLELISM_INCREASE not 
met

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-21022) flink-connector-es add onSuccess handler after bulk process for sync success data to other third party system for data consistency checking

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-21022:

Fix Version/s: 1.19.0
   (was: 1.11.7)
   (was: 1.18.0)

> flink-connector-es add onSuccess handler after bulk process for sync success 
> data to other third party system for data consistency checking
> ---
>
> Key: FLINK-21022
> URL: https://issues.apache.org/jira/browse/FLINK-21022
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Zheng WEI
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.19.0
>
>
> flink-connector-es adds an onSuccess handler after a successful bulk process, 
> in order to sync the successful data to other third-party systems for data 
> consistency checking. By default the onSuccess implementation is a no-op; 
> users can set their own onSuccess handler when needed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-14148) Investigate pushing predicate/projection to underlying Hive input format

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-14148:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Investigate pushing predicate/projection to underlying Hive input format
> 
>
> Key: FLINK-14148
> URL: https://issues.apache.org/jira/browse/FLINK-14148
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23942) Python documentation redirects to shared pages should activate python tabs

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-23942:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Python documentation redirects to shared pages should activate python tabs
> --
>
> Key: FLINK-23942
> URL: https://issues.apache.org/jira/browse/FLINK-23942
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Documentation
>Reporter: Chesnay Schepler
>Priority: Minor
> Fix For: 1.19.0
>
>
> The Python documentation contains a few items that should just link to the 
> plain DataStream documentation, which contains tabs.
> Putting aside that the experience of switching between these places is quite 
> a jarring one, we should make sure that users following such a redirect 
> immediately see the Python version of the code samples.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-20518) WebUI should escape characters in metric names

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-20518:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> WebUI should escape characters in metric names
> --
>
> Key: FLINK-20518
> URL: https://issues.apache.org/jira/browse/FLINK-20518
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.0
>Reporter: Chesnay Schepler
>Assignee: tartarus
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-assigned
> Fix For: 1.19.0
>
>
> Metric names can contain characters like {{+}} or {{?}} that should be 
> escaped when querying metrics.
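
A generic illustration of the kind of escaping meant here, using plain java.net
URL encoding (not the WebUI's actual code; the metric name and REST path are
illustrative):

{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class MetricNameEscapingSketch {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // A metric name containing characters that are unsafe in a query string.
        String metricName = "numRecordsIn?lag+total";
        // Without escaping, '+' and '?' change the meaning of the request URL;
        // URL encoding turns them into %2B and %3F.
        String escaped = URLEncoder.encode(metricName, "UTF-8");
        System.out.println("/jobs/<job-id>/metrics?get=" + escaped);
    }
}
{code}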



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29370) Protobuf in flink-sql-protobuf is not shaded

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-29370:

Fix Version/s: 1.19.0
   (was: 1.18.0)
   (was: 1.16.3)

> Protobuf in flink-sql-protobuf is not shaded
> 
>
> Key: FLINK-29370
> URL: https://issues.apache.org/jira/browse/FLINK-29370
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.19.0
>
>
> The protobuf classes in flink-sql-protobuf is not shaded which may lead to 
> class conflicts. Usually, sql jars should shade common used dependencies, 
> e.g. flink-sql-avro: 
> https://github.com/apache/flink/blob/master/flink-formats/flink-sql-avro/pom.xml#L88
>  
> {code}
> ➜  Downloads jar tvf flink-sql-protobuf-1.16.0.jar | grep com.google
>  0 Tue Sep 13 20:23:44 CST 2022 com/google/
>  0 Tue Sep 13 20:23:44 CST 2022 com/google/protobuf/
>568 Tue Sep 13 20:23:44 CST 2022 
> com/google/protobuf/ProtobufInternalUtils.class
>  19218 Tue Sep 13 20:23:44 CST 2022 
> com/google/protobuf/AbstractMessage$Builder.class
>259 Tue Sep 13 20:23:44 CST 2022 
> com/google/protobuf/AbstractMessage$BuilderParent.class
>  10167 Tue Sep 13 20:23:44 CST 2022 com/google/protobuf/AbstractMessage.class
>   1486 Tue Sep 13 20:23:44 CST 2022 
> com/google/protobuf/AbstractMessageLite$Builder$LimitedInputStream.class
>  12399 Tue Sep 13 20:23:44 CST 2022 
> com/google/protobuf/AbstractMessageLite$Builder.class
>279 Tue Sep 13 20:23:44 CST 2022 
> com/google/protobuf/AbstractMessageLite$InternalOneOfEnu
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19819) SourceReaderBase supports limit push down

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-19819:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> SourceReaderBase supports limit push down
> -
>
> Key: FLINK-19819
> URL: https://issues.apache.org/jira/browse/FLINK-19819
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> User requirement:
> Users need to look at a few random rows of a table to see what the data looks like, 
> so they often run the SQL:
> "select * from table limit 10"
> For a large table they expect this to finish quickly, because only a few rows are 
> queried.
> DataStream and BoundedStream are push-based execution models, so the downstream 
> cannot control when the source operator ends.
> We need to push the limit down to the source operator so that the source operator can 
> end early.
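> A connector-agnostic sketch (class and method names are made up, not Flink API) of
> the behaviour asked for here: once the pushed-down limit is reached, the reader
> reports end of input so the source can finish early:
> {code:java}
> import java.util.Iterator;
> import java.util.List;
> 
> /** Hypothetical reader that honours a limit pushed down from "LIMIT n". */
> final class LimitedReader<T> {
>     private final Iterator<T> records;
>     private final long limit;
>     private long emitted;
> 
>     LimitedReader(List<T> records, long limit) {
>         this.records = records.iterator();
>         this.limit = limit;
>     }
> 
>     /** Returns null once the limit is hit, signalling end of input. */
>     T pollNext() {
>         if (emitted >= limit || !records.hasNext()) {
>             return null; // end early instead of scanning the whole table
>         }
>         emitted++;
>         return records.next();
>     }
> }
> {code}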



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-14122) Extend State Processor API to read ListCheckpointed operator state

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-14122:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Extend State Processor API to read ListCheckpointed operator state
> --
>
> Key: FLINK-14122
> URL: https://issues.apache.org/jira/browse/FLINK-14122
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Affects Versions: 1.10.0
>Reporter: Seth Wiesman
>Priority: Major
>  Labels: usability
> Fix For: 1.19.0
>
>
> The state processor api cannot  read operator state using the 
> ListCheckpointed interface because it requires access the JavaSerializer 
> which is package private. Instead of making that class public, we should 
> offer a readListCheckpointed Method to easily read this state.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25244) Deprecate Java 8 support

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-25244:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Deprecate Java 8 support
> 
>
> Key: FLINK-25244
> URL: https://issues.apache.org/jira/browse/FLINK-25244
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30166) Refactor tests that use the deprecated StreamingFileSink instead of FileSink

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-30166:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Refactor tests that use the deprecated StreamingFileSink instead of FileSink
> 
>
> Key: FLINK-30166
> URL: https://issues.apache.org/jira/browse/FLINK-30166
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30379) Decouple connector docs integration from connector releases

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-30379:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Decouple connector docs integration from connector releases
> ---
>
> Key: FLINK-30379
> URL: https://issues.apache.org/jira/browse/FLINK-30379
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Common, Documentation
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.19.0
>
>
> The Flink docs currently integrate the docs of a specific connector release.
> This implies that we need to update this integration for every single 
> connector release, and can't hotfix the connector docs as we do for Flink 
> releases.
> We should consider having tag in the connector repos for minor Flink versions 
> (e.g, Flink-1.16) such that Flink doesn't have to select a specific connector 
> version but just uses "whatever works with 1.16".
> This does imply that we'd change the tag now and then, which isn't perfect 
> but imo worth it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-27809) Clarify that cluster-id is mandatory for Kubernetes HA in standalone mode

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-27809:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Clarify that cluster-id is mandatory for Kubernetes HA in standalone mode
> -
>
> Key: FLINK-27809
> URL: https://issues.apache.org/jira/browse/FLINK-27809
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Runtime / Configuration
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.19.0
>
>
> The description for KubernetesConfigOptions#CLUSTER_ID states that the client 
> generates this automatically if it isn't set. This is technically correct, 
> because the client is not involved in the deployment for standalone clusters, 
> but to users this sounds like it is optional in general, while it must be set 
> (to an ideally unique value) in standalone mode.
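> A minimal sketch (not part of the original description) of what "must be set" means
> in practice for standalone mode; the cluster-id value is just an example:
> {code:java}
> import org.apache.flink.configuration.Configuration;
> 
> public class StandaloneHaClusterId {
>     public static void main(String[] args) {
>         Configuration conf = new Configuration();
>         // No client is involved in standalone deployments, so nothing generates
>         // this value automatically -- it has to be set to a unique id explicitly.
>         conf.setString("kubernetes.cluster-id", "my-standalone-ha-cluster");
>         conf.setString("high-availability", "kubernetes");
>         System.out.println(conf);
>     }
> }
> {code}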



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24757) Yarn application is not terminated after the job finishes when submitting a yarn-per-job insert job with SQL client

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24757:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Yarn application is not terminated after the job finishes when submitting a 
> yarn-per-job insert job with SQL client
> ---
>
> Key: FLINK-24757
> URL: https://issues.apache.org/jira/browse/FLINK-24757
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.14.0
>Reporter: Caizhi Weng
>Priority: Major
> Fix For: 1.19.0
>
>
> I've seen this problem about three times in the user mailing list (previously I 
> suspected that the users were specifying the wrong {{execution.target}}), until I 
> bumped into it myself. I submitted a yarn-per-job batch insert SQL with the Flink SQL 
> client, and after the job finished the YARN application was not terminated.
> This happens because the YARN job cluster uses a {{MiniDispatcher}}, which terminates 
> immediately only in detached execution mode. That execution mode is (through some 
> function calls) derived from {{DeploymentOptions#ATTACHED}}, which is true by default 
> when jobs are submitted from the SQL client.
> When submitting an insert job, the SQL client does not wait for the job to finish; it 
> only reports the job id. So I think it is reasonable to use detached mode for every 
> insert job.
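> A minimal sketch (an assumption about the fix, not a decision made in this ticket) of
> what setting detached mode for an insert job amounts to on the configuration level,
> using only the documented {{execution.attached}} option:
> {code:java}
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.configuration.DeploymentOptions;
> 
> public class DetachedInsertJob {
>     public static void main(String[] args) {
>         Configuration conf = new Configuration();
>         // false == detached: the yarn-per-job MiniDispatcher can then shut the
>         // cluster down as soon as the job reaches a terminal state.
>         conf.set(DeploymentOptions.ATTACHED, false);
>         System.out.println(conf.get(DeploymentOptions.ATTACHED));
>     }
> }
> {code}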



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33271) CLONE - Build Release Candidate: 1.18.0-rc2

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33271:

Description: 
The core of the release process is the build-vote-fix cycle. Each cycle 
produces one release candidate. The Release Manager repeats this cycle until 
the community approves one release candidate, which is then finalized.
h4. Prerequisites

Set up a few environment variables to simplify Maven commands that follow. This 
identifies the release candidate being built. Start with {{RC_NUM}} equal to 1 
and increment it for each candidate:
{code:java}
RC_NUM="2"
TAG="release-${RELEASE_VERSION}-rc${RC_NUM}"
{code}

  was:
The core of the release process is the build-vote-fix cycle. Each cycle 
produces one release candidate. The Release Manager repeats this cycle until 
the community approves one release candidate, which is then finalized.

h4. Prerequisites
Set up a few environment variables to simplify Maven commands that follow. This 
identifies the release candidate being built. Start with {{RC_NUM}} equal to 1 
and increment it for each candidate:
{code}
RC_NUM="1"
TAG="release-${RELEASE_VERSION}-rc${RC_NUM}"
{code}


> CLONE - Build Release Candidate: 1.18.0-rc2
> ---
>
> Key: FLINK-33271
> URL: https://issues.apache.org/jira/browse/FLINK-33271
> Project: Flink
>  Issue Type: New Feature
>Affects Versions: 1.18.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> The core of the release process is the build-vote-fix cycle. Each cycle 
> produces one release candidate. The Release Manager repeats this cycle until 
> the community approves one release candidate, which is then finalized.
> h4. Prerequisites
> Set up a few environment variables to simplify Maven commands that follow. 
> This identifies the release candidate being built. Start with {{RC_NUM}} 
> equal to 1 and increment it for each candidate:
> {code:java}
> RC_NUM="2"
> TAG="release-${RELEASE_VERSION}-rc${RC_NUM}"
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-12590) Replace http links in documentation

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-12590:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Replace http links in documentation
> ---
>
> Key: FLINK-12590
> URL: https://issues.apache.org/jira/browse/FLINK-12590
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.7.2, 1.8.0, 1.9.0
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.19.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-13 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1358317366


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java:
##
@@ -67,13 +77,36 @@ class Executing extends StateWithExecutionGraph implements 
ResourceListener {
 this.context = context;
 Preconditions.checkState(
 executionGraph.getState() == JobStatus.RUNNING, "Assuming 
running execution graph");
+this.scalingIntervalMin = scalingIntervalMin;
+this.scalingIntervalMax = scalingIntervalMax;
+this.lastRescale =
+Instant.now(); // Executing is recreated with each restart 
(when we rescale)
+// we consider the first execution of the pipeline as a rescale event
+Preconditions.checkState(
+!scalingIntervalMin.isNegative(),
+"%s must be positive integer or 0",
+JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MIN.key());
+if (scalingIntervalMax != null) {
+Preconditions.checkState(
+scalingIntervalMax.compareTo(scalingIntervalMin) > 0,
+"%s(%d) must be greater than %s(%d)",
+JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX.key(),
+scalingIntervalMax,
+JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MIN.key(),
+scalingIntervalMin);
+}
 
 deploy();
 
 // check if new resources have come available in the meantime
 context.runIfState(this, this::maybeRescale, Duration.ZERO);
 }
 
+@VisibleForTesting
+void setLastRescale(Instant lastRescale) {

Review Comment:
   No because `lastRescale` is never passed from the outside of the Executing 
object in production code: it is only initialized at the creation of Executing. 
`setLastRescale` is only used in test code. Changing a production constructor 
just for a test case is a bad design.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28929) Add built-in datediff function.

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-28929:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Add built-in datediff function.
> ---
>
> Key: FLINK-28929
> URL: https://issues.apache.org/jira/browse/FLINK-28929
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.17.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> Syntax:
> {code:java}
> DATEDIFF(expr1,expr2){code}
> Returns:
> returns _{{expr1}}_ − _{{expr2}}_ expressed as a value in days from one date 
> to the other. _{{expr1}}_ and _{{expr2}}_ are date or date-and-time 
> expressions. Only the date parts of the values are used in the calculation.
> This function returns {{NULL}} if _{{expr1}}_ or _{{expr2}}_ is {{{}NULL{}}}.
> Examples:
> {code:java}
> > SELECT DATEDIFF('2007-12-31 23:59:59','2007-12-30');
> -> 1
> > SELECT DATEDIFF('2010-11-30 23:59:59','2010-12-31');
> -> -31{code}
> See more:
>  * mysql: 
> [https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_datediff]
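> A short sketch of the expected usage from Flink SQL once the proposal lands; the
> function does not exist yet, so this only illustrates the intended semantics:
> {code:java}
> import org.apache.flink.table.api.EnvironmentSettings;
> import org.apache.flink.table.api.TableEnvironment;
> 
> public class DateDiffUsage {
>     public static void main(String[] args) {
>         TableEnvironment tEnv =
>                 TableEnvironment.create(EnvironmentSettings.inStreamingMode());
>         // Expected to print 1, mirroring the MySQL semantics quoted above.
>         tEnv.executeSql(
>                         "SELECT DATEDIFF(TIMESTAMP '2007-12-31 23:59:59', DATE '2007-12-30')")
>                 .print();
>     }
> }
> {code}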



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-32921) Prepare Flink 1.18 Release

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-32921.
-
Resolution: Fixed

> Prepare Flink 1.18 Release
> --
>
> Key: FLINK-32921
> URL: https://issues.apache.org/jira/browse/FLINK-32921
> Project: Flink
>  Issue Type: New Feature
>  Components: Release System
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Jing Ge
>Priority: Major
>
> This umbrella issue is meant as a test balloon for moving the [release 
> documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
>  into Jira.
> h3. Prerequisites
> h4. Environment Variables
> Commands in the subtasks might expect some of the following environment 
> variables to be set according to the version that is about to be released:
> {code:bash}
> RELEASE_VERSION="1.5.0"
> SHORT_RELEASE_VERSION="1.5"
> CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
> NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
> SHORT_NEXT_SNAPSHOT_VERSION="1.6"
> {code}
> h4. Build Tools
> All of the following steps require Maven 3.8.6 and Java 8. Modify your 
> PATH environment variable accordingly if needed.
> h4. Flink Source
>  * Create a new directory for this release and clone the Flink repository 
> from Github to ensure you have a clean workspace (this step is optional).
>  * Run {{mvn -Prelease clean install}} to ensure that the build processes 
> that are specific to that profile are in good shape (this step is optional).
> The rest of these instructions assumes that commands are run in the root (or 
> {{./tools}} directory) of a repository checkout on the branch of the release version, 
> with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24192) Sql get plan failed. All the inputs have relevant nodes, however the cost is still infinite

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24192:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Sql get plan failed. All the inputs have relevant nodes, however the cost is 
> still infinite
> ---
>
> Key: FLINK-24192
> URL: https://issues.apache.org/jira/browse/FLINK-24192
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.19.0
>
>
> *sql*
> {code:java}
> CREATE TABLE database5_t0(
> `c0` FLOAT , `c1` FLOAT , `c2` CHAR
> ) WITH (
>  'connector' = 'filesystem',
>  'format' = 'testcsv',
>  'path' = '$resultPath00'
> )
> CREATE TABLE database5_t1(
> `c0` TINYINT , `c1` INTEGER
> ) WITH (
>  'connector' = 'filesystem',
>  'format' = 'testcsv',
>  'path' = '$resultPath11'
> )
> CREATE TABLE database5_t2 (
>   `c0` FLOAT
> ) WITH (
>   'connector' = 'filesystem',
>   'format' = 'testcsv',
>   'path' = '$resultPath33'
> )
> CREATE TABLE database5_t3 (
>   `c0` STRING , `c1` STRING
> ) WITH (
>   'connector' = 'filesystem',
>   'format' = 'testcsv',
>   'path' = '$resultPath33'
> )
> INSERT INTO database5_t0(c0, c1, c2) VALUES(cast(0.84355265 as FLOAT), 
> cast(0.3269016 as FLOAT), cast('' as CHAR))
> INSERT INTO database5_t1(c0, c1) VALUES(cast(-125 as TINYINT), -1715936454)
> INSERT INTO database5_t2(c0) VALUES(cast(-1.7159365 as FLOAT))
> INSERT INTO database5_t3(c0, c1) VALUES('16:36:29', '1969-12-12')
> INSERT INTO MySink
> SELECT COUNT(ref0) from (SELECT COUNT(1) AS ref0 FROM database5_t0, 
> database5_t3, database5_t1, database5_t2 WHERE CAST ( EXISTS (SELECT 1) AS 
> BOOLEAN)
> UNION ALL
> SELECT COUNT(1) AS ref0 FROM database5_t0, database5_t3, database5_t1, 
> database5_t2
> WHERE CAST ((NOT CAST (( EXISTS (SELECT 1)) AS BOOLEAN)) AS BOOLEAN)
> UNION ALL
> SELECT COUNT(1) AS ref0 FROM database5_t0, database5_t3, database5_t1, 
> database5_t2 WHERE CAST ((CAST ( EXISTS (SELECT 1) AS BOOLEAN)) IS NULL AS 
> BOOLEAN)) as table1
> {code}
> After executing the SQL in the IT case, we get an error like this:
> {code:java}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: 
> FlinkLogicalSink(table=[default_catalog.default_database.MySink], fields=[a])
> +- FlinkLogicalCalc(select=[CAST(EXPR$0) AS a])
>+- FlinkLogicalAggregate(group=[{}], EXPR$0=[COUNT()])
>   +- FlinkLogicalUnion(all=[true])
>  :- FlinkLogicalUnion(all=[true])
>  :  :- FlinkLogicalCalc(select=[0 AS $f0])
>  :  :  +- FlinkLogicalAggregate(group=[{}], ref0=[COUNT()])
>  :  : +- FlinkLogicalJoin(condition=[$1], joinType=[semi])
>  :  ::- FlinkLogicalCalc(select=[c0])
>  :  ::  +- FlinkLogicalJoin(condition=[true], 
> joinType=[inner])
>  :  :: :- FlinkLogicalCalc(select=[c0])
>  :  :: :  +- FlinkLogicalJoin(condition=[true], 
> joinType=[inner])
>  :  :: : :- FlinkLogicalCalc(select=[c0])
>  :  :: : :  +- FlinkLogicalJoin(condition=[true], 
> joinType=[inner])
>  :  :: : : :- 
> FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
> database5_t0, project=[c0]]], fields=[c0])
>  :  :: : : +- 
> FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
> database5_t3, project=[c0]]], fields=[c0])
>  :  :: : +- 
> FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
> database5_t1, project=[c0]]], fields=[c0])
>  :  :: +- 
> FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
> database5_t2]], fields=[c0])
>  :  :+- FlinkLogicalCalc(select=[IS NOT NULL(m) AS $f0])
>  :  :   +- FlinkLogicalAggregate(group=[{}], m=[MIN($0)])
>  :  :  +- FlinkLogicalCalc(select=[true AS i])
>  :  : +- FlinkLogicalValues(tuples=[[{ 0 }]])
>  :  +- FlinkLogicalCalc(select=[0 AS $f0])
>  : +- FlinkLogicalAggregate(group=[{}], ref0=[COUNT()])
>  :+- FlinkLogicalJoin(condition=[$1], joinType=[anti])
>  :   :- FlinkLogicalCalc(select=[c0])
>  :   :  +- FlinkLogicalJoin(condition=[true], 
> joinType=[inner])
>  :   : :- FlinkLogicalCalc(select=[c0])
>  :   : :  +- FlinkLogicalJoin(condition=[true], 
> joinType=[inner])
>  :   : : :- FlinkLogicalCalc(select=[c0])
>  :   : : :  +- FlinkLogicalJoin(condition=[true], 
> 

[jira] [Updated] (FLINK-15860) Store temporary functions as CatalogFunctions in FunctionCatalog

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-15860:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Store temporary functions as CatalogFunctions in FunctionCatalog
> 
>
> Key: FLINK-15860
> URL: https://issues.apache.org/jira/browse/FLINK-15860
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> We should change the {{FunctionCatalog}} so that it stores temporary 
> functions as {{CatalogFunction}}s instead of instances of 
> {{FunctionDefinition}} the same way we store {{CatalogTable}}s for temporary 
> tables.
> For functions that were registered with their instance we should create a 
> {{CatalogFunction}} wrapper similar to {{ConnectorCatalogTable}}.
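> A rough sketch of the kind of wrapper meant here; the class name is made up and the
> real implementation would implement {{CatalogFunction}} (omitted to keep it short):
> {code:java}
> import org.apache.flink.table.functions.FunctionDefinition;
> 
> /** Hypothetical wrapper so a function registered from an instance can be stored like a catalog object. */
> final class InlineCatalogFunction {
>     private final FunctionDefinition definition;
> 
>     InlineCatalogFunction(FunctionDefinition definition) {
>         this.definition = definition;
>     }
> 
>     FunctionDefinition getDefinition() {
>         return definition;
>     }
> }
> {code}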



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24343) Revisit Scheduler and Coordinator Startup Procedure

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24343:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Revisit Scheduler and Coordinator Startup Procedure
> ---
>
> Key: FLINK-24343
> URL: https://issues.apache.org/jira/browse/FLINK-24343
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Stephan Ewen
>Priority: Major
> Fix For: 1.19.0
>
>
> We need to re-examine the startup procedure of the scheduler, and how it 
> interacts with the startup of the operator coordinators.
> We need to make sure the following conditions are met:
>   - The Operator Coordinators are started before the first action happens 
> that they need to be informed of. That includes a task being ready, a 
> checkpoint happening, etc.
>   - The scheduler must be started to the point that it can handle 
> "failGlobal()" calls, because the coordinators might trigger that during 
> their startup when an exception in "start()" occurs.
> /cc [~chesnay]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33171][table planner] Table SQL support Not Equal for TimePoint type and TimeString [flink]

2023-10-13 Thread via GitHub


lincoln-lil commented on code in PR #23478:
URL: https://github.com/apache/flink/pull/23478#discussion_r1358298316


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala:
##
@@ -488,6 +488,18 @@ object ScalarOperatorGens {
 else if (isNumeric(left.resultType) && isNumeric(right.resultType)) {
   generateComparison(ctx, "!=", left, right, resultType)
 }
+// support date/time/timestamp not equalTo string.
+else if (
+  (isTimePoint(left.resultType) && isCharacterString(right.resultType)) ||

Review Comment:
   In order not to lose the simplicity of the operators and the null handling, 
I've roughly modified a version that can be found below
   
   ```scala
 
 private def wrapExpressionIfNonEq(
 isNonEq: Boolean,
 equalsExpr: GeneratedExpression,
 resultType: LogicalType): GeneratedExpression = {
   if (isNonEq) {
 GeneratedExpression(
   s"(!${equalsExpr.resultTerm})",
   equalsExpr.nullTerm,
   equalsExpr.code,
   resultType)
   } else {
 equalsExpr
   }
 }
   
 private def generateEqualAndNonEqual(
 ctx: CodeGeneratorContext,
 left: GeneratedExpression,
 right: GeneratedExpression,
 operator: String,
 resultType: LogicalType): GeneratedExpression = {
   
   checkImplicitConversionValidity(left, right)
   
   val nonEq = operator match {
 case "==" => false
 case "!=" => true
 case _ => throw new CodeGenException(s"Unsupported boolean comparison 
'$operator'.")
   }
   val canEqual = isInteroperable(left.resultType, right.resultType)
   
   if (isCharacterString(left.resultType) && 
isCharacterString(right.resultType)) {
 generateOperatorIfNotNull(ctx, resultType, left, right)(
   (leftTerm, rightTerm) => s"${if (nonEq) "!" else 
""}$leftTerm.equals($rightTerm)")
   }
   // numeric types
   else if (isNumeric(left.resultType) && isNumeric(right.resultType)) {
 generateComparison(ctx, operator, left, right, resultType)
   }
   // array types
   else if (isArray(left.resultType) && canEqual) {
 wrapExpressionIfNonEq(
   nonEq,
   generateArrayComparison(ctx, left, right, resultType),
   resultType)
   }
   // map types
   else if (isMap(left.resultType) && canEqual) {
 val mapType = left.resultType.asInstanceOf[MapType]
 wrapExpressionIfNonEq(
   nonEq,
   generateMapComparison(
 ctx,
 left,
 right,
 mapType.getKeyType,
 mapType.getValueType,
 resultType),
   resultType)
   }
   // multiset types
   else if (isMultiset(left.resultType) && canEqual) {
 val multisetType = left.resultType.asInstanceOf[MultisetType]
 wrapExpressionIfNonEq(
   nonEq,
   generateMapComparison(
 ctx,
 left,
 right,
 multisetType.getElementType,
 new IntType(false),
 resultType),
   resultType)
   }
   // comparable types of same type
   else if (isComparable(left.resultType) && canEqual) {
 generateComparison(ctx, operator, left, right, resultType)
   }
   // generic types of same type
   else if (isRaw(left.resultType) && canEqual) {
 val Seq(resultTerm, nullTerm) = newNames("result", "isNull")
 val genericSer = ctx.addReusableTypeSerializer(left.resultType)
 val ser = s"$genericSer.getInnerSerializer()"
 val code =
   s"""
  |${left.code}
  |${right.code}
  |boolean $nullTerm = ${left.nullTerm}|| ${right.nullTerm};
  |boolean $resultTerm = ${primitiveDefaultValue(resultType)};
  |if (!$nullTerm) {
  |  ${left.resultTerm}.ensureMaterialized($ser);
  |  ${right.resultTerm}.ensureMaterialized($ser);
  |  $resultTerm =
  |${if (nonEq) "!" else 
""}${left.resultTerm}.getBinarySection().
  |  equals(${right.resultTerm}.getBinarySection());
  |}
  |""".stripMargin
 GeneratedExpression(resultTerm, nullTerm, code, resultType)
   }
   // support date/time/timestamp equalTo string.
   // for performance, we cast literal string to literal time.
   else if (isTimePoint(left.resultType) && 
isCharacterString(right.resultType)) {
 if (right.literal) {
   generateEqualAndNonEqual(
 ctx,
 left,
 generateCastLiteral(ctx, right, left.resultType),
 operator,
 resultType)
 } else {
   generateEqualAndNonEqual(
 ctx,
 left,
 generateCast(ctx, right, left.resultType, 

[jira] [Created] (FLINK-33271) CLONE - Build Release Candidate: 1.18.0-rc2

2023-10-13 Thread Jing Ge (Jira)
Jing Ge created FLINK-33271:
---

 Summary: CLONE - Build Release Candidate: 1.18.0-rc2
 Key: FLINK-33271
 URL: https://issues.apache.org/jira/browse/FLINK-33271
 Project: Flink
  Issue Type: New Feature
Affects Versions: 1.18.0
Reporter: Jing Ge
Assignee: Jing Ge


The core of the release process is the build-vote-fix cycle. Each cycle 
produces one release candidate. The Release Manager repeats this cycle until 
the community approves one release candidate, which is then finalized.

h4. Prerequisites
Set up a few environment variables to simplify Maven commands that follow. This 
identifies the release candidate being built. Start with {{RC_NUM}} equal to 1 
and increment it for each candidate:
{code}
RC_NUM="1"
TAG="release-${RELEASE_VERSION}-rc${RC_NUM}"
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-27268) build sql query error in JdbcDynamicTableSource

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-27268:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> build sql query error in JdbcDynamicTableSource
> ---
>
> Key: FLINK-27268
> URL: https://issues.apache.org/jira/browse/FLINK-27268
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: chouc
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> h1. Condition
> Building the SQL query fails in JdbcDynamicTableSource.
>  
> {code:java}
> // code placeholder
>         StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>         StreamTableEnvironment tenv = StreamTableEnvironment.create(env);
>         String createMysqlTableMapping = "CREATE TABLE table(\n" +
>             "id int \n" +
>             " )  WITH (\n" +
>             "    'connector' = 'jdbc',\n" +
>             "    'url' = 'jdbc:mysql://s1:3306/db',\n" +
>             "    'username' = '',\n" +
>             "    'password' = '',\n" +
>             "    'table-name' = 'table_name'" +
>             ")\n";        String countSql = "select count(1) from 
> t_ds_task_instance";
>         tenv.executeSql(createMysqlTableMapping).print();
>         tenv.executeSql(countSql).print(); {code}
> h1. ERROR
> {code:java}
> // code placeholder
> Caused by: java.lang.IllegalArgumentException: open() failed.You have an 
> error in your SQL syntax; check the manual that corresponds to your MySQL 
> server version for the right syntax to use near 'FROM `table`' at line 1
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat.open(JdbcRowDataInputFormat.java:207)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:84)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> Caused by: java.sql.SQLSyntaxErrorException: You have an error in your SQL 
> syntax; check the manual that corresponds to your MySQL server version for 
> the right syntax to use near 'FROM `table`' at line 1
>   at 
> com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
>   at 
> com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
>   at 
> com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953)
>   at 
> com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:1009)
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat.open(JdbcRowDataInputFormat.java:204)
>   ... 4 more {code}
>  
> h1. Reason
> Constants cannot be pushed down into the generated JDBC query as columns. When a user 
> selects only constants from a table, the projection becomes empty and the generated 
> SQL statement is malformed.
>  
>  
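> A small illustration (not the actual connector code) of how an empty projection leads
> to the syntax error above; the quoting style is only an assumption:
> {code:java}
> public class EmptyProjectionIllustration {
>     static String buildSelect(String[] projectedFields, String tableName) {
>         // If every selected expression is a constant, no physical column survives
>         // projection push-down and the select list ends up empty.
>         return "SELECT " + String.join(", ", projectedFields) + " FROM `" + tableName + "`";
>     }
> 
>     public static void main(String[] args) {
>         // Prints "SELECT  FROM `t_ds_task_instance`" -- the malformed statement.
>         System.out.println(buildSelect(new String[0], "t_ds_task_instance"));
>     }
> }
> {code}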



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30289) RateLimitedSourceReader uses wrong signal for checkpoint rate-limiting

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-30289:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> RateLimitedSourceReader uses wrong signal for checkpoint rate-limiting
> --
>
> Key: FLINK-30289
> URL: https://issues.apache.org/jira/browse/FLINK-30289
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.17.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> The checkpoint rate limiter is notified when the checkpoint is complete, but 
> since this signal comes at some point in the future (or not at all) it can 
> result in no records being emitted for a checkpoint, or more records than 
> expected being emitted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24951) Allow watch bookmarks to mitigate frequent watcher rebuilding

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24951:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Allow watch bookmarks to mitigate frequent watcher rebuilding
> -
>
> Key: FLINK-24951
> URL: https://issues.apache.org/jira/browse/FLINK-24951
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.15.0
>Reporter: guoyangze#1
>Priority: Major
> Fix For: 1.19.0
>
>
> In some production environments, large numbers of pods are created and deleted, so 
> the global resource version advances very quickly and may cause frequent watcher 
> rebuilding because of "too old resource version" errors. To avoid this, Kubernetes 
> provides a bookmark mechanism [1].
> I propose enabling bookmarks by default.
> [1] 
> https://kubernetes.io/docs/reference/using-api/api-concepts/#watch-bookmarks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31116) Support taskmanager related parameters in session mode Support job granularity setting

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-31116:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Support taskmanager related parameters in session mode Support job 
> granularity setting
> --
>
> Key: FLINK-31116
> URL: https://issues.apache.org/jira/browse/FLINK-31116
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Task
>Affects Versions: 1.16.1
>Reporter: waywtdcc
>Priority: Major
> Fix For: 1.19.0
>
>
> In session mode, TaskManager-related parameters should be configurable at job 
> granularity.
> For example, if the YARN session is started with taskmanager.numberOfTaskSlots=2, 
> most jobs can run with that setting, but occasionally, when submitting a particular 
> job, I want the TaskManager to use taskmanager.numberOfTaskSlots=1.
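> A minimal sketch of the per-job override being asked for; whether the session honours
> it is exactly what this ticket proposes, so this is illustrative only:
> {code:java}
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.configuration.TaskManagerOptions;
> 
> public class PerJobSlots {
>     public static void main(String[] args) {
>         Configuration jobConf = new Configuration();
>         // Desired: this job runs with 1 slot per TaskManager although the
>         // session default is taskmanager.numberOfTaskSlots=2.
>         jobConf.set(TaskManagerOptions.NUM_TASK_SLOTS, 1);
>         System.out.println(jobConf.get(TaskManagerOptions.NUM_TASK_SLOTS));
>     }
> }
> {code}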



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32753) Print JVM flags on AZP

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-32753:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Print JVM flags on AZP
> --
>
> Key: FLINK-32753
> URL: https://issues.apache.org/jira/browse/FLINK-32753
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System / Azure Pipelines
>Reporter: Zakelly Lan
>Assignee: Zakelly Lan
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> I suggest printing JVM flags before the tests run, which could help 
> investigate test failures (especially memory- or GC-related issues). An 
> example of pipeline output 
> [here|https://dev.azure.com/lzq82555906/flink-for-Zakelly/_build/results?buildId=122=logs=9dc1b5dc-bcfa-5f83-eaa7-0cb181ddc267=511d2595-ec54-5ab7-86ce-92f328796f20=165].
>  You may search 'JVM information' in this log.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30613) Improve resolving schema compatibility -- Milestone one

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-30613:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Improve resolving schema compatibility -- Milestone one
> ---
>
> Key: FLINK-30613
> URL: https://issues.apache.org/jira/browse/FLINK-30613
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System
>Reporter: Hangxiang Yu
>Assignee: Hangxiang Yu
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> In milestone one, we should:
>  # Add an extra method 
> (TypeSerializerSnapshot#resolveSchemaCompatibility(TypeSerializerSnapshot 
> oldSerializerSnapshot)) in TypeSerializerSnapshot.java as above, returning 
> INCOMPATIBLE by default.
>  # Mark the original method as deprecated; by default it will delegate to the new 
> method for resolution.
>  # Implement the new method for all built-in TypeSerializerSnapshots.
> See FLIP-263 for more details.
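> A simplified sketch of the direction described above (signatures abbreviated; the
> authoritative design is in FLIP-263):
> {code:java}
> import org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility;
> import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
> 
> // Milestone one in a nutshell: the new overload takes the old snapshot and
> // defaults to INCOMPATIBLE until the built-in snapshots override it.
> interface SnapshotWithNewResolution<T> extends TypeSerializerSnapshot<T> {
> 
>     default TypeSerializerSchemaCompatibility<T> resolveSchemaCompatibility(
>             TypeSerializerSnapshot<T> oldSerializerSnapshot) {
>         return TypeSerializerSchemaCompatibility.incompatible();
>     }
> }
> {code}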



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32488) Introduce configuration to control ExecutionGraph cache in REST API

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-32488:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Introduce configuration to control ExecutionGraph cache in REST API
> ---
>
> Key: FLINK-32488
> URL: https://issues.apache.org/jira/browse/FLINK-32488
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.16.2, 1.17.1
>Reporter: Hong Liang Teoh
>Assignee: Jufang He
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> *What*
> Currently, REST handlers that inherit from AbstractExecutionGraphHandler 
> serve information derived from a cached ExecutionGraph.
> This ExecutionGraph cache currently derives its timeout from 
> {*}web.refresh-interval{*}. The *web.refresh-interval* controls both the 
> refresh rate of the Flink dashboard and the ExecutionGraph cache timeout. 
> We should introduce a new configuration to control the ExecutionGraph cache, 
> namely {*}rest.cache.execution-graph.expiry{*}.
> *Why*
> Sharing configuration between REST handler and Flink dashboard is a sign that 
> we are coupling the two. 
> Ideally, we want our REST API behaviour to be independent of the Flink dashboard 
> (e.g. to support programmatic access).
>  
> Mailing list discussion: 
> https://lists.apache.org/thread/7o330hfyoqqkkrfhtvz3kp448jcspjrm
>  
>  
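> A sketch of how the proposed option could be declared; the default shown here is an
> assumption, not something decided in this ticket:
> {code:java}
> import java.time.Duration;
> 
> import org.apache.flink.configuration.ConfigOption;
> import org.apache.flink.configuration.ConfigOptions;
> 
> public class RestCacheOptions {
>     public static final ConfigOption<Duration> EXECUTION_GRAPH_CACHE_EXPIRY =
>             ConfigOptions.key("rest.cache.execution-graph.expiry")
>                     .durationType()
>                     .defaultValue(Duration.ofSeconds(3))
>                     .withDescription(
>                             "Expiry time for the ExecutionGraph cache used by REST handlers.");
> }
> {code}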



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25419) Support the metadata column to generate dynamic index

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-25419:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Support the metadata column to generate dynamic index 
> --
>
> Key: FLINK-25419
> URL: https://issues.apache.org/jira/browse/FLINK-25419
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Feng Jin
>Priority: Major
> Fix For: 1.19.0
>
>
> As mentioned in [https://github.com/apache/flink/pull/18058], we can support metadata 
> columns to increase the flexibility of generating dynamic indexes.
>  
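> A sketch of the DDL this could enable; the writable metadata key 'index' is purely an
> assumption made for illustration:
> {code:java}
> import org.apache.flink.table.api.EnvironmentSettings;
> import org.apache.flink.table.api.TableEnvironment;
> 
> public class DynamicIndexByMetadata {
>     public static void main(String[] args) {
>         TableEnvironment tEnv =
>                 TableEnvironment.create(EnvironmentSettings.inStreamingMode());
>         tEnv.executeSql(
>                 "CREATE TABLE es_sink (\n"
>                         + "  log_ts TIMESTAMP(3),\n"
>                         + "  message STRING,\n"
>                         + "  idx STRING METADATA FROM 'index'  -- hypothetical metadata column\n"
>                         + ") WITH (\n"
>                         + "  'connector' = 'elasticsearch-7',\n"
>                         + "  'hosts' = 'http://localhost:9200',\n"
>                         + "  'index' = 'fallback-index'\n"
>                         + ")");
>     }
> }
> {code}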



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-21883][scheduler] Implement cooldown period for adaptive scheduler [flink]

2023-10-13 Thread via GitHub


echauchot commented on code in PR #22985:
URL: https://github.com/apache/flink/pull/22985#discussion_r1358300145


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java:
##
@@ -124,23 +157,74 @@ private void handleDeploymentFailure(ExecutionVertex 
executionVertex, JobExcepti
 
 @Override
 public void onNewResourcesAvailable() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
 }
 
 @Override
 public void onNewResourceRequirements() {
-maybeRescale();
+rescaleWhenCooldownPeriodIsOver();
+}
+
+/** Force rescaling as long as the target parallelism is different from 
the current one. */
+private void forceRescale() {
+if (context.shouldRescale(getExecutionGraph(), true)) {
+getLogger()
+.info(
+"Added resources are still there after {} 
time({}), force a rescale.",
+
JobManagerOptions.SCHEDULER_SCALING_INTERVAL_MAX.key(),
+scalingIntervalMax);
+context.goToRestarting(
+getExecutionGraph(),
+getExecutionGraphHandler(),
+getOperatorCoordinatorHandler(),
+Duration.ofMillis(0L),
+getFailures());
+}
 }
 
+/**
+ * Rescale the job if added resource meets {@link 
JobManagerOptions#MIN_PARALLELISM_INCREASE}.
+ * Otherwise, force a rescale after {@link 
JobManagerOptions#SCHEDULER_SCALING_INTERVAL_MAX} if
+ * the resource is still there.
+ */
 private void maybeRescale() {
-if (context.shouldRescale(getExecutionGraph())) {
-getLogger().info("Can change the parallelism of job. Restarting 
job.");
+rescaleScheduled = false;
+if (context.shouldRescale(
+getExecutionGraph(), false)) { // 
JobManagerOptions#MIN_PARALLELISM_INCREASE met

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-24901) Some further improvements of the pluggable shuffle framework

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24901:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Some further improvements of the pluggable shuffle framework
> 
>
> Key: FLINK-24901
> URL: https://issues.apache.org/jira/browse/FLINK-24901
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Priority: Major
> Fix For: 1.19.0
>
>
> This is an umbrella issue including several further improvements of the 
> pluggable shuffle framework.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31245) Adaptive scheduler does not reset the state of GlobalAggregateManager when rescaling

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-31245:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Adaptive scheduler does not reset the state of GlobalAggregateManager when 
> rescaling
> 
>
> Key: FLINK-31245
> URL: https://issues.apache.org/jira/browse/FLINK-31245
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.16.1
>Reporter: Zhanghao Chen
>Priority: Major
> Fix For: 1.19.0
>
>
> *Problem*
> GlobalAggregateManager is used to share state amongst parallel tasks in a job 
> and thus coordinate their execution. It maintains a state (the _accumulators_ 
> field in JobMaster) in JM memory. The accumulator state content is defined in 
> user code; in my company, a user stores task parallelism in the accumulator, 
> assuming task parallelism never changes. However, this assumption is broken 
> when using adaptive scheduler.
> *Possible Solutions*
>  # Mark GlobalAggregateManager as deprecated. It seems that operator 
> coordinator can completely replace GlobalAggregateManager and is a more 
> elegant solution. Therefore, it is fine to deprecate GlobalAggregateManager 
> and leave this issue there. If that's the case, we can open another ticket 
> for doing that.
>  # If we decide to continue supporting GlobalAggregateManager, then we need 
> to reset the state when rescaling.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33273) CLONE - Stage source and binary releases on dist.apache.org

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-33273.
-
Fix Version/s: 1.18.0
   Resolution: Fixed

> CLONE - Stage source and binary releases on dist.apache.org
> ---
>
> Key: FLINK-33273
> URL: https://issues.apache.org/jira/browse/FLINK-33273
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
> Fix For: 1.18.0
>
>
> Copy the source release to the dev repository of dist.apache.org:
> # If you have not already, check out the Flink section of the dev repository 
> on dist.apache.org via Subversion. In a fresh directory:
> {code:bash}
> $ svn checkout https://dist.apache.org/repos/dist/dev/flink --depth=immediates
> {code}
> # Make a directory for the new release and copy all the artifacts (Flink 
> source/binary distributions, hashes, GPG signatures and the python 
> subdirectory) into that newly created directory:
> {code:bash}
> $ mkdir flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
> $ mv /tools/releasing/release/* 
> flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
> # Add and commit all the files.
> {code:bash}
> $ cd flink
> flink $ svn add flink-${RELEASE_VERSION}-rc${RC_NUM}
> flink $ svn commit -m "Add flink-${RELEASE_VERSION}-rc${RC_NUM}"
> {code}
> # Verify that files are present under 
> [https://dist.apache.org/repos/dist/dev/flink|https://dist.apache.org/repos/dist/dev/flink].
> # Push the release tag if not done already (the following command assumes to 
> be called from within the apache/flink checkout):
> {code:bash}
> $ git push  refs/tags/release-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
>  
> 
> h3. Expectations
>  * Maven artifacts deployed to the staging repository of 
> [repository.apache.org|https://repository.apache.org/content/repositories/]
>  * Source distribution deployed to the dev repository of 
> [dist.apache.org|https://dist.apache.org/repos/dist/dev/flink/]
>  * Check hashes (e.g. shasum -c *.sha512)
>  * Check signatures (e.g. {{{}gpg --verify 
> flink-1.2.3-source-release.tar.gz.asc flink-1.2.3-source-release.tar.gz{}}})
>  * {{grep}} for legal headers in each file.
>  * If time allows check the NOTICE files of the modules whose dependencies 
> have been changed in this release in advance, since the license issues from 
> time to time pop up during voting. See [Verifying a Flink 
> Release|https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release]
>  "Checking License" section.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33273) CLONE - Stage source and binary releases on dist.apache.org

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-33273:
---

Assignee: Jing Ge

> CLONE - Stage source and binary releases on dist.apache.org
> ---
>
> Key: FLINK-33273
> URL: https://issues.apache.org/jira/browse/FLINK-33273
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> Copy the source release to the dev repository of dist.apache.org:
> # If you have not already, check out the Flink section of the dev repository 
> on dist.apache.org via Subversion. In a fresh directory:
> {code:bash}
> $ svn checkout https://dist.apache.org/repos/dist/dev/flink --depth=immediates
> {code}
> # Make a directory for the new release and copy all the artifacts (Flink 
> source/binary distributions, hashes, GPG signatures and the python 
> subdirectory) into that newly created directory:
> {code:bash}
> $ mkdir flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
> $ mv /tools/releasing/release/* 
> flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
> # Add and commit all the files.
> {code:bash}
> $ cd flink
> flink $ svn add flink-${RELEASE_VERSION}-rc${RC_NUM}
> flink $ svn commit -m "Add flink-${RELEASE_VERSION}-rc${RC_NUM}"
> {code}
> # Verify that files are present under 
> [https://dist.apache.org/repos/dist/dev/flink|https://dist.apache.org/repos/dist/dev/flink].
> # Push the release tag if not done already (the following command assumes to 
> be called from within the apache/flink checkout):
> {code:bash}
> $ git push  refs/tags/release-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
>  
> 
> h3. Expectations
>  * Maven artifacts deployed to the staging repository of 
> [repository.apache.org|https://repository.apache.org/content/repositories/]
>  * Source distribution deployed to the dev repository of 
> [dist.apache.org|https://dist.apache.org/repos/dist/dev/flink/]
>  * Check hashes (e.g. shasum -c *.sha512)
>  * Check signatures (e.g. {{{}gpg --verify 
> flink-1.2.3-source-release.tar.gz.asc flink-1.2.3-source-release.tar.gz{}}})
>  * {{grep}} for legal headers in each file.
>  * If time allows check the NOTICE files of the modules whose dependencies 
> have been changed in this release in advance, since the license issues from 
> time to time pop up during voting. See [Verifying a Flink 
> Release|https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release]
>  "Checking License" section.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33272) CLONE - Build and stage Java and Python artifacts

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-33272.
-
Fix Version/s: 1.18.0
   Resolution: Fixed

> CLONE - Build and stage Java and Python artifacts
> -
>
> Key: FLINK-33272
> URL: https://issues.apache.org/jira/browse/FLINK-33272
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
> Fix For: 1.18.0
>
>
> # Create a local release branch ((!) this step can not be skipped for minor 
> releases):
> {code:bash}
> $ cd ./tools
> tools/ $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION NEW_VERSION=$RELEASE_VERSION 
> RELEASE_CANDIDATE=$RC_NUM releasing/create_release_branch.sh
> {code}
>  # Tag the release commit:
> {code:bash}
> $ git tag -s ${TAG} -m "${TAG}"
> {code}
>  # We now need to do several things:
>  ## Create the source release archive
>  ## Deploy jar artefacts to the [Apache Nexus 
> Repository|https://repository.apache.org/], which is the staging area for 
> deploying the jars to Maven Central
>  ## Build PyFlink wheel packages
> You might want to create a directory on your local machine for collecting the 
> various source and binary releases before uploading them. Creating the binary 
> releases is a lengthy process but you can do this on another machine (for 
> example, in the "cloud"). When doing this, you can skip signing the release 
> files on the remote machine, download them to your local machine and sign 
> them there.
>  # Build the source release:
> {code:bash}
> tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_source_release.sh
> {code}
>  # Stage the maven artifacts:
> {code:bash}
> tools $ releasing/deploy_staging_jars.sh
> {code}
> Review all staged artifacts ([https://repository.apache.org/]). They should 
> contain all relevant parts for each module, including pom.xml, jar, test jar, 
> source, test source, javadoc, etc. Carefully review any new artifacts.
>  # Close the staging repository on Apache Nexus. When prompted for a 
> description, enter “Apache Flink, version X, release candidate Y”.
> Then, you need to build the PyFlink wheel packages (since 1.11):
>  # Set up an azure pipeline in your own Azure account. You can refer to 
> [Azure 
> Pipelines|https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository]
>  for more details on how to set up azure pipeline for a fork of the Flink 
> repository. Note that a google cloud mirror in Europe is used for downloading 
> maven artifacts, therefore it is recommended to set your [Azure organization 
> region|https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-organization-location]
>  to Europe to speed up the downloads.
>  # Push the release candidate branch to your forked personal Flink 
> repository, e.g.
> {code:bash}
> tools $ git push  
> refs/heads/release-${RELEASE_VERSION}-rc${RC_NUM}:release-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
>  # Trigger the Azure Pipelines manually to build the PyFlink wheel packages
>  ## Go to your Azure Pipelines Flink project → Pipelines
>  ## Click the "New pipeline" button on the top right
>  ## Select "GitHub" → your GitHub Flink repository → "Existing Azure 
> Pipelines YAML file"
>  ## Select your branch → Set path to "/azure-pipelines.yaml" → click on 
> "Continue" → click on "Variables"
>  ## Then click "New Variable" button, fill the name with "MODE", and the 
> value with "release". Click "OK" to set the variable and the "Save" button to 
> save the variables, then back on the "Review your pipeline" screen click 
> "Run" to trigger the build.
>  ## You should now see a build where only the "CI build (release)" is running
>  # Download the PyFlink wheel packages from the build result page after the 
> jobs of "build_wheels mac" and "build_wheels linux" have finished.
>  ## Download the PyFlink wheel packages
>  ### Open the build result page of the pipeline
>  ### Go to the {{Artifacts}} page (build_wheels linux -> 1 artifact)
>  ### Click {{wheel_Darwin_build_wheels mac}} and {{wheel_Linux_build_wheels 
> linux}} separately to download the zip files
>  ## Unzip these two zip files
> {code:bash}
> $ cd /path/to/downloaded_wheel_packages
> $ unzip wheel_Linux_build_wheels\ linux.zip
> $ unzip wheel_Darwin_build_wheels\ mac.zip{code}
>  ## Create directory {{./dist}} under the directory of {{{}flink-python{}}}:
> {code:bash}
> $ cd 
> $ mkdir flink-python/dist{code}
>  ## Move the unzipped wheel packages to the directory of 
> {{{}flink-python/dist{}}}:
> {code:java}
> $ mv /path/to/wheel_Darwin_build_wheels\ mac/* flink-python/dist/
> $ mv /path/to/wheel_Linux_build_wheels\ linux/* flink-python/dist/
> $ cd tools{code}
> Finally, we create the binary 

[jira] [Assigned] (FLINK-33272) CLONE - Build and stage Java and Python artifacts

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-33272:
---

Assignee: Jing Ge

> CLONE - Build and stage Java and Python artifacts
> -
>
> Key: FLINK-33272
> URL: https://issues.apache.org/jira/browse/FLINK-33272
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> # Create a local release branch ((!) this step can not be skipped for minor 
> releases):
> {code:bash}
> $ cd ./tools
> tools/ $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION NEW_VERSION=$RELEASE_VERSION 
> RELEASE_CANDIDATE=$RC_NUM releasing/create_release_branch.sh
> {code}
>  # Tag the release commit:
> {code:bash}
> $ git tag -s ${TAG} -m "${TAG}"
> {code}
>  # We now need to do several things:
>  ## Create the source release archive
>  ## Deploy jar artefacts to the [Apache Nexus 
> Repository|https://repository.apache.org/], which is the staging area for 
> deploying the jars to Maven Central
>  ## Build PyFlink wheel packages
> You might want to create a directory on your local machine for collecting the 
> various source and binary releases before uploading them. Creating the binary 
> releases is a lengthy process but you can do this on another machine (for 
> example, in the "cloud"). When doing this, you can skip signing the release 
> files on the remote machine, download them to your local machine and sign 
> them there.
>  # Build the source release:
> {code:bash}
> tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_source_release.sh
> {code}
>  # Stage the maven artifacts:
> {code:bash}
> tools $ releasing/deploy_staging_jars.sh
> {code}
> Review all staged artifacts ([https://repository.apache.org/]). They should 
> contain all relevant parts for each module, including pom.xml, jar, test jar, 
> source, test source, javadoc, etc. Carefully review any new artifacts.
>  # Close the staging repository on Apache Nexus. When prompted for a 
> description, enter “Apache Flink, version X, release candidate Y”.
> Then, you need to build the PyFlink wheel packages (since 1.11):
>  # Set up an azure pipeline in your own Azure account. You can refer to 
> [Azure 
> Pipelines|https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository]
>  for more details on how to set up azure pipeline for a fork of the Flink 
> repository. Note that a google cloud mirror in Europe is used for downloading 
> maven artifacts, therefore it is recommended to set your [Azure organization 
> region|https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-organization-location]
>  to Europe to speed up the downloads.
>  # Push the release candidate branch to your forked personal Flink 
> repository, e.g.
> {code:bash}
> tools $ git push  
> refs/heads/release-${RELEASE_VERSION}-rc${RC_NUM}:release-${RELEASE_VERSION}-rc${RC_NUM}
> {code}
>  # Trigger the Azure Pipelines manually to build the PyFlink wheel packages
>  ## Go to your Azure Pipelines Flink project → Pipelines
>  ## Click the "New pipeline" button on the top right
>  ## Select "GitHub" → your GitHub Flink repository → "Existing Azure 
> Pipelines YAML file"
>  ## Select your branch → Set path to "/azure-pipelines.yaml" → click on 
> "Continue" → click on "Variables"
>  ## Then click "New Variable" button, fill the name with "MODE", and the 
> value with "release". Click "OK" to set the variable and the "Save" button to 
> save the variables, then back on the "Review your pipeline" screen click 
> "Run" to trigger the build.
>  ## You should now see a build where only the "CI build (release)" is running
>  # Download the PyFlink wheel packages from the build result page after the 
> jobs of "build_wheels mac" and "build_wheels linux" have finished.
>  ## Download the PyFlink wheel packages
>  ### Open the build result page of the pipeline
>  ### Go to the {{Artifacts}} page (build_wheels linux -> 1 artifact)
>  ### Click {{wheel_Darwin_build_wheels mac}} and {{wheel_Linux_build_wheels 
> linux}} separately to download the zip files
>  ## Unzip these two zip files
> {code:bash}
> $ cd /path/to/downloaded_wheel_packages
> $ unzip wheel_Linux_build_wheels\ linux.zip
> $ unzip wheel_Darwin_build_wheels\ mac.zip{code}
>  ## Create directory {{./dist}} under the directory of {{{}flink-python{}}}:
> {code:bash}
> $ cd 
> $ mkdir flink-python/dist{code}
>  ## Move the unzipped wheel packages to the directory of 
> {{{}flink-python/dist{}}}:
> {code:java}
> $ mv /path/to/wheel_Darwin_build_wheels\ mac/* flink-python/dist/
> $ mv /path/to/wheel_Linux_build_wheels\ linux/* flink-python/dist/
> $ cd tools{code}
> Finally, we create the binary convenience release files:
> {code:bash}
> tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_binary_release.sh
> {code}

[jira] [Assigned] (FLINK-32921) Prepare Flink 1.18 Release

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-32921:
---

Assignee: Jing Ge

> Prepare Flink 1.18 Release
> --
>
> Key: FLINK-32921
> URL: https://issues.apache.org/jira/browse/FLINK-32921
> Project: Flink
>  Issue Type: New Feature
>  Components: Release System
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Jing Ge
>Priority: Major
>
> This umbrella issue is meant as a test balloon for moving the [release 
> documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
>  into Jira.
> h3. Prerequisites
> h4. Environment Variables
> Commands in the subtasks might expect some of the following environment 
> variables to be set according to the version that is about to be released:
> {code:bash}
> RELEASE_VERSION="1.5.0"
> SHORT_RELEASE_VERSION="1.5"
> CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
> NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
> SHORT_NEXT_SNAPSHOT_VERSION="1.6"
> {code}
> h4. Build Tools
> All of the following steps require Maven 3.8.6 and Java 8. Modify your 
> PATH environment variable accordingly if needed.
> h4. Flink Source
>  * Create a new directory for this release and clone the Flink repository 
> from Github to ensure you have a clean workspace (this step is optional).
>  * Run {{mvn -Prelease clean install}} to ensure that the build processes 
> that are specific to that profile are in good shape (this step is optional).
> The rest of these instructions assumes that commands are run in the root (or 
> {{./tools}} directory) of a repository on the branch of the release version 
> with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24680) Expose UserCodeClassLoader in OperatorCoordinator.Context for registering release hooks

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24680:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Expose UserCodeClassLoader in OperatorCoordinator.Context for registering 
> release hooks
> ---
>
> Key: FLINK-24680
> URL: https://issues.apache.org/jira/browse/FLINK-24680
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently `OperatorCoordinator.Context` only exposes a plain `ClassLoader` for 
> accessing the user code class loader, which doesn't support registering release 
> hooks from an `OperatorCoordinator`, e.g. releasing the class loader and 
> closing the operator coordinator in sync. 
> We need to expose `UserCodeClassLoader` in the context for registering hooks 
> in the coordinator.
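> As a rough illustration (a minimal sketch: the {{context.getUserCodeClassLoader()}} 
> accessor and {{resourceHolder}} are hypothetical placeholders, while 
> {{UserCodeClassLoader#registerReleaseHookIfAbsent}} already exists), a coordinator 
> could then register a release hook like this:
> {code:java}
> // assumption: OperatorCoordinator.Context exposes the UserCodeClassLoader
> UserCodeClassLoader userCodeClassLoader = context.getUserCodeClassLoader();
> userCodeClassLoader.registerReleaseHookIfAbsent(
>     "releaseCoordinatorResources",
>     () -> {
>         // runs when the user code class loader is released, so coordinator
>         // resources cannot outlive the class loader
>         resourceHolder.closeQuietly();
>     });
> {code}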



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29040) When using the JDBC Catalog, the custom cannot be applied because it is fixed in the code

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-29040:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> When using the JDBC Catalog, the custom cannot be applied because it is fixed 
> in the code
> -
>
> Key: FLINK-29040
> URL: https://issues.apache.org/jira/browse/FLINK-29040
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
> Environment: flink-1.16-SNAPSHOT
>Reporter: Zhimin Geng
>Assignee: Zhimin Geng
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
> Attachments: 截图_选择区域_20220819143458.png
>
>
> When using the JDBC Catalog, a custom implementation cannot be applied because 
> it is fixed in the code.
> When I was testing ClickHouse's JDBC Catalog, I couldn't use the 
> distribution's code directly.
> The JDBC Catalog should continue to expand in the future, so it is 
> recommended to instantiate JdbcCatalog in a different way.
> I will submit a PR later and hopefully instantiate JdbcCatalog this way, or 
> something similar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32444) Enable object reuse for Flink SQL jobs by default

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-32444:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Enable object reuse for Flink SQL jobs by default
> -
>
> Key: FLINK-32444
> URL: https://issues.apache.org/jira/browse/FLINK-32444
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently, object reuse is not enabled by default for Flink streaming jobs, 
> but is enabled by default for Flink batch jobs. That is not consistent with 
> stream-batch unification. Besides, it is safe to enable object reuse for SQL 
> operators, and doing so is a great performance improvement for SQL jobs. 
> We should also be careful with the Table-DataStream conversion case 
> (StreamTableEnvironment), which is not safe to enable object reuse by default. 
> Maybe we can just enable it for SQL Client/Gateway and TableEnvironment. 
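> For reference, until this becomes the default, users can opt in manually. A 
> minimal sketch (the surrounding job is omitted):
> {code:java}
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> // what this ticket proposes to make the default for SQL jobs
> env.getConfig().enableObjectReuse();
> 
> // or, equivalently, via configuration on the table environment
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> tableEnv.getConfig().set("pipeline.object-reuse", "true");
> {code}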



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24122) Add support to do clean in history server

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24122:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Add support to do clean in history server
> -
>
> Key: FLINK-24122
> URL: https://issues.apache.org/jira/browse/FLINK-24122
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: zlzhang0122
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Now, the history server can clean history jobs by two means:
>  # if users have configured 
> {code:java}
> historyserver.archive.clean-expired-jobs: true{code}
> , then it compares the files in HDFS across two clean intervals, finds the 
> deleted ones and cleans the local cache files.
>  # if users have configured 
> {code:java}
> historyserver.archive.retained-jobs:{code}
> with a positive number, then it cleans the oldest files in HDFS and locally.
> But the retained-jobs number is difficult to determine.
> For example, users may want to check yesterday's history jobs, but if many 
> jobs failed today and exceeded the retained-jobs number, yesterday's history 
> jobs will be deleted. So what if we add a configuration containing a 
> retention time that indicates the maximum time a history job is retained?
> Also, the history server can't clean job history files which are no longer in 
> HDFS but are still cached in the local filesystem; these files are stored 
> forever and can't be cleaned unless users do so manually. Maybe we can add an 
> option and perform this cleanup if the option is set to true.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33272) CLONE - Build and stage Java and Python artifacts

2023-10-13 Thread Jing Ge (Jira)
Jing Ge created FLINK-33272:
---

 Summary: CLONE - Build and stage Java and Python artifacts
 Key: FLINK-33272
 URL: https://issues.apache.org/jira/browse/FLINK-33272
 Project: Flink
  Issue Type: Sub-task
Reporter: Jing Ge


# Create a local release branch ((!) this step can not be skipped for minor 
releases):
{code:bash}
$ cd ./tools
tools/ $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION NEW_VERSION=$RELEASE_VERSION 
RELEASE_CANDIDATE=$RC_NUM releasing/create_release_branch.sh
{code}
 # Tag the release commit:
{code:bash}
$ git tag -s ${TAG} -m "${TAG}"
{code}
 # We now need to do several things:
 ## Create the source release archive
 ## Deploy jar artefacts to the [Apache Nexus 
Repository|https://repository.apache.org/], which is the staging area for 
deploying the jars to Maven Central
 ## Build PyFlink wheel packages
You might want to create a directory on your local machine for collecting the 
various source and binary releases before uploading them. Creating the binary 
releases is a lengthy process but you can do this on another machine (for 
example, in the "cloud"). When doing this, you can skip signing the release 
files on the remote machine, download them to your local machine and sign them 
there.
 # Build the source release:
{code:bash}
tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_source_release.sh
{code}
 # Stage the maven artifacts:
{code:bash}
tools $ releasing/deploy_staging_jars.sh
{code}
Review all staged artifacts ([https://repository.apache.org/]). They should 
contain all relevant parts for each module, including pom.xml, jar, test jar, 
source, test source, javadoc, etc. Carefully review any new artifacts.
 # Close the staging repository on Apache Nexus. When prompted for a 
description, enter “Apache Flink, version X, release candidate Y”.
Then, you need to build the PyFlink wheel packages (since 1.11):
 # Set up an azure pipeline in your own Azure account. You can refer to [Azure 
Pipelines|https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository]
 for more details on how to set up azure pipeline for a fork of the Flink 
repository. Note that a google cloud mirror in Europe is used for downloading 
maven artifacts, therefore it is recommended to set your [Azure organization 
region|https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-organization-location]
 to Europe to speed up the downloads.
 # Push the release candidate branch to your forked personal Flink repository, 
e.g.
{code:bash}
tools $ git push  
refs/heads/release-${RELEASE_VERSION}-rc${RC_NUM}:release-${RELEASE_VERSION}-rc${RC_NUM}
{code}
 # Trigger the Azure Pipelines manually to build the PyFlink wheel packages
 ## Go to your Azure Pipelines Flink project → Pipelines
 ## Click the "New pipeline" button on the top right
 ## Select "GitHub" → your GitHub Flink repository → "Existing Azure Pipelines 
YAML file"
 ## Select your branch → Set path to "/azure-pipelines.yaml" → click on 
"Continue" → click on "Variables"
 ## Then click "New Variable" button, fill the name with "MODE", and the value 
with "release". Click "OK" to set the variable and the "Save" button to save 
the variables, then back on the "Review your pipeline" screen click "Run" to 
trigger the build.
 ## You should now see a build where only the "CI build (release)" is running
 # Download the PyFlink wheel packages from the build result page after the 
jobs of "build_wheels mac" and "build_wheels linux" have finished.
 ## Download the PyFlink wheel packages
 ### Open the build result page of the pipeline
 ### Go to the {{Artifacts}} page (build_wheels linux -> 1 artifact)
 ### Click {{wheel_Darwin_build_wheels mac}} and {{wheel_Linux_build_wheels 
linux}} separately to download the zip files
 ## Unzip these two zip files
{code:bash}
$ cd /path/to/downloaded_wheel_packages
$ unzip wheel_Linux_build_wheels\ linux.zip
$ unzip wheel_Darwin_build_wheels\ mac.zip{code}
 ## Create directory {{./dist}} under the directory of {{{}flink-python{}}}:
{code:bash}
$ cd 
$ mkdir flink-python/dist{code}
 ## Move the unzipped wheel packages to the directory of 
{{{}flink-python/dist{}}}:
{code:java}
$ mv /path/to/wheel_Darwin_build_wheels\ mac/* flink-python/dist/
$ mv /path/to/wheel_Linux_build_wheels\ linux/* flink-python/dist/
$ cd tools{code}

Finally, we create the binary convenience release files:
{code:bash}
tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_binary_release.sh
{code}
If you want to run this step in parallel on a remote machine you have to make 
the release commit available there (for example by pushing to a repository). 
*This is important: the commit inside the binary builds has to match the commit 
of the source builds and the tagged release commit.* 
When building remotely, you can skip gpg signing by setting 

[jira] [Updated] (FLINK-26761) Fix the cast exception thrown by PreValidateReWriter when insert into/overwrite a partitioned table.

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-26761:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Fix the cast exception thrown by PreValidateReWriter when insert 
> into/overwrite a partitioned table.
> 
>
> Key: FLINK-26761
> URL: https://issues.apache.org/jira/browse/FLINK-26761
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: zoucao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> In `PreValidateReWriter#appendPartitionAndNullsProjects`, we should use
> {code:java}
> val names = sqlInsert.getTargetTableID.asInstanceOf[SqlIdentifier].names
> {code}
> to get the table name, instead of
> {code:java}
> val names = sqlInsert.getTargetTable.asInstanceOf[SqlIdentifier].names
> {code}
> when we execute the following sql:
> {code:java}
> insert into/overwrite table_name /*+ options(xxx) */ partition(xxx) select  
> {code}
> invoking `sqlInsert.getTargetTable` will return a SqlTableRef, which cannot be 
> cast to SqlIdentifier.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22826) flink sql1.13.1 causes data loss based on change log stream data join

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-22826:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> flink sql1.13.1 causes data loss based on change log stream data join
> -
>
> Key: FLINK-22826
> URL: https://issues.apache.org/jira/browse/FLINK-22826
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0, 1.13.1
>Reporter: 徐州州
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-blocker
> Fix For: 1.19.0
>
>
> {code:java}
> insert into dwd_order_detail
> select
>ord.Id,
>ord.Code,
>Status,
>concat(cast(ord.Id as String), if(oed.Id is null, 'oed_null', cast(oed.Id 
> as STRING)), DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd')) as uuids,
>TO_DATE(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd')) as As_Of_Date
> from
> orders ord
> left join order_extend oed on ord.Id=oed.OrderId and oed.IsDeleted=0 and 
> oed.CreateTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd') AS TIMESTAMP)
> where ( ord.OrderTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd') AS 
> TIMESTAMP)
> or ord.ReviewTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd') AS TIMESTAMP)
> or ord.RejectTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd') AS TIMESTAMP)
> ) and ord.IsDeleted=0;
> {code}
> My upsert-kafka table uses uuids as its PRIMARY KEY.
> This is the logic of my Kafka-based canal-json changelog join, which writes to 
> an upsert-kafka table. I confirm that version 1.12 also has this problem; I 
> just upgraded from 1.12 to 1.13.
> I looked up a user's order data; order number XJ0120210531004794 appears in 
> the canal-json original table as +U, which is normal.
> {code:java}
> | +U | XJ0120210531004794 |  50 |
> | +U | XJ0120210531004672 |  50 |
> {code}
> But after being written to upsert-kafka via the join, the data consumed from 
> upsert-kafka is:
> {code:java}
> | +I | XJ0120210531004794 |  50 |
> | -U | XJ0120210531004794 |  50 |
> {code}
> The order has two records; the rows in the orders and order_extend tables have 
> not changed since they were created. The -U status caused my data to be lost 
> from the computation, and the final result was wrong.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-20818) End to end test produce excessive amount of logs

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-20818:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> End to end test produce excessive amount of logs
> 
>
> Key: FLINK-20818
> URL: https://issues.apache.org/jira/browse/FLINK-20818
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> The end-to-end tests produce an excessive amount of logs. For example, in this 
> run [1] the log file is roughly 57 MB and it is no longer possible to 
> properly scroll in this file when using the web interface. I think there 
> should not be a reason for producing almost 60 MB of log output.
> [1] 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11467=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26692) migrate TpcdsTestProgram.java to new source

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-26692:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> migrate TpcdsTestProgram.java to new source
> ---
>
> Key: FLINK-26692
> URL: https://issues.apache.org/jira/browse/FLINK-26692
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: zhouli
>Assignee: zhouli
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> [run-nightly-tests.sh#L220|https://github.com/apache/flink/blob/master/flink-end-to-end-tests/run-nightly-tests.sh#L220]
>  runs TpcdsTestProgram, which uses the legacy source, with 
> AdaptiveBatchScheduler. Since there are some known issues (FLINK-26576, 
> FLINK-26548) with the legacy source, I think we should migrate TpcdsTestProgram 
> to the new source asap.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22607) Examples use deprecated AscendingTimestampExtractor

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-22607:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Examples use deprecated AscendingTimestampExtractor
> ---
>
> Key: FLINK-22607
> URL: https://issues.apache.org/jira/browse/FLINK-22607
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 1.12.0, 1.12.1, 1.12.2, 1.13.0, 1.12.3
>Reporter: Linying Assad
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.19.0
>
>
> The streaming example 
> [TopSpeedWindowing|https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/TopSpeedWindowing.java]
>  generates watermarks with the deprecated 
> {color:#0747a6}_DataStream#assignTimestampsAndWatermarks(AscendingTimestampExtractor)_{color},
>  whereas the relevant [Flink 
> docs|https://ci.apache.org/projects/flink/flink-docs-stable/docs/dev/datastream/event-time/generating_watermarks/#using-watermark-strategies]
>  recommend using 
> {color:#0747a6}_DataStream#assignTimestampsAndWatermarks(WatermarkStrategy)_{color}
>  instead.
>  
>  
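> For illustration, the replacement in the example could look roughly like the 
> following sketch (the {{CarEvent}} type and its timestamp accessor are just 
> placeholders, not the example's actual classes):
> {code:java}
> DataStream<CarEvent> withTimestamps =
>     carEvents.assignTimestampsAndWatermarks(
>         WatermarkStrategy
>             .<CarEvent>forMonotonousTimestamps()
>             .withTimestampAssigner((event, previousTimestamp) -> event.getTimestampMillis()));
> {code}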



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33274) CLONE - Propose a pull request for website updates

2023-10-13 Thread Jing Ge (Jira)
Jing Ge created FLINK-33274:
---

 Summary: CLONE - Propose a pull request for website updates
 Key: FLINK-33274
 URL: https://issues.apache.org/jira/browse/FLINK-33274
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.18.0
Reporter: Jing Ge


The final step of building the candidate is to propose a website pull request 
containing the following changes:
 # update 
[apache/flink-web:_config.yml|https://github.com/apache/flink-web/blob/asf-site/_config.yml]
 ## update {{FLINK_VERSION_STABLE}} and {{FLINK_VERSION_STABLE_SHORT}} as 
required
 ## update version references in quickstarts ({{{}q/{}}} directory) as required
 ## (major only) add a new entry to {{flink_releases}} for the release binaries 
and sources
 ## (minor only) update the entry for the previous release in the series in 
{{flink_releases}}
 ### Please pay attention to the ids assigned to the download entries. They should 
be unique and reflect their corresponding version number.
 ## add a new entry to {{release_archive.flink}}
 # add a blog post announcing the release in _posts
 # add an organized release notes page under docs/content/release-notes and 
docs/content.zh/release-notes (like 
[https://nightlies.apache.org/flink/flink-docs-release-1.15/release-notes/flink-1.15/]).
 The page is based on the non-empty release notes collected from the issues, 
and only issues that affect existing users should be included (e.g., behavior 
changes rather than new functionality). It should be in a separate PR since it 
will be merged into the flink project.

(!) Don’t merge the PRs before finalizing the release.

 

h3. Expectations
 * Website pull request proposed to list the 
[release|http://flink.apache.org/downloads.html]
 * (major only) Check {{docs/config.toml}} to ensure that
 ** the version constants refer to the new version
 ** the {{baseurl}} does not point to {{flink-docs-master}}  but 
{{flink-docs-release-X.Y}} instead



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-14500) Support Flink Python User-Defined Stateless Function for Table - Phase 2

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-14500:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Support Flink Python User-Defined Stateless Function for Table - Phase 2
> 
>
> Key: FLINK-14500
> URL: https://issues.apache.org/jira/browse/FLINK-14500
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Python
>Reporter: Dian Fu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Fix For: 1.19.0
>
>
> This is the umbrella Jira tracking the functionalities of "Python 
> User-Defined Stateless Function for Table" that are planned to be supported 
> in 1.11, such as docker mode support, user-defined metrics support, arrow 
> support, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-16531) Add full integration tests for "GROUPING SETS" for streaming mode

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-16531:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Add full integration tests for "GROUPING SETS" for streaming mode
> -
>
> Key: FLINK-16531
> URL: https://issues.apache.org/jira/browse/FLINK-16531
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> We have a plan test for GROUPING SETS in streaming mode, i.e. 
> {{GroupingSetsTest}}. But we should also have full IT coverage for it, just 
> like batch's 
> {{org.apache.flink.table.planner.runtime.batch.sql.agg.GroupingSetsITCase}}.
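> As a rough sketch of what such a streaming IT case could exercise (the table 
> setup and result handling are placeholders, not the final test design; it 
> assumes a test method declared with {{throws Exception}}):
> {code:java}
> TableEnvironment tEnv =
>     TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
> tEnv.executeSql(
>     "CREATE TABLE src (a INT, b STRING, c BIGINT) WITH ("
>         + "'connector' = 'datagen', 'number-of-rows' = '10')");
> // the GROUPING SETS query that the IT case would verify end-to-end
> try (CloseableIterator<Row> it =
>         tEnv.executeSql("SELECT a, b, SUM(c) FROM src GROUP BY GROUPING SETS ((a, b), (a), ())")
>             .collect()) {
>     it.forEachRemaining(System.out::println);
> }
> {code}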



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19038) It doesn't support to call Table.limit() continuously

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-19038:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> It doesn't support to call Table.limit() continuously
> -
>
> Key: FLINK-19038
> URL: https://issues.apache.org/jira/browse/FLINK-19038
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.19.0
>
>
> For example, table.limit(3).limit(2) will fail with "FETCH is already 
> defined." 
> {code}
> org.apache.flink.table.api.ValidationException: FETCH is already defined.
>   at 
> org.apache.flink.table.operations.utils.SortOperationFactory.validateAndGetChildSort(SortOperationFactory.java:125)
>   at 
> org.apache.flink.table.operations.utils.SortOperationFactory.createLimitWithFetch(SortOperationFactory.java:105)
>   at 
> org.apache.flink.table.operations.utils.OperationTreeBuilder.limitWithFetch(OperationTreeBuilder.java:418)
> {code}
> However, as we support calling table.limit() without specifying an ordering, I 
> guess this should be a valid usage and should be allowed.
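> For context, the failing usage is just a chained fetch (a sketch; {{table}} is 
> any {{Table}} instance):
> {code:java}
> // currently throws ValidationException: "FETCH is already defined."
> // the ticket argues this should be treated as valid usage instead of failing
> Table limited = table.limit(3).limit(2);
> {code}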



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19499) Expose Metric Groups to Split Assigners

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-19499:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Expose Metric Groups to Split Assigners
> ---
>
> Key: FLINK-19499
> URL: https://issues.apache.org/jira/browse/FLINK-19499
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Stephan Ewen
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> Split Assigners should have access to metric groups, so they can report 
> metrics on assignment, like pending splits, local-, and remote assignments.
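> As a very rough sketch of what this could enable for the file source (the 
> constructor wiring is hypothetical; only {{MetricGroup}} and 
> {{FileSplitAssigner}} are existing interfaces), an assigner could expose a 
> pending-splits gauge like this:
> {code:java}
> import java.util.Collection;
> import java.util.Optional;
> import javax.annotation.Nullable;
> import org.apache.flink.connector.file.src.FileSourceSplit;
> import org.apache.flink.connector.file.src.assigners.FileSplitAssigner;
> import org.apache.flink.metrics.Gauge;
> import org.apache.flink.metrics.MetricGroup;
> 
> // Hypothetical wrapper: assumes the enumerator hands a MetricGroup to the assigner.
> public class MetricsAwareSplitAssigner implements FileSplitAssigner {
> 
>     private final FileSplitAssigner delegate;
> 
>     public MetricsAwareSplitAssigner(FileSplitAssigner delegate, MetricGroup metrics) {
>         this.delegate = delegate;
>         // number of splits still waiting to be assigned
>         metrics.gauge("pendingSplits", (Gauge<Integer>) () -> delegate.remainingSplits().size());
>     }
> 
>     @Override
>     public Optional<FileSourceSplit> getNext(@Nullable String hostname) {
>         return delegate.getNext(hostname);
>     }
> 
>     @Override
>     public void addSplits(Collection<FileSourceSplit> splits) {
>         delegate.addSplits(splits);
>     }
> 
>     @Override
>     public Collection<FileSourceSplit> remainingSplits() {
>         return delegate.remainingSplits();
>     }
> }
> {code}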



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25205) Optimize SinkUpsertMaterializer

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-25205:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Optimize SinkUpsertMaterializer
> ---
>
> Key: FLINK-25205
> URL: https://issues.apache.org/jira/browse/FLINK-25205
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Assignee: lincoln lee
>Priority: Major
> Fix For: 1.19.0
>
> Attachments: flamegraph-final.html, with-SinkUpsertMaterializer.png, 
> without-SinkUpsertMaterializer.png
>
>
> SinkUpsertMaterializer maintains incoming records in state corresponding to 
> the upsert keys and generates an upsert view for the downstream operator.
> It is intended to solve the out-of-order problem caused by the upstream 
> computation, but it stores the data in state, which keeps growing.
> If we can assume that the disorder only occurs within a checkpoint, we can 
> consider cleaning up the state on each checkpoint, which can bound the size 
> of the state.
> We can consider adding an opt-in config option for this first.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [DEBUG] Test misc things for HBase [flink-connector-hbase]

2023-10-13 Thread via GitHub


MartijnVisser closed pull request #28: [DEBUG] Test misc things for HBase
URL: https://github.com/apache/flink-connector-hbase/pull/28


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-17362) Improve table examples to reflect latest status

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-17362:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Improve table examples to reflect latest status
> ---
>
> Key: FLINK-17362
> URL: https://issues.apache.org/jira/browse/FLINK-17362
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: Kurt Young
>Priority: Minor
> Fix For: 1.19.0
>
>
> Currently the table examples seem outdated, especially after the blink planner 
> became the default choice. We might need to refactor the structure of all 
> examples and cover the following items:
>  # streaming sql & table api examples
>  # batch sql & table api examples
>  # table/sql & datastream interoperation
>  # table/sql & dataset interoperation
>  # DDL & DML examples



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24048) Move changeLog inference out of optimizing phase

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24048:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Move changeLog inference out of optimizing phase
> 
>
> Key: FLINK-24048
> URL: https://issues.apache.org/jira/browse/FLINK-24048
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: Shuo Cheng
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently, when there are multiple sinks in a SQL job, the DAG is split into 
> multiple RelNode blocks; as changelog inference happens in the optimizing phase, 
> we need to propagate the changelog mode among blocks to ensure each block can 
> generate an accurate physical plan.
> In the current solution, the DAG is optimized 3 times in order to propagate the 
> changelog mode, which is inefficient. Actually, we can just optimize the DAG, 
> expand it into a physical node tree, and then infer the changelog mode. In 
> this way, the DAG is only optimized once.
> (Similarly, the minibatch interval can also be inferred in the same way.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19362) Remove confusing comment for `DOT` operator codegen

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-19362:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Remove confusing comment for `DOT` operator codegen
> ---
>
> Key: FLINK-19362
> URL: https://issues.apache.org/jira/browse/FLINK-19362
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.19.0
>
>
> `DOT` operator codegen (ExprCodeGenerator#generateCallExpression) has the 
> following comment:
> {code:java}
> // due to https://issues.apache.org/jira/browse/CALCITE-2162, expression such 
> as
> // "array[1].a.b" won't work now.
> if (operands.size > 2) {
>   throw new CodeGenException(
> "A DOT operator with more than 2 operands is not supported yet.")
> }
> {code}
> But `array[1].a.b` actually works in a Flink job. `DOT` is transformed into 
> `RexFieldAccess` because of CALCITE-2542, and `generateDot` will never be 
> invoked except for supporting ITEM on ROW types.
> So I think we can simply delete the confusing comment. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22484) Built-in functions for collections

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-22484:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Built-in functions for collections
> --
>
> Key: FLINK-22484
> URL: https://issues.apache.org/jira/browse/FLINK-22484
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> A number of built-in functions for working with collections are 
> supported by other vendors. After looking at PostgreSQL, BigQuery and Spark, 
> a list of more or less generic functions for collections was selected 
> (for more details see [1]).
> Feedback on the doc is welcome.
> [1] 
> [https://docs.google.com/document/d/1nS0Faur9CCop4sJoQ2kMQ2XU1hjg1FaiTSQp2RsZKEE/edit?usp=sharing]
> MAP_KEYS
> MAP_VALUES
> MAP_FROM_ARRAYS



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24517) Streamline Flink releases

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24517:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Streamline Flink releases
> -
>
> Key: FLINK-24517
> URL: https://issues.apache.org/jira/browse/FLINK-24517
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Release System
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.19.0
>
>
> Collection of changes that I'd like to make based on recent experiences with 
> the 1.13.3 release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-27432) Replace Time with Duration in TaskSlotTable

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-27432:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Replace Time with Duration in TaskSlotTable
> ---
>
> Key: FLINK-27432
> URL: https://issues.apache.org/jira/browse/FLINK-27432
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25862) Refactor SharedStateRegistry to not limit StreamStateHandle to register/unregister

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-25862:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Refactor SharedStateRegistry to not limit StreamStateHandle to 
> register/unregister
> --
>
> Key: FLINK-25862
> URL: https://issues.apache.org/jira/browse/FLINK-25862
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Feifan Wang
>Priority: Minor
> Fix For: 1.19.0
>
>
> The current implementation of SharedStateRegistry uses `StreamStateHandle` 
> to register and unregister state. This limits its usage for other components, 
> such as the changelog state backend handles.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24179) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee fails on azure

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-24179:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee fails on azure
> ---
>
> Key: FLINK-24179
> URL: https://issues.apache.org/jira/browse/FLINK-24179
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.19.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23626=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=918e890f-5ed9-5212-a25e-962628fb4bc5=7339
> {code}
> Sep 06 23:42:30 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 59.927 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase
> Sep 06 23:42:30 [ERROR] testRecoveryWithExactlyOnceGuarantee  Time elapsed: 
> 10.505 s  <<< FAILURE!
> Sep 06 23:42:30 java.lang.AssertionError: expected:<[1, 2, 3, 4, 5, 6]> but 
> was:<[1, 2, 3, 4]>
> Sep 06 23:42:30   at org.junit.Assert.fail(Assert.java:89)
> Sep 06 23:42:30   at org.junit.Assert.failNotEquals(Assert.java:835)
> Sep 06 23:42:30   at org.junit.Assert.assertEquals(Assert.java:120)
> Sep 06 23:42:30   at org.junit.Assert.assertEquals(Assert.java:146)
> Sep 06 23:42:30   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.lambda$testRecoveryWithExactlyOnceGuarantee$1(KafkaSinkITCase.java:201)
> Sep 06 23:42:30   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithAssertion(KafkaSinkITCase.java:320)
> Sep 06 23:42:30   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee(KafkaSinkITCase.java:198)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-15740) Remove Deadline#timeLeft()

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-15740:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Remove Deadline#timeLeft()
> --
>
> Key: FLINK-15740
> URL: https://issues.apache.org/jira/browse/FLINK-15740
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Tests
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.19.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> As shown in FLINK-13662, {{Deadline#timeLeft()}} is conceptually broken since 
> there is no reliable way to call said method while ensuring that
>  a) the value is non-negative (desired since most time-based APIs reject 
> negative values)
>  b) the value sign (+,-) corresponds to preceding calls to {{#hasTimeLeft()}}
>  
> As a result any usage of the following form is unreliable and obfuscating 
> error messages.
> {code:java}
> while (deadline.hasTimeLeft()) {
>   doSomething(deadline.timeLeft());
> } {code}
>  
> All existing usage should be migrate to either
> {code:java}
> while (deadline.hasTimeLeft()) {
>   doSomething();
> } {code}
> or
> {code:java}
> while (true) {
>   doSomething(deadline.timeLeftIfAny());
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32074) Support file merging across checkpoints

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-32074:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Support file merging across checkpoints
> ---
>
> Key: FLINK-32074
> URL: https://issues.apache.org/jira/browse/FLINK-32074
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Zakelly Lan
>Assignee: Han Yin
>Priority: Major
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25900) Create view example does not assign alias to functions resulting in generated names like EXPR$5

2023-10-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-25900:

Fix Version/s: 1.19.0
   (was: 1.18.0)

> Create view example does not assign alias to functions resulting in generated 
> names like EXPR$5
> ---
>
> Key: FLINK-25900
> URL: https://issues.apache.org/jira/browse/FLINK-25900
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.14.3
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.19.0
>
>
> The create view example query:
> {noformat}
> Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, 
> CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), 
> NOW(), PROCTIME();
> {noformat}
> produces generated column names for CURRENT_ROW_TIMESTAMP() (EXPR$5), NOW() 
> (EXPR$6), and PROCTIME() (EXPR$7) since it does not assign aliases, as shown 
> below:
> {code:java}
> Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, 
> CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), 
> NOW(), PROCTIME();
> > 
> Flink SQL> describe MyView1;
> +-------------------+-----------------------------+-------+-----+--------+-----------+
> |              name |                        type |  null | key | extras | watermark |
> +-------------------+-----------------------------+-------+-----+--------+-----------+
> |         LOCALTIME |                     TIME(0) | FALSE |     |        |           |
> |    LOCALTIMESTAMP |                TIMESTAMP(3) | FALSE |     |        |           |
> |      CURRENT_DATE |                        DATE | FALSE |     |        |           |
> |      CURRENT_TIME |                     TIME(0) | FALSE |     |        |           |
> | CURRENT_TIMESTAMP |            TIMESTAMP_LTZ(3) | FALSE |     |        |           |
> |            EXPR$5 |            TIMESTAMP_LTZ(3) | FALSE |     |        |           |
> |            EXPR$6 |            TIMESTAMP_LTZ(3) | FALSE |     |        |           |
> |            EXPR$7 | TIMESTAMP_LTZ(3) *PROCTIME* | FALSE |     |        |           |
> +-------------------+-----------------------------+-------+-----+--------+-----------+
> 8 rows in set
>  
> {code}
>  
> The documentation shows aliased names 
> [Timezone|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/timezone/#decide-time-functions-return-value]
> {code:java}
> +-------------------------+-----------------------------+-------+-----+--------+-----------+
> |                    name |                        type |  null | key | extras | watermark |
> +-------------------------+-----------------------------+-------+-----+--------+-----------+
> |               LOCALTIME |                     TIME(0) | false |     |        |           |
> |          LOCALTIMESTAMP |                TIMESTAMP(3) | false |     |        |           |
> |            CURRENT_DATE |                        DATE | false |     |        |           |
> |            CURRENT_TIME |                     TIME(0) | false |     |        |           |
> |       CURRENT_TIMESTAMP |            TIMESTAMP_LTZ(3) | false |     |        |           |
> | CURRENT_ROW_TIMESTAMP() |            TIMESTAMP_LTZ(3) | false |     |        |           |
> |                   NOW() |            TIMESTAMP_LTZ(3) | false |     |        |           |
> |              PROCTIME() | TIMESTAMP_LTZ(3) *PROCTIME* | false |     |        |           |
> +-------------------------+-----------------------------+-------+-----+--------+-----------+
>  {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

