[ANNOUNCE] Apache Flink 1.20.0 released

2024-08-02 Thread weijie guo
The Apache Flink community is very happy to announce the release of Apache

Flink 1.20.0, which is the first release for the Apache Flink 1.20 series.


Apache Flink® is an open-source stream processing framework for

distributed, high-performing, always-available, and accurate data streaming

applications.


The release is available for download at:

https://flink.apache.org/downloads.html


Please check out the release blog post for an overview of the improvements
for this release:

https://flink.apache.org/2024/08/02/announcing-the-release-of-apache-flink-1.20/


The full release notes are available in Jira:

https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354210


We would like to thank all contributors of the Apache Flink community who

made this release possible!


Best,

Robert, Rui, Ufuk, Weijie


[jira] [Created] (FLINK-35959) CLONE - Updates the docs stable version

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35959:
--

 Summary: CLONE - Updates the docs stable version
 Key: FLINK-35959
 URL: https://issues.apache.org/jira/browse/FLINK-35959
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


Update docs to "stable" in {{docs/config.toml}} in the branch of the 
_just-released_ version:
 * Change {{Version}} from {{x.y-SNAPSHOT}} to {{x.y.z}}, i.e. {{1.6-SNAPSHOT}} to {{1.6.0}}
 * Change {{VersionTitle}} from {{x.y-SNAPSHOT}} to {{x.y}}, i.e. {{1.6-SNAPSHOT}} to {{1.6}}
 * Change {{Branch}} from {{master}} to {{release-x.y}}, i.e. {{master}} to {{release-1.6}}
 * Change {{baseURL}} from {{//ci.apache.org/projects/flink/flink-docs-master}} to {{//ci.apache.org/projects/flink/flink-docs-release-x.y}}
 * Change {{javadocs_baseurl}} from {{//ci.apache.org/projects/flink/flink-docs-master}} to {{//ci.apache.org/projects/flink/flink-docs-release-x.y}}
 * Change {{IsStable}} to {{true}}
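
For orientation, a hedged sketch of the resulting entries in {{docs/config.toml}} on the just-released branch, using 1.20.0 as an illustrative version (verify the exact key names against the file itself):
{code}
# illustrative target state on the release-1.20 branch
Version = "1.20.0"
VersionTitle = "1.20"
Branch = "release-1.20"
baseURL = "//ci.apache.org/projects/flink/flink-docs-release-1.20"
javadocs_baseurl = "//ci.apache.org/projects/flink/flink-docs-release-1.20"
IsStable = true
{code}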



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35960) CLONE - Start End of Life discussion thread for now outdated Flink minor version

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35960:
--

 Summary: CLONE - Start End of Life discussion thread for now 
outdated Flink minor version
 Key: FLINK-35960
 URL: https://issues.apache.org/jira/browse/FLINK-35960
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo


The idea is to discuss within the community whether we should do a final release 
for the now-unsupported minor version. Such a minor release is not expected to be 
handled by the current minor version's release managers; their only 
responsibility is to trigger the discussion.

The intention of a final patch release for the now unsupported Flink minor 
version is to flush out all the fixes that didn't end up in the previous 
release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35957) CLONE - Other announcements

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35957:
--

 Summary: CLONE - Other announcements
 Key: FLINK-35957
 URL: https://issues.apache.org/jira/browse/FLINK-35957
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: Lincoln Lee


h3. Recordkeeping

Use [reporter.apache.org|https://reporter.apache.org/addrelease.html?flink] to 
seed the information about the release into future project reports.

(Note: Only PMC members have access to report releases. If you do not have access, 
ask on the mailing list for assistance.)
h3. Flink blog

Major or otherwise important releases should have a blog post. Write one if 
needed for this particular release. Minor releases that don’t introduce new 
major functionality don’t necessarily need to be blogged (see [flink-web PR 
#581 for Flink 1.15.3|https://github.com/apache/flink-web/pull/581] as an 
example for a minor release blog post).

Please make sure that the release notes of the documentation (see section 
"Review and update documentation") are linked from the blog post of a major 
release.
We usually include the names of all contributors in the announcement blog post. 
Use the following command to get the list of contributors:
{code}
# the following export is required so that sort orders uppercase before lowercase
export LC_ALL=C
export FLINK_PREVIOUS_RELEASE_BRANCH=
export FLINK_CURRENT_RELEASE_BRANCH=
# e.g.
# export FLINK_PREVIOUS_RELEASE_BRANCH=release-1.17
# export FLINK_CURRENT_RELEASE_BRANCH=release-1.18
git log $(git merge-base master $FLINK_PREVIOUS_RELEASE_BRANCH)..$(git show-ref --hash ${FLINK_CURRENT_RELEASE_BRANCH}) \
  --pretty=format:"%an%n%cn" | sort -u | paste -sd, | sed "s/\,/\, /g"
{code}
h3. Social media

Tweet, post on Facebook, LinkedIn, and other platforms. Ask other contributors 
to do the same.
h3. Flink Release Wiki page

Add a summary of things that went well or not so well during the release 
process. This can include feedback from contributors as well as more general 
observations, such as the release having taken longer than initially 
anticipated (and why), to give a bit of context to the release process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35958) CLONE - Update reference data for Migration Tests

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35958:
--

 Summary: CLONE - Update reference data for Migration Tests
 Key: FLINK-35958
 URL: https://issues.apache.org/jira/browse/FLINK-35958
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee
 Fix For: 1.20.0, 1.19.1


Update migration tests in master to cover migration from the new version. Since 
1.18, this step can be done automatically with the following steps. For more 
information, please refer to [this 
page|https://github.com/apache/flink/blob/master/flink-test-utils-parent/flink-migration-test-utils/README.md].
 # *On the published release tag (e.g., release-1.16.0)*, run:
{code:bash}
$ mvn clean package -Pgenerate-migration-test-data -Dgenerate.version=1.16 -nsu -Dfast -DskipTests
{code}
The version (1.16 in the command above) should be replaced with the target one.

 # Modify the content of the file 
[apache/flink:flink-test-utils-parent/flink-migration-test-utils/src/main/resources/most_recently_published_version|https://github.com/apache/flink/blob/master/flink-test-utils-parent/flink-migration-test-utils/src/main/resources/most_recently_published_version]
 to the latest version (it would be "v1_16" if sticking to the example where 
1.16.0 was released). 
 # Commit the modifications from the two previous steps with the message 
"{_}[release] Generate reference data for state migration tests based on 
release-1.xx.0{_}" to the corresponding release branch (e.g. {{release-1.16}} in 
our example), replacing "xx" with the actual version (in this example "16"). If 
you have a dedicated Jira issue for this task, use its ID instead of "[release]" 
as the commit message prefix.

 # Cherry-pick the commit to the master branch. 
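
Taken together, a hedged end-to-end sketch of the steps above, assuming 1.16.0 is the just-released version (branch/tag names are illustrative and the set of generated files may differ):
{code:bash}
# 1) on the published release tag, generate the reference data
git checkout release-1.16.0
mvn clean package -Pgenerate-migration-test-data -Dgenerate.version=1.16 -nsu -Dfast -DskipTests

# 2) record the most recently published version
echo "v1_16" > flink-test-utils-parent/flink-migration-test-utils/src/main/resources/most_recently_published_version

# 3) commit both changes to the corresponding release branch ...
git checkout release-1.16
git add .
git commit -m "[release] Generate reference data for state migration tests based on release-1.16.0"

# 4) ... and cherry-pick that commit to master
git checkout master
git cherry-pick release-1.16
{code}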



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35954) CLONE - Merge website pull request

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35954:
--

 Summary: CLONE - Merge website pull request
 Key: FLINK-35954
 URL: https://issues.apache.org/jira/browse/FLINK-35954
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


Merge the website pull request to [list the 
release|http://flink.apache.org/downloads.html]. Make sure to regenerate the 
website as well, as it isn't built automatically.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35955) CLONE - Remove outdated versions

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35955:
--

 Summary: CLONE - Remove outdated versions
 Key: FLINK-35955
 URL: https://issues.apache.org/jira/browse/FLINK-35955
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo


h4. dist.apache.org

For a new major release remove all release files older than 2 versions, e.g., 
when releasing 1.7, remove all releases <= 1.5.

For a new bugfix version remove all release files for previous bugfix releases 
in the same series, e.g., when releasing 1.7.1, remove the 1.7.0 release.
# If you have not already, check out the Flink section of the {{release}} 
repository on {{[dist.apache.org|http://dist.apache.org/]}} via Subversion. In 
a fresh directory:
{code}
svn checkout https://dist.apache.org/repos/dist/release/flink --depth=immediates
cd flink
{code}
# Remove files for outdated releases and commit the changes.
{code}
svn remove flink-
svn commit
{code}
# Verify that the files are [removed|https://dist.apache.org/repos/dist/release/flink].
(!) Remember to remove the corresponding download links from the website.

h4. CI

Disable the cron job for the now-unsupported version in 
tools/azure-pipelines/[build-apache-repo.yml|https://github.com/apache/flink/blob/master/tools/azure-pipelines/build-apache-repo.yml] 
on the respective branch.
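
For illustration only, a hedged sketch of what a schedule entry looks like in Azure Pipelines YAML; the actual layout of {{build-apache-repo.yml}} may differ, so treat keys and branch names as placeholders:
{code:yaml}
# illustrative Azure Pipelines cron trigger
schedules:
  - cron: "0 0 * * *"
    displayName: Nightly build
    branches:
      include:
        - release-1.16   # remove or disable the entry for the now-unsupported version
    always: true
{code}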



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35961) CLONE - Build 1.19 docs in GitHub Action and mark 1.19 as stable in docs

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35961:
--

 Summary: CLONE - Build 1.19 docs in GitHub Action and mark 1.19 as 
stable in docs
 Key: FLINK-35961
 URL: https://issues.apache.org/jira/browse/FLINK-35961
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35953) CLONE - Update japicmp configuration

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35953:
--

 Summary: CLONE - Update japicmp configuration
 Key: FLINK-35953
 URL: https://issues.apache.org/jira/browse/FLINK-35953
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee
 Fix For: 1.20.0, 1.19.1


Update the japicmp reference version and wipe exclusions / enable API 
compatibility checks for {{@PublicEvolving}} APIs on the corresponding SNAPSHOT 
branch with the {{update_japicmp_configuration.sh}} script (see below).

For a new major release (x.y.0), run the same command also on the master branch 
for updating the japicmp reference version and removing outdated exclusions in 
the japicmp configuration.

Make sure that all Maven artifacts are already pushed to Maven Central. 
Otherwise, there's a risk that CI fails due to missing reference artifacts.
{code:bash}
tools $ NEW_VERSION=$RELEASE_VERSION releasing/update_japicmp_configuration.sh
tools $ cd ..
$ git add *
$ git commit -m "Update japicmp configuration for $RELEASE_VERSION"
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35956) CLONE - Apache mailing lists announcements

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35956:
--

 Summary: CLONE - Apache mailing lists announcements
 Key: FLINK-35956
 URL: https://issues.apache.org/jira/browse/FLINK-35956
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


Announce on the {{dev@}} mailing list that the release has been finished.

Announce the release on the {{user@}} mailing list, listing major 
improvements and contributions.

Announce the release on the [announce@apache.org|mailto:announce@apache.org] 
mailing list.
{quote}From: Release Manager
To: dev@flink.apache.org, user@flink.apache.org, user-zh@flink.apache.org, announce@apache.org
Subject: [ANNOUNCE] Apache Flink 1.2.3 released

The Apache Flink community is very happy to announce the release of Apache 
Flink 1.2.3, which is the third bugfix release for the Apache Flink 1.2 series.

Apache Flink® is an open-source stream processing framework for distributed, 
high-performing, always-available, and accurate data streaming applications.

The release is available for download at:
[https://flink.apache.org/downloads.html]

Please check out the release blog post for an overview of the improvements 
for this bugfix release:
{{}}

The full release notes are available in Jira:
{{}}

We would like to thank all contributors of the Apache Flink community who 
made this release possible!

Feel free to reach out to the release managers (or respond to this thread) 
with feedback on the release process. Our goal is to constantly improve the 
release process. Feedback on what could be improved or things that didn't go so 
well is appreciated.

Regards,
Release Manager
{quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35952) Promote release 1.20

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35952:
--

 Summary: Promote release 1.20
 Key: FLINK-35952
 URL: https://issues.apache.org/jira/browse/FLINK-35952
 Project: Flink
  Issue Type: New Feature
Affects Versions: 1.18.0
Reporter: Weijie Guo
Assignee: lincoln lee


Once the release has been finalized (FLINK-32920), the last step of the process 
is to promote the release within the project and beyond. Please wait for 24h 
after finalizing the release in accordance with the [ASF release 
policy|http://www.apache.org/legal/release-policy.html#release-announcements].

*Final checklist to declare this issue resolved:*
 # Website pull request to [list the 
release|http://flink.apache.org/downloads.html] merged
 # Release announced on the user@ mailing list.
 # Blog post published, if applicable.
 # Release recorded in 
[reporter.apache.org|https://reporter.apache.org/addrelease.html?flink].
 # Release announced on social media.
 # Completion declared on the dev@ mailing list.
 # Update Homebrew: [https://docs.brew.sh/How-To-Open-a-Homebrew-Pull-Request] 
(seems to be done automatically, at least for minor releases)
 # Updated the japicmp configuration
 ** corresponding SNAPSHOT branch japicmp reference version set to the just 
released version, and API compatibility checks for {{@PublicEvolving}} 
enabled
 ** (minor version release only) master branch japicmp reference version set to 
the just released version
 ** (minor version release only) master branch japicmp exclusions have been 
cleared
 # Update the list of previous versions in {{docs/config.toml}} on the master 
branch.
 # Set {{show_outdated_warning: true}} in {{docs/config.toml}} in the branch of 
the _now deprecated_ Flink version (i.e. 1.16 if 1.18.0 is released)
 # Update stable and master alias in 
[https://github.com/apache/flink/blob/master/.github/workflows/docs.yml]
 # Open a discussion thread about End of Life for the now-unsupported version (e.g. 1.16)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35950) CLONE - Publish the Dockerfiles for the new release

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35950:
--

 Summary: CLONE - Publish the Dockerfiles for the new release
 Key: FLINK-35950
 URL: https://issues.apache.org/jira/browse/FLINK-35950
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


Note: the official Dockerfiles fetch the binary distribution of the target 
Flink version from an Apache mirror. After publishing the binary release 
artifacts, mirrors can take some hours to start serving the new artifacts, so 
you may want to wait to do this step until you are ready to continue with the 
"Promote the release" steps in the follow-up Jira.

Follow the [release instructions in the flink-docker 
repo|https://github.com/apache/flink-docker#release-workflow] to build the new 
Dockerfiles and send an updated manifest to Docker Hub so the new images are 
built and published.

 

h3. Expectations

 * Dockerfiles in [flink-docker|https://github.com/apache/flink-docker] updated 
for the new Flink release, and a pull request opened against the Docker official-images 
repository with an updated manifest



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35948) CLONE - Deploy artifacts to Maven Central Repository

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35948:
--

 Summary: CLONE - Deploy artifacts to Maven Central Repository
 Key: FLINK-35948
 URL: https://issues.apache.org/jira/browse/FLINK-35948
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


Use the [Apache Nexus repository|https://repository.apache.org/] to release the 
staged binary artifacts to the Maven Central repository. In the Staging 
Repositories section, find the relevant release candidate orgapacheflink-XXX 
entry and click Release. Drop all other release candidates that are not being 
released.
h3. Deploy source and binary releases to dist.apache.org

Copy the source and binary releases from the dev repository to the release 
repository at [dist.apache.org|http://dist.apache.org/] using Subversion.
{code:java}
$ svn move -m "Release Flink ${RELEASE_VERSION}" \
    https://dist.apache.org/repos/dist/dev/flink/flink-${RELEASE_VERSION}-rc${RC_NUM} \
    https://dist.apache.org/repos/dist/release/flink/flink-${RELEASE_VERSION}
{code}
(Note: Only PMC members have access to the release repository. If you do not 
have access, ask on the mailing list for assistance.)
h3. Remove old release candidates from [dist.apache.org|http://dist.apache.org/]

Remove the old release candidates from 
[https://dist.apache.org/repos/dist/dev/flink] using Subversion.
{code:java}
$ svn checkout https://dist.apache.org/repos/dist/dev/flink --depth=immediates
$ cd flink
$ svn remove flink-${RELEASE_VERSION}-rc*
$ svn commit -m "Remove old release candidates for Apache Flink ${RELEASE_VERSION}"
{code}
 

h3. Expectations
 * Maven artifacts released and indexed in the [Maven Central 
Repository|https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.flink%22]
 (usually takes about a day to show up)
 * Source & binary distributions available in the release repository of 
[https://dist.apache.org/repos/dist/release/flink/]
 * Dev repository [https://dist.apache.org/repos/dist/dev/flink/] is empty
 * Website contains links to the new release binaries and sources on the download page
 * (for minor version updates) the front page references the correct new major 
release version and directs to the correct link



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35949) CLONE - Create Git tag and mark version as released in Jira

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35949:
--

 Summary: CLONE - Create Git tag and mark version as released in 
Jira
 Key: FLINK-35949
 URL: https://issues.apache.org/jira/browse/FLINK-35949
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


Create and push a new Git tag for the released version by copying the tag for 
the final release candidate, as follows:
{code:java}
$ git tag -s "release-${RELEASE_VERSION}" refs/tags/${TAG}^{} -m "Release Flink ${RELEASE_VERSION}"
$ git push  refs/tags/release-${RELEASE_VERSION}
{code}
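For illustration, a hedged, concrete instance of the commands above; the version, RC tag and the remote name "origin" are assumptions for this example:
{code:bash}
# illustrative values; adjust to the actual release and remote
RELEASE_VERSION="1.20.0"
TAG="release-1.20.0-rc2"   # the final release candidate tag
git tag -s "release-${RELEASE_VERSION}" "refs/tags/${TAG}^{}" -m "Release Flink ${RELEASE_VERSION}"
git push origin "refs/tags/release-${RELEASE_VERSION}"
{code}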
In JIRA, inside [version 
management|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions],
 hover over the current release and a settings menu will appear. Click Release, 
and select today’s date.

(Note: Only PMC members have access to the project administration. If you do 
not have access, ask on the mailing list for assistance.)

If PRs have been merged to the release branch after the last release 
candidate was tagged, make sure that the corresponding Jira tickets have the 
correct Fix Version set.

 

h3. Expectations
 * Release tagged in the source code repository
 * Release version finalized in JIRA. (Note: Not all committers have 
administrator access to JIRA. If you end up getting permissions errors ask on 
the mailing list for assistance)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35947) CLONE - Deploy Python artifacts to PyPI

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35947:
--

 Summary: CLONE - Deploy Python artifacts to PyPI
 Key: FLINK-35947
 URL: https://issues.apache.org/jira/browse/FLINK-35947
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


The release manager should create a PyPI account and ask the PMC to add this account 
to the pyflink collaborator list with the Maintainer role (the PyPI admin account info 
can be found here; NOTE: only visible to PMC members) in order to deploy the Python 
artifacts to PyPI. The artifacts can be uploaded using 
twine ([https://pypi.org/project/twine/]). To install twine, run:
{code:java}
pip install --upgrade twine==1.12.0
{code}
Download the python artifacts from dist.apache.org and upload them to pypi.org:
{code:java}
svn checkout https://dist.apache.org/repos/dist/dev/flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
cd flink-${RELEASE_VERSION}-rc${RC_NUM}

cd python

# upload wheels
for f in *.whl; do twine upload --repository-url https://upload.pypi.org/legacy/ $f $f.asc; done

# upload source packages
twine upload --repository-url https://upload.pypi.org/legacy/ \
  apache-flink-libraries-${RELEASE_VERSION}.tar.gz apache-flink-libraries-${RELEASE_VERSION}.tar.gz.asc

twine upload --repository-url https://upload.pypi.org/legacy/ \
  apache-flink-${RELEASE_VERSION}.tar.gz apache-flink-${RELEASE_VERSION}.tar.gz.asc
{code}
If the upload failed or was incorrect for some reason (e.g. a network transmission 
problem), you need to delete the uploaded release package of the same version 
(if it exists), rename the artifact to 
{{apache-flink-${RELEASE_VERSION}.post0.tar.gz}}, and then re-upload.

(!) Note: re-uploading to pypi.org must be avoided as much as possible because 
it causes irreparable problems: users can then no longer install the apache-flink 
package by explicitly specifying the original package version, i.e. the command 
"pip install apache-flink==${RELEASE_VERSION}" will fail. Instead, they have to 
run "pip install apache-flink" or "pip install 
apache-flink==${RELEASE_VERSION}.post0" to install the apache-flink package.

 

h3. Expectations
 * Python artifacts released and indexed in the 
[PyPI|https://pypi.org/project/apache-flink/] Repository



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35946) Finalize release 1.20.0

2024-08-01 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35946:
--

 Summary: Finalize release 1.20.0
 Key: FLINK-35946
 URL: https://issues.apache.org/jira/browse/FLINK-35946
 Project: Flink
  Issue Type: New Feature
Reporter: Weijie Guo
Assignee: lincoln lee
 Fix For: 1.19.0


Once the release candidate has been reviewed and approved by the community, the 
release should be finalized. This involves the final deployment of the release 
candidate to the release repositories, merging of the website changes, etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[RESULT][VOTE] Release 1.20.0, release candidate #2

2024-07-31 Thread weijie guo
Hi everyone,


I'm happy to announce that we have unanimously approved this release.


There are 7 approving votes, 5 of which are binding:


- Rui Fan(binding)

- Ahmed Hamdy(non-binding)

- Samrat Deb(non-binding)

- Jing Ge(binding)

- Xintong Song (binding)

- Qingsheng Ren(binding)

- Leonard Xu(binding)


There are no disapproving votes.


Thank you for verifying the release candidate. We will now proceed

to finalize the release and announce it once everything is published.



Best,

Robert, Rui, Ufuk, Weijie


Re: [VOTE] FLIP-469: Supports Adaptive Optimization of StreamGraph

2024-07-28 Thread weijie guo
+1(binding)

Best regards,

Weijie


Junrui Lee  于2024年7月29日周一 09:38写道:

> Hi everyone,
>
> Thanks for all the feedback about FLIP-469: Supports Adaptive Optimization
> of StreamGraph [1]. The discussion thread can be found here [2].
>
> The vote will be open for at least 72 hours unless there are any objections
> or insufficient votes.
>
> Best,
>
> Junrui
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-469%3A+Supports+Adaptive+Optimization+of+StreamGraph
>
> [2] https://lists.apache.org/thread/zs7sqpzvcvdb9y42ym6ndtn1fn7m2592
>


[VOTE] Release 1.20.0, release candidate #2

2024-07-25 Thread weijie guo
Hi everyone,


Please review and vote on the release candidate #2 for the version 1.20.0,

as follows:


[ ] +1, Approve the release

[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:

* JIRA release notes [1], and the pull request adding release note for
users [2]

* the official Apache source release and binary convenience releases to be

deployed to dist.apache.org [3], which are signed with the key with

fingerprint B2D64016B940A7E0B9B72E0D7D0528B28037D8BC  [4],

* all artifacts to be deployed to the Maven Central Repository [5],

* source code tag "release-1.20.0-rc2" [6],

* website pull request listing the new release and adding announcement blog

post [7].


The vote will be open for at least 72 hours. It is adopted by majority

approval, with at least 3 PMC affirmative votes.


[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354210

[2] https://github.com/apache/flink/pull/25091

[3] https://dist.apache.org/repos/dist/dev/flink/flink-1.20.0-rc2/

[4] https://dist.apache.org/repos/dist/release/flink/KEYS

[5] https://repository.apache.org/content/repositories/orgapacheflink-1752/

[6] https://github.com/apache/flink/releases/tag/release-1.20.0-rc2

[7] https://github.com/apache/flink-web/pull/751


Best,

Robert, Rui, Ufuk, Weijie


[jira] [Created] (FLINK-35882) CLONE - [Release-1.20] Vote on the release candidate

2024-07-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35882:
--

 Summary: CLONE - [Release-1.20] Vote on the release candidate
 Key: FLINK-35882
 URL: https://issues.apache.org/jira/browse/FLINK-35882
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.17.0
Reporter: Weijie Guo
Assignee: Weijie Guo
 Fix For: 1.17.0


Once you have built and individually reviewed the release candidate, please 
share it for the community-wide review. Please review foundation-wide [voting 
guidelines|http://www.apache.org/foundation/voting.html] for more information.

Start the review-and-vote thread on the dev@ mailing list. Here’s an email 
template; please adjust as you see fit.
{quote}From: Release Manager
To: dev@flink.apache.org
Subject: [VOTE] Release 1.2.3, release candidate #3

Hi everyone,
Please review and vote on the release candidate #3 for the version 1.2.3, as 
follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
 * JIRA release notes [1],
 * the official Apache source release and binary convenience releases to be 
deployed to dist.apache.org [2], which are signed with the key with fingerprint 
 [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag "release-1.2.3-rc3" [5],
 * website pull request listing the new release and adding announcement blog 
post [6].

The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.

Thanks,
Release Manager

[1] link
[2] link
[3] [https://dist.apache.org/repos/dist/release/flink/KEYS]
[4] link
[5] link
[6] link
{quote}
*If there are any issues found in the release candidate, reply on the vote 
thread to cancel the vote.* There’s no need to wait 72 hours. Proceed to the 
Fix Issues step below and address the problem. However, some issues don’t 
require cancellation. For example, if an issue is found in the website pull 
request, just correct it on the spot and the vote can continue as-is.

For cancelling a release, the release manager needs to send an email to the 
release candidate thread, stating that the release candidate is officially 
cancelled. Next, all artifacts created specifically for the RC in the previous 
steps need to be removed:
 * Delete the staging repository in Nexus
 * Remove the source / binary RC files from dist.apache.org
 * Delete the source code tag in git

*If there are no issues, reply on the vote thread to close the voting.* Then, 
tally the votes in a separate email. Here’s an email template; please adjust as 
you see fit.
{quote}From: Release Manager
To: dev@flink.apache.org
Subject: [RESULT] [VOTE] Release 1.2.3, release candidate #3

I'm happy to announce that we have unanimously approved this release.

There are XXX approving votes, XXX of which are binding:
 * approver 1
 * approver 2
 * approver 3
 * approver 4

There are no disapproving votes.

Thanks everyone!
{quote}
 

h3. Expectations
 * Community votes to release the proposed candidate, with at least three 
approving PMC votes

Any issues that are raised until the vote is over should be either resolved or 
moved into the next release (if applicable).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35880) CLONE - [Release-1.20] Stage source and binary releases on dist.apache.org

2024-07-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35880:
--

 Summary: CLONE - [Release-1.20] Stage source and binary releases 
on dist.apache.org
 Key: FLINK-35880
 URL: https://issues.apache.org/jira/browse/FLINK-35880
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.20.0
Reporter: Weijie Guo
Assignee: Weijie Guo
 Fix For: 1.20.0


Copy the source release to the dev repository of dist.apache.org:
# If you have not already, check out the Flink section of the dev repository on 
dist.apache.org via Subversion. In a fresh directory:
{code:bash}
$ svn checkout https://dist.apache.org/repos/dist/dev/flink --depth=immediates
{code}
# Make a directory for the new release and copy all the artifacts (Flink 
source/binary distributions, hashes, GPG signatures and the python 
subdirectory) into that newly created directory:
{code:bash}
$ mkdir flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
$ mv /tools/releasing/release/* flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
{code}
# Add and commit all the files.
{code:bash}
$ cd flink
flink $ svn add flink-${RELEASE_VERSION}-rc${RC_NUM}
flink $ svn commit -m "Add flink-${RELEASE_VERSION}-rc${RC_NUM}"
{code}
# Verify that files are present under 
[https://dist.apache.org/repos/dist/dev/flink|https://dist.apache.org/repos/dist/dev/flink].
# Push the release tag if not done already (the following command assumes it is 
called from within the apache/flink checkout):
{code:bash}
$ git push  refs/tags/release-${RELEASE_VERSION}-rc${RC_NUM}
{code}

 

h3. Expectations
 * Maven artifacts deployed to the staging repository of 
[repository.apache.org|https://repository.apache.org/content/repositories/]
 * Source distribution deployed to the dev repository of 
[dist.apache.org|https://dist.apache.org/repos/dist/dev/flink/]
 * Check hashes (e.g. shasum -c *.sha512)
 * Check signatures (e.g. {{gpg --verify flink-1.2.3-source-release.tar.gz.asc flink-1.2.3-source-release.tar.gz}})
 * {{grep}} for legal headers in each file.
 * If time allows, check in advance the NOTICE files of the modules whose 
dependencies have been changed in this release, since license issues pop up from 
time to time during voting. See the "Checking License" section of [Verifying a 
Flink Release|https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35881) CLONE - [Release-1.20] Propose a pull request for website updates

2024-07-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35881:
--

 Summary: CLONE - [Release-1.20] Propose a pull request for website 
updates
 Key: FLINK-35881
 URL: https://issues.apache.org/jira/browse/FLINK-35881
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.17.0
Reporter: Weijie Guo
Assignee: Weijie Guo
 Fix For: 1.20.0


The final step of building the candidate is to propose a website pull request 
containing the following changes:
 # update 
[apache/flink-web:_config.yml|https://github.com/apache/flink-web/blob/asf-site/_config.yml]
 ## update {{FLINK_VERSION_STABLE}} and {{FLINK_VERSION_STABLE_SHORT}} as 
required
 ## update version references in quickstarts ({{{}q/{}}} directory) as required
 ## (major only) add a new entry to {{flink_releases}} for the release binaries 
and sources
 ## (minor only) update the entry for the previous release in the series in 
{{flink_releases}}
 ### Please pay attention to the ids assigned to the download entries. They should 
be unique and reflect their corresponding version number.
 ## add a new entry to {{release_archive.flink}}
 # add a blog post announcing the release in _posts
 # add an organized release notes page under docs/content/release-notes and 
docs/content.zh/release-notes (like 
[https://nightlies.apache.org/flink/flink-docs-release-1.15/release-notes/flink-1.15/]).
 The page is based on the non-empty release notes collected from the issues; 
only issues that affect existing users should be included (rather than new 
functionality). It should be in a separate PR since it will be merged into the 
flink project.

(!) Don’t merge the PRs before finalizing the release.

 

h3. Expectations
 * Website pull request proposed to list the 
[release|http://flink.apache.org/downloads.html]
 * (major only) Check {{docs/config.toml}} to ensure that
 ** the version constants refer to the new version
 ** the {{baseurl}} does not point to {{flink-docs-master}}  but 
{{flink-docs-release-X.Y}} instead



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35879) CLONE - [Release-1.20] Build and stage Java and Python artifacts

2024-07-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35879:
--

 Summary: CLONE - [Release-1.20] Build and stage Java and Python 
artifacts
 Key: FLINK-35879
 URL: https://issues.apache.org/jira/browse/FLINK-35879
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.20.0
Reporter: Weijie Guo
Assignee: Weijie Guo
 Fix For: 1.20.0


# Create a local release branch ((!) this step cannot be skipped for minor 
releases):
{code:bash}
$ cd ./tools
tools/ $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION NEW_VERSION=$RELEASE_VERSION 
RELEASE_CANDIDATE=$RC_NUM releasing/create_release_branch.sh
{code}
 # Tag the release commit:
{code:bash}
$ git tag -s ${TAG} -m "${TAG}"
{code}
 # We now need to do several things:
 ## Create the source release archive
 ## Deploy jar artefacts to the [Apache Nexus 
Repository|https://repository.apache.org/], which is the staging area for 
deploying the jars to Maven Central
 ## Build PyFlink wheel packages
You might want to create a directory on your local machine for collecting the 
various source and binary releases before uploading them. Creating the binary 
releases is a lengthy process but you can do this on another machine (for 
example, in the "cloud"). When doing this, you can skip signing the release 
files on the remote machine, download them to your local machine and sign them 
there.
 # Build the source release:
{code:bash}
tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_source_release.sh
{code}
 # Stage the maven artifacts:
{code:bash}
tools $ releasing/deploy_staging_jars.sh
{code}
Review all staged artifacts ([https://repository.apache.org/]). They should 
contain all relevant parts for each module, including pom.xml, jar, test jar, 
source, test source, javadoc, etc. Carefully review any new artifacts.
 # Close the staging repository on Apache Nexus. When prompted for a 
description, enter “Apache Flink, version X, release candidate Y”.
Then, you need to build the PyFlink wheel packages (since 1.11):
 # Set up an azure pipeline in your own Azure account. You can refer to [Azure 
Pipelines|https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository]
 for more details on how to set up azure pipeline for a fork of the Flink 
repository. Note that a google cloud mirror in Europe is used for downloading 
maven artifacts, therefore it is recommended to set your [Azure organization 
region|https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-organization-location]
 to Europe to speed up the downloads.
 # Push the release candidate branch to your forked personal Flink repository, 
e.g.
{code:bash}
tools $ git push  
refs/heads/release-${RELEASE_VERSION}-rc${RC_NUM}:release-${RELEASE_VERSION}-rc${RC_NUM}
{code}
 # Trigger the Azure Pipelines manually to build the PyFlink wheel packages
 ## Go to your Azure Pipelines Flink project → Pipelines
 ## Click the "New pipeline" button on the top right
 ## Select "GitHub" → your GitHub Flink repository → "Existing Azure Pipelines 
YAML file"
 ## Select your branch → Set path to "/azure-pipelines.yaml" → click on 
"Continue" → click on "Variables"
 ## Then click "New Variable" button, fill the name with "MODE", and the value 
with "release". Click "OK" to set the variable and the "Save" button to save 
the variables, then back on the "Review your pipeline" screen click "Run" to 
trigger the build.
 ## You should now see a build where only the "CI build (release)" is running
 # Download the PyFlink wheel packages from the build result page after the 
jobs of "build_wheels mac" and "build_wheels linux" have finished.
 ## Download the PyFlink wheel packages
 ### Open the build result page of the pipeline
 ### Go to the {{Artifacts}} page (build_wheels linux -> 1 artifact)
 ### Click {{wheel_Darwin_build_wheels mac}} and {{wheel_Linux_build_wheels 
linux}} separately to download the zip files
 ## Unzip these two zip files
{code:bash}
$ cd /path/to/downloaded_wheel_packages
$ unzip wheel_Linux_build_wheels\ linux.zip
$ unzip wheel_Darwin_build_wheels\ mac.zip
{code}
 ## Create directory {{./dist}} under the directory of {{flink-python}}:
{code:bash}
$ cd 
$ mkdir flink-python/dist
{code}
 ## Move the unzipped wheel packages to the directory of {{flink-python/dist}}:
{code:java}
$ mv /path/to/wheel_Darwin_build_wheels\ mac/* flink-python/dist/
$ mv /path/to/wheel_Linux_build_wheels\ linux/* flink-python/dist/
$ cd tools
{code}

Finally, we create the binary convenience release files:
{code:bash}
tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_binary_release.sh
{code}
If you want to run this step in parallel on a remote 

[jira] [Created] (FLINK-35878) Build Release Candidate: 1.20.0-rc2

2024-07-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35878:
--

 Summary: Build Release Candidate: 1.20.0-rc2
 Key: FLINK-35878
 URL: https://issues.apache.org/jira/browse/FLINK-35878
 Project: Flink
  Issue Type: New Feature
Affects Versions: 1.20.0
Reporter: Weijie Guo
Assignee: Weijie Guo
 Fix For: 1.20.0


The core of the release process is the build-vote-fix cycle. Each cycle 
produces one release candidate. The Release Manager repeats this cycle until 
the community approves one release candidate, which is then finalized.

h4. Prerequisites
Set up a few environment variables to simplify Maven commands that follow. This 
identifies the release candidate being built. Start with {{RC_NUM}} equal to 1 
and increment it for each candidate:
{code}
RC_NUM="1"
TAG="release-${RELEASE_VERSION}-rc${RC_NUM}"
{code}
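Note that {{RELEASE_VERSION}} itself is assumed to be set beforehand; a minimal sketch with illustrative values for this release cycle:
{code}
export RELEASE_VERSION="1.20.0"
export RC_NUM="2"
export TAG="release-${RELEASE_VERSION}-rc${RC_NUM}"
{code}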



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release 1.20.0, release candidate #1

2024-07-22 Thread weijie guo
Hi everyone,

This rc is officially cancelled because of a critical bug in the materialized
table feature [1].

A new rc will be created soon.

[1] https://issues.apache.org/jira/browse/FLINK-35872

Best,

Robert, Rui, Ufuk, Weijie



Ron Liu  于2024年7月23日周二 09:34写道:

> Hi, Weijie
>
> Sorry about the newly discovered bug affecting the release process.
>
> The fix pr of https://issues.apache.org/jira/browse/FLINK-35872 has been
> merged.
>
> Best,
> Ron
>
> Feng Jin  于2024年7月22日周一 10:57写道:
>
> > Hi, weijie
> >
> > -1 (non-binding)
> >
> > During our testing process, we discovered a critical bug that impacts the
> > correctness of the materialized table.
> > A fix pr [1] is now prepared and will be merged within the next two days.
> >
> > I apologize for any inconvenience during the release process.
> >
> >
> > [1]. https://issues.apache.org/jira/browse/FLINK-35872
> >
> >
> > Best,
> >
> > Feng
> >
> >
> > On Fri, Jul 19, 2024 at 5:45 PM Xintong Song 
> > wrote:
> >
> > > +1 (binding)
> > >
> > > - reviewed flink-web PR
> > > - verified checksum and signature
> > > - verified source archives don't contain binaries
> > > - built from source
> > > - tried example jobs on a standalone cluster, and everything looks fine
> > >
> > > Best,
> > >
> > > Xintong
> > >
> > >
> > >
> > > On Thu, Jul 18, 2024 at 4:25 PM Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > > +1(binding)
> > > >
> > > > - Reviewed the flink-web PR (Left some comments)
> > > > - Checked Github release tag
> > > > - Verified signatures
> > > > - Verified sha512 (hashsums)
> > > > - The source archives don't contain any binaries
> > > > - Build the source with Maven 3 and java8 (Checked the license as
> well)
> > > > - Start the cluster locally with jdk8, and run the
> StateMachineExample
> > > job,
> > > > it works fine.
> > > >
> > > > Best,
> > > > Rui
> > > >
> > > > On Mon, Jul 15, 2024 at 10:59 PM weijie guo <
> guoweijieres...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Hi everyone,
> > > > >
> > > > >
> > > > > Please review and vote on the release candidate #1 for the version
> > > > 1.20.0,
> > > > >
> > > > > as follows:
> > > > >
> > > > >
> > > > > [ ] +1, Approve the release
> > > > >
> > > > > [ ] -1, Do not approve the release (please provide specific
> comments)
> > > > >
> > > > >
> > > > > The complete staging area is available for your review, which
> > includes:
> > > > >
> > > > > * JIRA release notes [1], and the pull request adding release note
> > for
> > > > >
> > > > > users [2]
> > > > >
> > > > > * the official Apache source release and binary convenience
> releases
> > to
> > > > be
> > > > >
> > > > > deployed to dist.apache.org [3], which are signed with the key
> with
> > > > >
> > > > > fingerprint 8D56AE6E7082699A4870750EA4E8C4C05EE6861F  [4],
> > > > >
> > > > > * all artifacts to be deployed to the Maven Central Repository [5],
> > > > >
> > > > > * source code tag "release-1.20.0-rc1" [6],
> > > > >
> > > > > * website pull request listing the new release and adding
> > announcement
> > > > blog
> > > > >
> > > > > post [7].
> > > > >
> > > > >
> > > > > The vote will be open for at least 72 hours. It is adopted by
> > majority
> > > > >
> > > > > approval, with at least 3 PMC affirmative votes.
> > > > >
> > > > >
> > > > > [1]
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354210
> > > > >
> > > > > [2] https://github.com/apache/flink/pull/25091
> > > > >
> > > > > [3] https://dist.apache.org/repos/dist/dev/flink/flink-1.20.0-rc1/
> > > > >
> > > > > [4] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > >
> > > > > [5]
> > > > >
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1744/
> > > > >
> > > > > [6]
> https://github.com/apache/flink/releases/tag/release-1.20.0-rc1
> > > > >
> > > > > [7] https://github.com/apache/flink-web/pull/751
> > > > >
> > > > >
> > > > > Best,
> > > > >
> > > > > Robert, Rui, Ufuk, Weijie
> > > > >
> > > >
> > >
> >
>


[jira] [Created] (FLINK-35860) S5CmdOnMinioITCase failed due to IllegalAccessError

2024-07-17 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35860:
--

 Summary: S5CmdOnMinioITCase failed due to IllegalAccessError
 Key: FLINK-35860
 URL: https://issues.apache.org/jira/browse/FLINK-35860
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 2.0.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-466: Introduce ProcessFunction Attribute in DataStream API V2

2024-07-17 Thread weijie guo
+1(binding)

Best regards,

Weijie


Wencong Liu  于2024年7月17日周三 21:31写道:

> Hi dev,
>
> I'd like to start a vote on FLIP-466.
>
> Discussion thread:
> https://lists.apache.org/thread/sw2or62299w0hw9jm5kdqjqj3j8rnrdt
> FLIP:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-466%3A+Introduce+ProcessFunction+Attribute+in+DataStream+API+V2
>
> Best regards,
> Wencong Liu


Re: [DISCUSS] FLIP-469: Supports Adaptive Optimization of StreamGraph

2024-07-17 Thread weijie guo
Thanks for the proposal!

I like this idea, as it gives Flink's adaptive batch processing more room
for optimization.

So, +1 from my side.

I just have two questions:

1. `StreamGraphOptimizationStrategy` is a reasonable abstraction. I'd like to
know what built-in strategy implementations you have in mind so far?

2. For the so-called pending operators, can we show them in different colors
on the UI?


Best regards,

Weijie


Zhu Zhu  于2024年7月17日周三 16:49写道:

> Thanks Junrui for the updates. The proposal looks good to me.
> With the stream graph added to the REST API result, I think we are
> also quite close to enable Flink to expand a job vertex to show its
> operator-chain topology.
>
> Thanks,
> Zhu
>
> Junrui Lee  于2024年7月15日周一 14:58写道:
>
> > Hi Zhu,
> >
> > Thanks for your feedback.
> >
> > Following your suggestion, I have updated the public interface section of
> > the FLIP with the following additions:
> >
> > 1. UI:
> > The job topology will display a hybrid of the current JobGraph along with
> > downstream components yet to be converted to a StreamGraph. On the
> topology
> > graph display page, there will be a "Show Pending Operators" button in
> the
> > upper right corner for users to switch back to a job topology that only
> > includes JobVertices.
> >
> > 2. Rest API:
> > Add a new field "stream-graph-plan" will be added to the job details REST
> > API, which represents the runtime Stream graph. The field "job-vertex-id"
> > is valid only when the StreamNode has been converted to a JobVertex, and
> it
> > will hold the ID of the corresponding JobVertex for that StreamNode.
> >
> > For further information, please feel free to review the public interface
> > section of FLIP-469
> >
> > Best,
> > Junrui
> >
> > Zhu Zhu  于2024年7月15日周一 10:29写道:
> >
> > > +1 for the FLIP
> > >
> > > It is useful to adaptively optimize logical execution plans(stream
> > > operators and
> > > stream edges) for batch jobs.
> > >
> > > One question:
> > > The FLIP already proposed to update the REST API & Web UI to show
> > operators
> > > that are not yet converted to job vertices. However, I think it would
> be
> > > better if Flink can display these operators as part of the graph,
> > allowing
> > > users to have an overview of the processing logic graph at early stages
> > of
> > > the job execution.
> > > This change would also involve the public interface, so instead of
> > > postponing
> > > it to a later FLIP, I prefer to have a design for it in this FLIP.
> WDYT?
> > >
> > > Thanks,
> > > Zhu
> > >
> > > Junrui Lee  于2024年7月11日周四 11:27写道:
> > >
> > > > Hi devs,
> > > >
> > > > Xia Sun, Lei Yang, and I would like to initiate a discussion about
> > > > FLIP-469: Supports Adaptive Optimization of StreamGraph.
> > > >
> > > > This FLIP is the second in the series on adaptive optimization of
> > > > StreamGraph and follows up on FLIP-468 [1]. As we proposed in
> FLIP-468
> > to
> > > > enable the scheduler to recognize and access the StreamGraph, in this
> > > FLIP,
> > > > we will propose a mechanism for adaptive optimization of StreamGraph.
> > It
> > > > allows the scheduler to dynamically adjust the logical execution plan
> > at
> > > > runtime. This mechanism is the base of adaptive optimization
> > strategies,
> > > > such as adaptive broadcast join and skewed join optimization.
> > > >
> > > > Unlike the traditional execution of jobs based on a static
> StreamGraph,
> > > > this mechanism will progressively determine StreamGraph during
> runtime.
> > > The
> > > > determined StreamGraph will be transformed into a specific JobGraph,
> > > while
> > > > the indeterminate part will allow Flink to flexibly adjust according
> to
> > > > real-time job status and actual input conditions.
> > > >
> > > > Note that this FLIP focuses on the introduction of the mechanism and
> > does
> > > > not introduce any actual optimization strategies; these will be
> > discussed
> > > > in subsequent FLIPs.
> > > >
> > > > For more details, please refer to FLIP-469 [2]. We look forward to
> your
> > > > feedback.
> > > >
> > > > Best,
> > > >
> > > > Xia Sun, Lei Yang and Junrui Lee
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-468%3A+Introducing+StreamGraph-Based+Job+Submission
> > > > [2]
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-469%3A+Supports+Adaptive+Optimization+of+StreamGraph
> > > >
> > >
> >
>


[VOTE] Release 1.20.0, release candidate #1

2024-07-15 Thread weijie guo
Hi everyone,


Please review and vote on the release candidate #1 for the version 1.20.0,

as follows:


[ ] +1, Approve the release

[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:

* JIRA release notes [1], and the pull request adding release note for

users [2]

* the official Apache source release and binary convenience releases to be

deployed to dist.apache.org [3], which are signed with the key with

fingerprint 8D56AE6E7082699A4870750EA4E8C4C05EE6861F  [4],

* all artifacts to be deployed to the Maven Central Repository [5],

* source code tag "release-1.20.0-rc1" [6],

* website pull request listing the new release and adding announcement blog

post [7].


The vote will be open for at least 72 hours. It is adopted by majority

approval, with at least 3 PMC affirmative votes.


[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354210

[2] https://github.com/apache/flink/pull/25091

[3] https://dist.apache.org/repos/dist/dev/flink/flink-1.20.0-rc1/

[4] https://dist.apache.org/repos/dist/release/flink/KEYS

[5] https://repository.apache.org/content/repositories/orgapacheflink-1744/

[6] https://github.com/apache/flink/releases/tag/release-1.20.0-rc1

[7] https://github.com/apache/flink-web/pull/751


Best,

Robert, Rui, Ufuk, Weijie


[jira] [Created] (FLINK-35845) CLONE - Build and stage Java and Python artifacts

2024-07-15 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35845:
--

 Summary: CLONE - Build and stage Java and Python artifacts
 Key: FLINK-35845
 URL: https://issues.apache.org/jira/browse/FLINK-35845
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: Qingsheng Ren


# Create a local release branch ((!) this step cannot be skipped for minor 
releases):
{code:bash}
$ cd ./tools
tools/ $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION NEW_VERSION=$RELEASE_VERSION 
RELEASE_CANDIDATE=$RC_NUM releasing/create_release_branch.sh
{code}
 # Tag the release commit:
{code:bash}
$ git tag -s ${TAG} -m "${TAG}"
{code}
 # We now need to do several things:
 ## Create the source release archive
 ## Deploy jar artefacts to the [Apache Nexus 
Repository|https://repository.apache.org/], which is the staging area for 
deploying the jars to Maven Central
 ## Build PyFlink wheel packages
You might want to create a directory on your local machine for collecting the 
various source and binary releases before uploading them. Creating the binary 
releases is a lengthy process but you can do this on another machine (for 
example, in the "cloud"). When doing this, you can skip signing the release 
files on the remote machine, download them to your local machine and sign them 
there.
 # Build the source release:
{code:bash}
tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_source_release.sh
{code}
 # Stage the maven artifacts:
{code:bash}
tools $ releasing/deploy_staging_jars.sh
{code}
Review all staged artifacts ([https://repository.apache.org/]). They should 
contain all relevant parts for each module, including pom.xml, jar, test jar, 
source, test source, javadoc, etc. Carefully review any new artifacts.
 # Close the staging repository on Apache Nexus. When prompted for a 
description, enter “Apache Flink, version X, release candidate Y”.
Then, you need to build the PyFlink wheel packages (since 1.11):
 # Set up an azure pipeline in your own Azure account. You can refer to [Azure 
Pipelines|https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository]
 for more details on how to set up azure pipeline for a fork of the Flink 
repository. Note that a google cloud mirror in Europe is used for downloading 
maven artifacts, therefore it is recommended to set your [Azure organization 
region|https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-organization-location]
 to Europe to speed up the downloads.
 # Push the release candidate branch to your forked personal Flink repository, 
e.g.
{code:bash}
tools $ git push  
refs/heads/release-${RELEASE_VERSION}-rc${RC_NUM}:release-${RELEASE_VERSION}-rc${RC_NUM}
{code}
 # Trigger the Azure Pipelines manually to build the PyFlink wheel packages
 ## Go to your Azure Pipelines Flink project → Pipelines
 ## Click the "New pipeline" button on the top right
 ## Select "GitHub" → your GitHub Flink repository → "Existing Azure Pipelines 
YAML file"
 ## Select your branch → Set path to "/azure-pipelines.yaml" → click on 
"Continue" → click on "Variables"
 ## Then click "New Variable" button, fill the name with "MODE", and the value 
with "release". Click "OK" to set the variable and the "Save" button to save 
the variables, then back on the "Review your pipeline" screen click "Run" to 
trigger the build.
 ## You should now see a build where only the "CI build (release)" is running
 # Download the PyFlink wheel packages from the build result page after the 
jobs of "build_wheels mac" and "build_wheels linux" have finished.
 ## Download the PyFlink wheel packages
 ### Open the build result page of the pipeline
 ### Go to the {{Artifacts}} page (build_wheels linux -> 1 artifact)
 ### Click {{wheel_Darwin_build_wheels mac}} and {{wheel_Linux_build_wheels 
linux}} separately to download the zip files
 ## Unzip these two zip files
{code:bash}
$ cd /path/to/downloaded_wheel_packages
$ unzip wheel_Linux_build_wheels\ linux.zip
$ unzip wheel_Darwin_build_wheels\ mac.zip
{code}
 ## Create directory {{./dist}} under the directory of {{flink-python}}:
{code:bash}
$ cd 
$ mkdir flink-python/dist
{code}
 ## Move the unzipped wheel packages to the directory of {{flink-python/dist}}:
{code:java}
$ mv /path/to/wheel_Darwin_build_wheels\ mac/* flink-python/dist/
$ mv /path/to/wheel_Linux_build_wheels\ linux/* flink-python/dist/
$ cd tools
{code}

Finally, we create the binary convenience release files:
{code:bash}
tools $ RELEASE_VERSION=$RELEASE_VERSION releasing/create_binary_release.sh
{code}
If you want to run this step in parallel on a remote machine you have to make 
the release commit available there (for example by p

[jira] [Created] (FLINK-35848) CLONE - Vote on the release candidate

2024-07-15 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35848:
--

 Summary: CLONE - Vote on the release candidate
 Key: FLINK-35848
 URL: https://issues.apache.org/jira/browse/FLINK-35848
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.17.0
Reporter: Weijie Guo
Assignee: Qingsheng Ren
 Fix For: 1.17.0


Once you have built and individually reviewed the release candidate, please 
share it for the community-wide review. Please review foundation-wide [voting 
guidelines|http://www.apache.org/foundation/voting.html] for more information.

Start the review-and-vote thread on the dev@ mailing list. Here’s an email 
template; please adjust as you see fit.
{quote}From: Release Manager
To: dev@flink.apache.org
Subject: [VOTE] Release 1.2.3, release candidate #3

Hi everyone,
Please review and vote on the release candidate #3 for the version 1.2.3, as 
follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
 * JIRA release notes [1],
 * the official Apache source release and binary convenience releases to be 
deployed to dist.apache.org [2], which are signed with the key with fingerprint 
 [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag "release-1.2.3-rc3" [5],
 * website pull request listing the new release and adding announcement blog 
post [6].

The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.

Thanks,
Release Manager

[1] link
[2] link
[3] [https://dist.apache.org/repos/dist/release/flink/KEYS]
[4] link
[5] link
[6] link
{quote}
*If there are any issues found in the release candidate, reply on the vote 
thread to cancel the vote.* There’s no need to wait 72 hours. Proceed to the 
Fix Issues step below and address the problem. However, some issues don’t 
require cancellation. For example, if an issue is found in the website pull 
request, just correct it on the spot and the vote can continue as-is.

For cancelling a release, the release manager needs to send an email to the 
release candidate thread, stating that the release candidate is officially 
cancelled. Next, all artifacts created specifically for the RC in the previous 
steps need to be removed:
 * Delete the staging repository in Nexus
 * Remove the source / binary RC files from dist.apache.org
 * Delete the source code tag in git

*If there are no issues, reply on the vote thread to close the voting.* Then, 
tally the votes in a separate email. Here’s an email template; please adjust as 
you see fit.
{quote}From: Release Manager
To: dev@flink.apache.org
Subject: [RESULT] [VOTE] Release 1.2.3, release candidate #3

I'm happy to announce that we have unanimously approved this release.

There are XXX approving votes, XXX of which are binding:
 * approver 1
 * approver 2
 * approver 3
 * approver 4

There are no disapproving votes.

Thanks everyone!
{quote}
 

h3. Expectations
 * Community votes to release the proposed candidate, with at least three 
approving PMC votes

Any issues that are raised until the vote is over should either be resolved or 
moved into the next release (if applicable).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35847) CLONE - Propose a pull request for website updates

2024-07-15 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35847:
--

 Summary: CLONE - Propose a pull request for website updates
 Key: FLINK-35847
 URL: https://issues.apache.org/jira/browse/FLINK-35847
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.17.0
Reporter: Weijie Guo
Assignee: Qingsheng Ren
 Fix For: 1.17.0


The final step of building the candidate is to propose a website pull request 
containing the following changes:
 # update 
[apache/flink-web:_config.yml|https://github.com/apache/flink-web/blob/asf-site/_config.yml]
 ## update {{FLINK_VERSION_STABLE}} and {{FLINK_VERSION_STABLE_SHORT}} as required (a hedged command-line sketch follows this list)
 ## update version references in quickstarts ({{{}q/{}}} directory) as required
 ## (major only) add a new entry to {{flink_releases}} for the release binaries 
and sources
 ## (minor only) update the entry for the previous release in the series in 
{{flink_releases}}
 ### Please pay attention to the ids assigned to the download entries. They should be unique and reflect their corresponding version number.
 ## add a new entry to {{release_archive.flink}}
 # add a blog post announcing the release in _posts
 # add an organized release notes page under docs/content/release-notes and docs/content.zh/release-notes (like 
[https://nightlies.apache.org/flink/flink-docs-release-1.15/release-notes/flink-1.15/]). The page is based on the non-empty release notes collected from the issues and should only include issues that affect existing users (i.e., not new functionality). It should be in a separate PR since it would be merged into the flink project.
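
Below is a hedged sketch of the {{FLINK_VERSION_STABLE}}/{{FLINK_VERSION_STABLE_SHORT}} bump from step 1 above. The exact key layout of {{_config.yml}} may differ, so treat the sed patterns as assumptions and review the diff before committing (GNU sed syntax; version values are illustrative):
{code:bash}
# Hypothetical sketch: bump the stable version variables in apache/flink-web:_config.yml
$ cd flink-web
$ sed -i 's/^FLINK_VERSION_STABLE:.*/FLINK_VERSION_STABLE: 1.2.3/' _config.yml
$ sed -i 's/^FLINK_VERSION_STABLE_SHORT:.*/FLINK_VERSION_STABLE_SHORT: 1.2/' _config.yml
# Always review the result before committing
$ git diff _config.yml
{code}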

(!) Don’t merge the PRs before finalizing the release.

 

h3. Expectations
 * Website pull request proposed to list the 
[release|http://flink.apache.org/downloads.html]
 * (major only) Check {{docs/config.toml}} to ensure that
 ** the version constants refer to the new version
 ** the {{baseurl}} does not point to {{flink-docs-master}}  but 
{{flink-docs-release-X.Y}} instead



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35846) CLONE - Stage source and binary releases on dist.apache.org

2024-07-15 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35846:
--

 Summary: CLONE - Stage source and binary releases on 
dist.apache.org
 Key: FLINK-35846
 URL: https://issues.apache.org/jira/browse/FLINK-35846
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: Qingsheng Ren


Copy the source release to the dev repository of dist.apache.org:
# If you have not already, check out the Flink section of the dev repository on 
dist.apache.org via Subversion. In a fresh directory:
{code:bash}
$ svn checkout https://dist.apache.org/repos/dist/dev/flink --depth=immediates
{code}
# Make a directory for the new release and copy all the artifacts (Flink 
source/binary distributions, hashes, GPG signatures and the python 
subdirectory) into that newly created directory:
{code:bash}
$ mkdir flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
$ mv /tools/releasing/release/* 
flink/flink-${RELEASE_VERSION}-rc${RC_NUM}
{code}
# Add and commit all the files.
{code:bash}
$ cd flink
flink $ svn add flink-${RELEASE_VERSION}-rc${RC_NUM}
flink $ svn commit -m "Add flink-${RELEASE_VERSION}-rc${RC_NUM}"
{code}
# Verify that files are present under [https://dist.apache.org/repos/dist/dev/flink|https://dist.apache.org/repos/dist/dev/flink] (a command-line sketch for this check follows this list).
# Push the release tag if not done already (the following command assumes to be 
called from within the apache/flink checkout):
{code:bash}
$ git push  refs/tags/release-${RELEASE_VERSION}-rc${RC_NUM}
{code}
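
For the "verify that files are present" step above, a quick command-line check (assuming the svn client is installed) can be used instead of browsing the URL:
{code:bash}
# List the dev area and confirm the new RC directory shows up
$ svn list https://dist.apache.org/repos/dist/dev/flink/ | grep "flink-${RELEASE_VERSION}-rc${RC_NUM}"
{code}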

 

h3. Expectations
 * Maven artifacts deployed to the staging repository of 
[repository.apache.org|https://repository.apache.org/content/repositories/]
 * Source distribution deployed to the dev repository of 
[dist.apache.org|https://dist.apache.org/repos/dist/dev/flink/]
 * Check hashes (e.g. shasum -c *.sha512)
 * Check signatures (e.g. {{{}gpg --verify 
flink-1.2.3-source-release.tar.gz.asc flink-1.2.3-source-release.tar.gz{}}})
 * {{grep}} for legal headers in each file (a minimal sketch follows this list).
 * If time allows, check the NOTICE files of the modules whose dependencies have been changed in this release in advance, since license issues pop up from time to time during voting. See the "Checking License" section of [Verifying a Flink Release|https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release].

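A minimal sketch of the legal-header check from the list above, assuming the source release has been extracted into a hypothetical {{flink-1.2.3/}} directory:
{code:bash}
# List files that do NOT contain the ASF license header; review the output manually,
# since some files legitimately carry no header
$ grep -RiL "Licensed to the Apache Software Foundation" flink-1.2.3/
{code}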


--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35844) Build Release Candidate: 1.20.0-rc1

2024-07-15 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35844:
--

 Summary:  Build Release Candidate: 1.20.0-rc1
 Key: FLINK-35844
 URL: https://issues.apache.org/jira/browse/FLINK-35844
 Project: Flink
  Issue Type: New Feature
Affects Versions: 1.17.0
Reporter: Weijie Guo
Assignee: Jing Ge
 Fix For: 1.17.0


The core of the release process is the build-vote-fix cycle. Each cycle 
produces one release candidate. The Release Manager repeats this cycle until 
the community approves one release candidate, which is then finalized.

h4. Prerequisites
Set up a few environment variables to simplify Maven commands that follow. This 
identifies the release candidate being built. Start with {{RC_NUM}} equal to 1 
and increment it for each candidate:
{code}
RC_NUM="1"
TAG="release-${RELEASE_VERSION}-rc${RC_NUM}"
{code}
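For illustration, with hypothetical values the tag expands as follows:
{code:bash}
$ RELEASE_VERSION="1.20.0"
$ RC_NUM="1"
$ TAG="release-${RELEASE_VERSION}-rc${RC_NUM}"
$ echo "${TAG}"
release-1.20.0-rc1
{code}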



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35837) InitOutputPathTest.testErrorOccursUnSynchronized failed in JDK21

2024-07-15 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35837:
--

 Summary: InitOutputPathTest.testErrorOccursUnSynchronized failed 
in JDK21
 Key: FLINK-35837
 URL: https://issues.apache.org/jira/browse/FLINK-35837
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 2.0.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35836) Some S5 related ITCases failed in JDK17 due to reflection issue

2024-07-15 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35836:
--

 Summary: Some S5 related ITCases failed in JDK17 due to reflection 
issue
 Key: FLINK-35836
 URL: https://issues.apache.org/jira/browse/FLINK-35836
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 2.0.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35835) SnapshotFileMergingCompatibilityITCase.testSwitchFromDisablingToEnablingFileMerging failed on AZP

2024-07-14 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35835:
--

 Summary: 
SnapshotFileMergingCompatibilityITCase.testSwitchFromDisablingToEnablingFileMerging
 failed on AZP
 Key: FLINK-35835
 URL: https://issues.apache.org/jira/browse/FLINK-35835
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35825) HiveTableSource supports report statistics for text file

2024-07-11 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35825:
--

 Summary: HiveTableSource supports report statistics for text file
 Key: FLINK-35825
 URL: https://issues.apache.org/jira/browse/FLINK-35825
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Reporter: Weijie Guo
Assignee: Weijie Guo
 Fix For: 2.0.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35801) testSwitchFromEnablingToDisablingFileMerging failed in AZP

2024-07-09 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35801:
--

 Summary: testSwitchFromEnablingToDisablingFileMerging failed in AZP
 Key: FLINK-35801
 URL: https://issues.apache.org/jira/browse/FLINK-35801
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[SUMMARY] Flink 1.20 Release Sync 07/09/2024

2024-07-09 Thread weijie guo
Dear devs,


This is the fourth meeting after feature freeze of Flink 1.20.


I'd like to share the information synced in the meeting.


- *Timeline*


Flink 1.20 doesn't have any blockers for now.


We plan to create and vote on release-1.20.0-rc1 next Monday if we're
not aware of any new blockers.


- *Cross-team release testing*


Thanks to all the developers involved, we only have two unclosed
release testing tickets left for now:

 -  Verify FLINK-26050 Too many small sst files in rocksdb state
backend when using time window created in ascending order
[1]

 -
Release Testing: Verify FLIP-306 Unified File Merging Mechanism for Checkpoints
[2]


We hope all cross-team release testing can be finished this week.


-*Sync meeting[3]*


The next meeting is 07/16/2024 9:30am (UTC+2) and 3:30pm (UTC+8),

please feel free to join us.


Lastly, we encourage attendees to fill out the topics to be discussed at
the bottom of 1.20 wiki page[4] a day in advance, to make it easier for
everyone to understand the background of the topics, thanks!



[1] https://issues.apache.org/jira/browse/FLINK-35738

[2] https://issues.apache.org/jira/browse/FLINK-35624

[3] https://meet.google.com/mtj-huez-apu

[4] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release



Best,

Robert, Rui, Ufuk, Weijie


[jira] [Created] (FLINK-35793) BatchSQLTest.testBatchSQL failed in hybrid shuffle selective mode

2024-07-09 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35793:
--

 Summary: BatchSQLTest.testBatchSQL failed in hybrid shuffle 
selective mode
 Key: FLINK-35793
 URL: https://issues.apache.org/jira/browse/FLINK-35793
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] Release 1.20.0, release candidate #0

2024-07-08 Thread weijie guo
Thanks Santwana and Martijn for the testing!

> I've closed my blocker ticket; after force killing all my Flink Java
processes, it did work when I restarted it.

That would be great!

Best regards,

Weijie


Martijn Visser  于2024年7月9日周二 03:04写道:

> Hi all,
>
> I've closed my blocker ticket; after force killing all my Flink Java
> processes, it did work when I restarted it.
>
> Thanks, Martijn
>
> On Mon, Jul 8, 2024 at 6:13 PM Santwana Verma  >
> wrote:
>
> > Hi everyone,
> >
> > Thanks for creating the release candidate. I've successfully validated
> the
> > release candidate locally with the DataStream API.
> >
> > 1. I created a DataStream API based job, which read and deserialized JSON
> > strings from an input Kafka topic using flink-connector-kafka,
> transformed
> > the data, and wrote it in the Avro format to an output Kafka topic.
> > 2. I used Maven dependencies for the job from the repository
> > https://repository.apache.org/content/repositories/orgapacheflink-1742
> > (flink version 1.20.0) to create the job JAR.
> > 3. I ran flink from the binaries within
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-1.20.0-rc0/flink-1.20.0-bin-scala_2.12.tgz
> > .
> > 4. The job ran as expected when I produced to the input topic with ~500k
> > msgs and consumed from the output topic.
> >
> > Best,
> > Santwana
> >
> > On Fri, Jun 28, 2024 at 9:39 PM weijie guo 
> > wrote:
> >
> > > Hi everyone,
> > >
> > >
> > > The release candidate #0(i.e. RC0) for Apache Flink 1.20.0 has been
> > > created.
> > >
> > >
> > > This RC is currently for preview only to facilitate the integrated
> > testing
> > > and
> > >
> > > we don't have to vote on it.
> > >
> > >
> > > RC1 is expected to be released a week later If we find no new blocker
> in
> > > RC0.
> > >
> > > The related voting process will be triggered once the announcement is
> > > ready.
> > >
> > >
> > > The RC0 has all the artifacts that we would typically have for a
> release,
> > > except
> > >
> > > for the release note and the website pull request for the release
> > > announcement.
> > >
> > >
> > > The following contents are available for your review:
> > >
> > >
> > > - The preview source release and binary convenience releases [1], which
> > >
> > > are signed with the key with fingerprint
> > > 8D56AE6E7082699A4870750EA4E8C4C05EE6861F [2].
> > >
> > >
> > > - All artifacts that would normally be deployed to the Maven
> > > Central Repository [3].
> > >
> > >
> > > - Source code tag "release-1.20.0-rc0" [4]
> > >
> > >
> > > Your help testing the release will be greatly appreciated! And we'll
> > >
> > > create the RC1 release and the voting thread as soon as all the efforts
> > are
> > >
> > > finished.
> > >
> > >
> > > [1] https://dist.apache.org/repos/dist/dev/flink/flink-1.20.0-rc0/
> > >
> > > [2] https://dist.apache.org/repos/dist/release/flink/KEYS
> > >
> > > [3]
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1742/
> > >
> > > [4] https://github.com/apache/flink/releases/tag/release-1.20.0-rc0
> > >
> > >
> > > Best,
> > >
> > > Robert, Rui, Ufuk, Weijie
> > >
> >
>


[jira] [Created] (FLINK-35756) FileSystemTableSourceStreamingITCase.testSourceWithRegexPattern produced no output for 900 seconds

2024-07-03 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35756:
--

 Summary: 
FileSystemTableSourceStreamingITCase.testSourceWithRegexPattern produced no 
output for 900 seconds
 Key: FLINK-35756
 URL: https://issues.apache.org/jira/browse/FLINK-35756
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.18.1
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35754) SqlGatewayE2ECase.testMaterializedTableInFullMode failed due to Internal Server Error

2024-07-03 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35754:
--

 Summary: SqlGatewayE2ECase.testMaterializedTableInFullMode failed 
due to Internal Server Error
 Key: FLINK-35754
 URL: https://issues.apache.org/jira/browse/FLINK-35754
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35753) ParquetColumnarRowInputFormatTest.testContinuousRepetition(int) failed due to ArrayIndexOutOfBoundsException

2024-07-03 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35753:
--

 Summary: 
ParquetColumnarRowInputFormatTest.testContinuousRepetition(int) failed due to 
ArrayIndexOutOfBoundsException
 Key: FLINK-35753
 URL: https://issues.apache.org/jira/browse/FLINK-35753
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-shaded 19.0, release candidate #1

2024-07-02 Thread weijie guo
+1 (binding)

- Verified hash and signature
- Build from source
- Verified website PR

Best regards,

Weijie


Martijn Visser  于2024年7月1日周一 21:30写道:

> +1 (binding)
>
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven
> - Verified licenses
> - Verified web PRs
>
> On Fri, Jun 28, 2024 at 2:02 PM Timo Walther  wrote:
>
> > +1 (binding)
> >
> > Thanks for fixing the JSON functions!
> >
> > Timo
> >
> > On 28.06.24 12:54, Dawid Wysakowicz wrote:
> > > Hi everyone,
> > > Please review and vote on the release candidate 1 for the version 19.0,
> > as
> > > follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> > > which are signed with the key with fingerprint
> > > EA93A435B4E2C9B4C9F533F631D2DD10BFC15A2D [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag release-19.0-rc1 [5],
> > > * website pull request listing the new release [6].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Dawid
> > >
> > > [1] https://issues.apache.org/jira/projects/FLINK/versions/12353853
> > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-shaded-19.0-rc1
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1743/
> > > [5]
> https://github.com/apache/flink-shaded/releases/tag/release-19.0-rc1
> > > [6] https://github.com/apache/flink-web/pull/749
> > >
> >
> >
>


[ANNOUNCE] Release 1.20.0, release candidate #0

2024-06-28 Thread weijie guo
Hi everyone,


The release candidate #0(i.e. RC0) for Apache Flink 1.20.0 has been created.


This RC is currently for preview only to facilitate the integrated testing and

we don't have to vote on it.


RC1 is expected to be released a week later if we find no new blockers in RC0.

The related voting process will be triggered once the announcement is ready.


The RC0 has all the artifacts that we would typically have for a release,
except

for the release note and the website pull request for the release
announcement.


The following contents are available for your review:


- The preview source release and binary convenience releases [1], which

are signed with the key with fingerprint
8D56AE6E7082699A4870750EA4E8C4C05EE6861F [2].


- All artifacts that would normally be deployed to the Maven
Central Repository [3].


- Source code tag "release-1.20.0-rc0" [4]


Your help testing the release will be greatly appreciated! And we'll

create the RC1 release and the voting thread as soon as all the efforts are

finished.


[1] https://dist.apache.org/repos/dist/dev/flink/flink-1.20.0-rc0/

[2] https://dist.apache.org/repos/dist/release/flink/KEYS

[3] https://repository.apache.org/content/repositories/orgapacheflink-1742/

[4] https://github.com/apache/flink/releases/tag/release-1.20.0-rc0


Best,

Robert, Rui, Ufuk, Weijie


[jira] [Created] (FLINK-35712) [Release-1.20] Prepare RC0 release

2024-06-26 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35712:
--

 Summary: [Release-1.20] Prepare RC0 release
 Key: FLINK-35712
 URL: https://issues.apache.org/jira/browse/FLINK-35712
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.20.0
Reporter: Weijie Guo
Assignee: Weijie Guo
 Fix For: 1.20.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35711) LocalRecoveryITCase failed on AZP due to exit code: 127

2024-06-26 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35711:
--

 Summary: LocalRecoveryITCase failed on AZP due to exit code: 127
 Key: FLINK-35711
 URL: https://issues.apache.org/jira/browse/FLINK-35711
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35710) ClusterEntrypointTest failed on AZP due to exit code: 239

2024-06-26 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35710:
--

 Summary: ClusterEntrypointTest failed on AZP due to exit code: 239
 Key: FLINK-35710
 URL: https://issues.apache.org/jira/browse/FLINK-35710
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[SUMMARY] Flink 1.20 Release Sync 06/25/2024

2024-06-25 Thread weijie guo
Dear devs,


This is the second meeting after feature freeze of Flink 1.20.


I'd like to share the information synced in the meeting.


-*Cut branch*


 The release-1.20 branch was cut on Tuesday, and we also updated the
version of the master branch to 2.0-SNAPSHOT (confirmed with all RMs of
2.0).


 For PRs that should be present in 1.20.0, please make sure:

* Merge the PR into both master and release-1.20 branches

* The JIRA ticket should be closed with the correct fix-versions (1.20.0).


-*Cross-team release testing*


 Release testing has already started; in the meantime, there are still several FLIPs for which it needs to be confirmed whether cross-team testing is required[1].

The RMs have created related tickets covering all the features listed on the 1.20 wiki page[2] as well as other completed FLIPs.


 Also, contributors are encouraged to create tickets if there are other ones that need to be cross-team tested (just create a new ticket for testing using the title 'Release Testing: Verify ...' without the 'Instructions' keyword).


Progress of Release Testing (we plan to finish all by the end of next week):

   - total 13 FLIPs/features, 9 confirmed, 4 waiting for response

   - testing instructions ready: 9 (1 assigned)


-*Blockers*


 Congrats, no known blocker for now.


-*Release notes [Highlights]*


For new features and behavior changes that are still missing a 'Release Note', please help fill out that column in JIRA (click the Edit button and scroll the page to the middle), as this is important for users and will be part of the release announcement.


-*Sync meeting[3]*


 The next meeting is 07/02/2024 10am (UTC+2) and 4pm (UTC+8), please
feel free to join us.


Lastly, we encourage attendees to fill out the topics to be discussed at
the bottom of 1.20 wiki page[2] a day in advance, to make it easier for
everyone to understand the background of the topics, thanks!


[1] https://issues.apache.org/jira/browse/FLINK-35602

[2] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release

[3] https://meet.google.com/mtj-huez-apu


Best,

Robert, Rui, Ufuk, Weijie


[ANNOUNCE] release-1.20 branch cut

2024-06-24 Thread weijie guo
Hi devs,


The release-1.20 branch has been forked out from the master branch, with

commit ID 987b7beb0540c2bf452cbf2fe07f3a1f512f0e6f.


The version on the master branch has been upgraded to 2.0-SNAPSHOT according to

the 1.19 → 1.20 → 2.0 release sequence
mentioned in 2.0 release wiki page[1]

and also confirmed with all RMs of 2.0.


From now on, for PRs that should be present in 1.20.0, please make sure:

* Merge the PR into both master and release-1.20 branches

* The JIRA ticket should be closed with the correct fix-versions (1.20.0).


Release testing has already started; there are still several FLIPs for which it needs to be confirmed whether cross-team testing is required[2].


Also, contributors are encouraged to create tickets if there are other ones that need to be cross-team tested (just create a new ticket for testing using the title 'Release Testing: Verify ...' without the 'Instructions' keyword).


We plan to finish all release testing within two weeks, and please
update the “X-team

verified” column in the 1.20 release wiki page [3] in the meantime.


Also, we’d like to thank all contributors who put effort into stabilizing

the CI on the master branch in the past weeks, and look forward to

stabilizing new features in the coming weeks.


We also plan to build the rc0 release soon. It's only for testing purposes and we encourage contributors to test it out and give us any feedback.


Good luck with your release testing!


[1] https://cwiki.apache.org/confluence/display/FLINK/2.0+Release

[2] https://issues.apache.org/jira/browse/FLINK-35602

[3] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release


Best regards,

Robert, Rui, Ufuk and Weijie


[jira] [Created] (FLINK-35683) CLONE - Verify that no exclusions were erroneously added to the japicmp plugin

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35683:
--

 Summary: CLONE - Verify that no exclusions were erroneously added 
to the japicmp plugin
 Key: FLINK-35683
 URL: https://issues.apache.org/jira/browse/FLINK-35683
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo


Verify that no exclusions were erroneously added to the japicmp plugin that break compatibility guarantees. Check the exclusions for the japicmp-maven-plugin in the root pom (see [apache/flink:pom.xml:2175ff|https://github.com/apache/flink/blob/3856c49af77601cf7943a5072d8c932279ce46b4/pom.xml#L2175]) for exclusions that:
* For minor releases: break source compatibility for {{@Public}} APIs
* For patch releases: break source/binary compatibility for {{@Public}}/{{@PublicEvolving}} APIs
Any such exclusion must be properly justified in advance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35678) CLONE - Review and update documentation

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35678:
--

 Summary: CLONE - Review and update documentation
 Key: FLINK-35678
 URL: https://issues.apache.org/jira/browse/FLINK-35678
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.19.0
Reporter: Weijie Guo
 Fix For: 1.19.0


There are a few pages in the documentation that need to be reviewed and updated 
for each release.
 * Ensure that there exists a release notes page for each non-bugfix release 
(e.g., 1.5.0) in {{{}./docs/release-notes/{}}}, that it is up-to-date, and 
linked from the start page of the documentation.
 * Upgrading Applications and Flink Versions: 
[https://ci.apache.org/projects/flink/flink-docs-master/ops/upgrading.html]
 * ...

 

h3. Expectations
 * Update upgrade compatibility table 
([apache-flink:./docs/content/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content/docs/ops/upgrading.md#compatibility-table]
 and 
[apache-flink:./docs/content.zh/docs/ops/upgrading.md|https://github.com/apache/flink/blob/master/docs/content.zh/docs/ops/upgrading.md#compatibility-table]).
 * Update [Release Overview in 
Confluence|https://cwiki.apache.org/confluence/display/FLINK/Release+Management+and+Feature+Plan]
 * (minor only) The documentation for the new major release is visible under 
[https://nightlies.apache.org/flink/flink-docs-release-$SHORT_RELEASE_VERSION] 
(after at least one [doc 
build|https://github.com/apache/flink/actions/workflows/docs.yml] succeeded).
 * (minor only) The documentation for the new major release does not contain 
"-SNAPSHOT" in its version title, and all links refer to the corresponding 
version docs instead of {{{}master{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35679) CLONE - Cross team testing

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35679:
--

 Summary: CLONE - Cross team testing
 Key: FLINK-35679
 URL: https://issues.apache.org/jira/browse/FLINK-35679
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


For user facing features that go into the release we'd like to ensure they can 
actually _be used_ by Flink users. To achieve this the release managers ensure 
that an issue for cross team testing is created in the Apache Flink Jira. This 
can and should be picked up by other community members to verify the 
functionality and usability of the feature.
The issue should contain some entry points which enables other community 
members to test it. It should not contain documentation on how to use the 
feature as this should be part of the actual documentation. The cross team 
tests are performed after the feature freeze. Documentation should be in place 
before that. Those tests are manual tests, so do not confuse them with 
automated tests.
To sum that up:
 * User facing features should be tested by other contributors
 * The scope is usability and sanity of the feature
 * The feature needs to be already documented
 * The contributor creates an issue containing some pointers on how to get 
started (e.g. link to the documentation, suggested targets of verification)
 * Other community members pick those issues up and provide feedback
 * Cross team testing happens right after the feature freeze

 

h3. Expectations
 * Jira issues for each expected release task according to the release plan is 
created and labeled as {{{}release-testing{}}}.
 * All the created release-testing-related Jira issues are resolved and the 
corresponding blocker issues are fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35680) CLONE - Review Release Notes in JIRA

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35680:
--

 Summary: CLONE - Review Release Notes in JIRA
 Key: FLINK-35680
 URL: https://issues.apache.org/jira/browse/FLINK-35680
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: lincoln lee


JIRA automatically generates Release Notes based on the {{Fix Version}} field 
applied to issues. Release Notes are intended for Flink users (not Flink 
committers/contributors). You should ensure that Release Notes are informative 
and useful.

Open the release notes from the version status page by choosing the release 
underway and clicking Release Notes.

You should verify that the issues listed automatically by JIRA are appropriate 
to appear in the Release Notes. Specifically, issues should:
 * Be appropriately classified as {{{}Bug{}}}, {{{}New Feature{}}}, 
{{{}Improvement{}}}, etc.
 * Represent noteworthy user-facing changes, such as new functionality, 
backward-incompatible API changes, or performance improvements.
 * Have occurred since the previous release; an issue that was introduced and 
fixed between releases should not appear in the Release Notes.
 * Have an issue title that makes sense when read on its own.

Adjust any of the above properties to improve the clarity and presentation of the Release Notes.

Ensure that the JIRA release notes are also included in the release notes of 
the documentation (see section "Review and update documentation").
h4. Content of Release Notes field from JIRA tickets 

To get the list of "Release Note" fields from JIRA, you can run the following script using the JIRA REST API (note that maxResults limits the number of entries):
{code:bash}
curl -s "https://issues.apache.org/jira/rest/api/2/search?maxResults=200&jql=project%20%3D%20FLINK%20AND%20%22Release%20Note%22%20is%20not%20EMPTY%20and%20fixVersion%20%3D%20${RELEASE_VERSION}" \
 | jq '.issues[]|.key,.fields.summary,.fields.customfield_12310192' | paste - - -
{code}
{{jq}} is present in most Linux distributions and on macOS can be installed via brew.
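For example, on macOS with Homebrew already installed:
{code:bash}
$ brew install jq
$ jq --version
{code}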

 

h3. Expectations
 * Release Notes in JIRA have been audited and adjusted



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35682) CLONE - Create a release branch

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35682:
--

 Summary: CLONE - Create a release branch
 Key: FLINK-35682
 URL: https://issues.apache.org/jira/browse/FLINK-35682
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.19.0
Reporter: Weijie Guo
Assignee: lincoln lee
 Fix For: 1.19.0


If you are doing a new minor release, you need to update Flink version in the 
following repositories and the [AzureCI project 
configuration|https://dev.azure.com/apache-flink/apache-flink/]:
 * [apache/flink|https://github.com/apache/flink]
 * [apache/flink-docker|https://github.com/apache/flink-docker]
 * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]

Patch releases don't require these repositories to be touched. Simply check out the already existing branch for that version:
{code:java}
$ git checkout release-$SHORT_RELEASE_VERSION
{code}
h4. Flink repository

Create a branch for the new version that we want to release before updating the 
master branch to the next development version:
{code:bash}
$ cd ./tools
tools $ releasing/create_snapshot_branch.sh
tools $ git checkout master
tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
{code}
In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 as the last entry:
{code:java}
// ...
v1_12("1.12"),
v1_13("1.13"),
v1_14("1.14"),
v1_15("1.15"),
v1_16("1.16");
{code}

Additionally in master, update the branch list of the GitHub Actions nightly 
workflow (see 
[apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
 The two most-recent releases and master should be covered.

The newly created branch and updated {{master}} branch need to be pushed to the 
official repository.
h4. Flink Docker Repository

Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
[apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
sure that 
[apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
 points to the correct snapshot version; for {{dev-x.y}} it should point to 
{{{}x.y-SNAPSHOT{}}}, while for {{dev-master}} it should point to the most 
recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).

After pushing the new minor release branch, as the last step you should also 
update the documentation workflow to also build the documentation for the new 
release branch. Check [Managing 
Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
 on details on how to do that. You may also want to manually trigger a build to 
make the changes visible as soon as possible.

h4. Flink Benchmark Repository
First of all, check out the {{master}} branch as a {{dev-x.y}} branch in 
[apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
we have a branch named {{dev-x.y}} which can be built on top of 
${{CURRENT_SNAPSHOT_VERSION}}.

Then, inside the repository you need to manually update the {{flink.version}} 
property inside the parent *pom.xml* file. It should be pointing to the most 
recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
{code:xml}
<flink.version>1.18-SNAPSHOT</flink.version>
{code}
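As a hedged alternative to editing the pom by hand, the versions-maven-plugin can set the property; treat this as a sketch and double-check the resulting diff:
{code:bash}
$ cd flink-benchmarks
# set-property updates the flink.version property in place; requires the versions-maven-plugin from Maven Central
$ mvn versions:set-property -Dproperty=flink.version -DnewVersion=${NEXT_SNAPSHOT_VERSION}
$ git diff pom.xml
{code}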

h4. AzureCI Project Configuration
The new release branch needs to be configured within AzureCI to make Azure aware of the new release branch. This can only be handled by Ververica employees since they own the AzureCI setup.
 

h3. Expectations (Minor Version only if not stated otherwise)
 * Release branch has been created and pushed
 * Changes on the new release branch are picked up by [Azure 
CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
 * {{master}} branch has the version information updated to the new version (check pom.xml files and the 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 enum)
 *  
[apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]
 should have the new release branch included
 * New version is added to the 
[apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
 enum.
 * Make sure [flink-docker|https://github.com/apache/flink-docker/] has 
{{dev-x.y}} branch an

[jira] [Created] (FLINK-35681) CLONE - Select executing Release Manager

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35681:
--

 Summary: CLONE - Select executing Release Manager
 Key: FLINK-35681
 URL: https://issues.apache.org/jira/browse/FLINK-35681
 Project: Flink
  Issue Type: Sub-task
  Components: Release System
Affects Versions: 1.19.0
Reporter: Weijie Guo
Assignee: lincoln lee
 Fix For: 1.19.0


h4. GPG Key

You need to have a GPG key to sign the release artifacts. Please be aware of 
the ASF-wide [release signing 
guidelines|https://www.apache.org/dev/release-signing.html]. If you don’t have 
a GPG key associated with your Apache account, please create one according to 
the guidelines.

Determine your Apache GPG Key and Key ID, as follows:
{code:java}
$ gpg --list-keys
{code}
This will list your GPG keys. One of these should reflect your Apache account, 
for example:
{code:java}
--
pub   2048R/845E6689 2016-02-23
uid  Nomen Nescio 
sub   2048R/BA4D50BE 2016-02-23
{code}
In the example above, the key ID is the 8-digit hex string in the {{pub}} line: 
{{{}845E6689{}}}.

Now, add your Apache GPG key to the Flink’s {{KEYS}} file in the [Apache Flink 
release KEYS file|https://dist.apache.org/repos/dist/release/flink/KEYS] 
repository at [dist.apache.org|http://dist.apache.org/]. Follow the 
instructions listed at the top of these files. (Note: Only PMC members have 
write access to the release repository. If you end up getting 403 errors ask on 
the mailing list for assistance.)

Configure {{git}} to use this key when signing code by giving it your key ID, 
as follows:
{code:java}
$ git config --global user.signingkey 845E6689
{code}
You may drop the {{--global}} option if you’d prefer to use this key for the 
current repository only.

You may wish to start {{gpg-agent}} to unlock your GPG key only once using your 
passphrase. Otherwise, you may need to enter this passphrase hundreds of times. 
The setup for {{gpg-agent}} varies based on operating system, but may be 
something like this:
{code:bash}
$ eval $(gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info)
$ export GPG_TTY=$(tty)
$ export GPG_AGENT_INFO
{code}
h4. Access to Apache Nexus repository

Configure access to the [Apache Nexus 
repository|https://repository.apache.org/], which enables final deployment of 
releases to the Maven Central Repository.
 # You log in with your Apache account.
 # Confirm you have appropriate access by finding {{org.apache.flink}} under 
{{{}Staging Profiles{}}}.
 # Navigate to your {{Profile}} (top right drop-down menu of the page).
 # Choose {{User Token}} from the dropdown, then click {{{}Access User 
Token{}}}. Copy a snippet of the Maven XML configuration block.
 # Insert this snippet twice into your global Maven {{settings.xml}} file, 
typically {{{}${HOME}/.m2/settings.xml{}}}. The end result should look like 
this, where {{TOKEN_NAME}} and {{TOKEN_PASSWORD}} are your secret tokens:
{code:xml}
<settings>
   <servers>
     <server>
       <id>apache.releases.https</id>
       <username>TOKEN_NAME</username>
       <password>TOKEN_PASSWORD</password>
     </server>
     <server>
       <id>apache.snapshots.https</id>
       <username>TOKEN_NAME</username>
       <password>TOKEN_PASSWORD</password>
     </server>
   </servers>
</settings>
{code}

h4. Website development setup

Get ready for updating the Flink website by following the [website development 
instructions|https://flink.apache.org/contributing/improve-website.html].
h4. GNU Tar Setup for Mac (Skip this step if you are not using a Mac)

The default tar application on Mac does not support GNU archive format and 
defaults to Pax. This bloats the archive with unnecessary metadata that can 
result in additional files when decompressing (see [1.15.2-RC2 vote 
thread|https://lists.apache.org/thread/mzbgsb7y9vdp9bs00gsgscsjv2ygy58q]). 
Install gnu-tar and create a symbolic link so that it is used in preference to the default tar program.
{code:bash}
$ brew install gnu-tar
$ ln -s /usr/local/bin/gtar /usr/local/bin/tar
$ which tar
{code}
 

h3. Expectations
 * Release Manager’s GPG key is published to 
[dist.apache.org|http://dist.apache.org/]
 * Release Manager’s GPG key is configured in git configuration
 * Release Manager's GPG key is configured as the default gpg key.
 * Release Manager has {{org.apache.flink}} listed under Staging Profiles in 
Nexus
 * Release Manager’s Nexus User Token is configured in settings.xml



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35676) CLONE - Create a new version in JIRA

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35676:
--

 Summary: CLONE - Create a new version in JIRA
 Key: FLINK-35676
 URL: https://issues.apache.org/jira/browse/FLINK-35676
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo
Assignee: Matthias Pohl


When contributors resolve an issue in JIRA, they are tagging it with a release 
that will contain their changes. With the release currently underway, new 
issues should be resolved against a subsequent future release. Therefore, you 
should create a release item for this subsequent release, as follows:
 # In JIRA, navigate to the [Flink > Administration > 
Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
 # Add a new release: choose the next minor version number compared to the one 
currently underway, select today’s date as the Start Date, and choose Add.
(Note: Only PMC members have access to the project administration. If you do 
not have access, ask on the mailing list for assistance.)

 

h3. Expectations
 * The new version should be listed in the dropdown menu of {{fixVersion}} or 
{{affectedVersion}} under "unreleased versions" when creating a new Jira issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35677) CLONE - Triage release-blocking issues in JIRA

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35677:
--

 Summary: CLONE - Triage release-blocking issues in JIRA
 Key: FLINK-35677
 URL: https://issues.apache.org/jira/browse/FLINK-35677
 Project: Flink
  Issue Type: Sub-task
Reporter: Weijie Guo


There could be outstanding release-blocking issues, which should be triaged 
before proceeding to build a release candidate. We track them by assigning a 
specific Fix Version field even before the issue is resolved.

The list of release-blocking issues is available at the version status page. 
Triage each unresolved issue with one of the following resolutions:
 * If the issue has been resolved and JIRA was not updated, resolve it 
accordingly.
 * If the issue has not been resolved and it is acceptable to defer this until 
the next release, update the Fix Version field to the new version you just 
created. Please consider discussing this with stakeholders and the dev@ mailing 
list, as appropriate.
 ** When using "Bulk Change" functionality of Jira
 *** First, add the newly created version to Fix Version for all unresolved tickets that have the old version among their Fix Versions.
 *** Afterwards, remove the old version from the Fix Version.
 * If the issue has not been resolved and it is not acceptable to release until 
it is fixed, the release cannot proceed. Instead, work with the Flink community 
to resolve the issue.

 

h3. Expectations
 * There are no release blocking JIRA issues



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35675) Prepare Flink 1.20 Release

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35675:
--

 Summary: Prepare Flink 1.20 Release
 Key: FLINK-35675
 URL: https://issues.apache.org/jira/browse/FLINK-35675
 Project: Flink
  Issue Type: New Feature
  Components: Release System
Affects Versions: 1.19.0
Reporter: Weijie Guo
Assignee: lincoln lee
 Fix For: 1.19.0


This umbrella issue is meant as a test balloon for moving the [release 
documentation|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release]
 into Jira.
h3. Prerequisites
h4. Environment Variables

Commands in the subtasks might expect some of the following environment 
variables to be set according to the version that is about to be released:
{code:bash}
RELEASE_VERSION="1.5.0"
SHORT_RELEASE_VERSION="1.5"
CURRENT_SNAPSHOT_VERSION="$SHORT_RELEASE_VERSION-SNAPSHOT"
NEXT_SNAPSHOT_VERSION="1.6-SNAPSHOT"
SHORT_NEXT_SNAPSHOT_VERSION="1.6"
{code}
h4. Build Tools

All of the following steps require the use of Maven 3.8.6 and Java 8. Modify your 
PATH environment variable accordingly if needed.
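A minimal sketch for pointing the shell at the required toolchain, assuming hypothetical installation paths:
{code:bash}
# Adjust both paths to wherever JDK 8 and Maven 3.8.6 are installed on your machine
$ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
$ export PATH="${JAVA_HOME}/bin:/opt/apache-maven-3.8.6/bin:${PATH}"
$ java -version && mvn -version
{code}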
h4. Flink Source
 * Create a new directory for this release and clone the Flink repository from GitHub to ensure you have a clean workspace (this step is optional).
 * Run {{mvn -Prelease clean install}} to ensure that the build processes that are specific to that profile are in good shape (this step is optional; a minimal sketch follows this list).
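A minimal sketch of the optional clean-clone sanity build mentioned in the list above (the clone directory name is arbitrary):
{code:bash}
$ git clone https://github.com/apache/flink.git flink-release
$ cd flink-release
$ mvn -Prelease clean install
{code}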

The rest of these instructions assumes that commands are run in the root (or the 
{{./tools}} directory) of a repository on the branch of the release version, 
with the above environment variables set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35673) TestFileSource$TestFileSourceBuilder cannot be cast to class FileSource$FileSourceBuilder

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35673:
--

 Summary: TestFileSource$TestFileSourceBuilder cannot be cast to 
class FileSource$FileSourceBuilder 
 Key: FLINK-35673
 URL: https://issues.apache.org/jira/browse/FLINK-35673
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.20.0
Reporter: Weijie Guo
Assignee: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35672) testPreAggregatedSlidingTimeWindow failed due to checkpoint expired before completing

2024-06-23 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35672:
--

 Summary: testPreAggregatedSlidingTimeWindow failed due to 
checkpoint expired before completing
 Key: FLINK-35672
 URL: https://issues.apache.org/jira/browse/FLINK-35672
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35652) FLIP-462: Support Custom Data Distribution for Input Stream of Lookup Join

2024-06-19 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35652:
--

 Summary: FLIP-462: Support Custom Data Distribution for Input 
Stream of Lookup Join
 Key: FLINK-35652
 URL: https://issues.apache.org/jira/browse/FLINK-35652
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[RESULT][VOTE] FLIP-462: Support Custom Data Distribution for Input Stream of Lookup Join

2024-06-19 Thread weijie guo
Hi devs,


I'm happy to announce that FLIP-462: Support Custom Data Distribution
for Input Stream of Lookup Join [1] has been accepted with 5 approving
votes (3 binding) [2]:


- Xintong Song (binding)

- Lincoln Lee (binding)

- Jingsong Li  (binding)

- Zhanghao Chen (non-binding)

- Feng Jin (non-binding)


There are no disapproving votes. Thanks to everyone who participated
in the discussion and voting.

Best regards,

Weijie


[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-462+Support+Custom+Data+Distribution+for+Input+Stream+of+Lookup+Join

[2] https://lists.apache.org/thread/xoz07zcdnk1kgo4dr4wywmr1s54x8lfh


Re: [ANNOUNCE] Flink 1.20 feature freeze

2024-06-19 Thread weijie guo
+1 for Xintong's point.

Hi Ferenc,

After discussing this with all the other RMs, we decided not to merge the
PR you mentioned into 1.20.
The main reason is that FLIP-464 was voted in three days ago, which is
already after the feature freeze date. It doesn't make sense to us to put it in
the 1.20 release cycle.


Best regards,

Weijie


Xintong Song  于2024年6月19日周三 17:25写道:

> Hi Ferenc,
>
> About the deprecation process, removing a @Public API requires the API to
> be deprecated for at least 2 minor releases AND the removal should be in a
> major version bump. That means you cannot remove a @Public API in 2.x when
> x is not 0. The tricky part is, run-application as a command-line interface
> does not have
> any @Public/@PublicEvolving/@Experimental/@Deprecated annotations (nor
> CliFrontend, the class it is defined in). Command-line interfaces are
> definitely public APIs IMO, but there's no explicit deprecation process for
> them, which also means we never committed that command-line interfaces will
> stay compatible.
>
> My suggestion would be to start a separate discussion thread on whether and
> how to add explicit compatibility guarantees for command-line interfaces.
> Without that, "legally" we can change command-line interfaces anytime,
> though I'd be negative doing so.
>
> As for the PR, I'd be in favor of not merging it for 1.20. Because we have
> passed the feature freeze date for half a week and the PR is still not yet
> reviewed, and the necessity for making it into 1.20 is unclear due to
> absence of explicit process.
>
> Best,
>
> Xintong
>
>
>
> On Wed, Jun 19, 2024 at 1:33 AM Ferenc Csaky 
> wrote:
>
> > Hi Robert, Rui, Ufuk, and Weijie,
> >
> > I would like to raise the PR about merging `flink run` and
> > `flink run-application` functionality [1] to get considered as
> > part of the 1.20 release.
> >
> > The reason IMO is that the `run-application` CLI command should be
> > removed in the same release when Per-Job mode gets removed. AFAIK
> > when we deprecate a public API, it has to stay for 2 minor
> > releases to give time for users to adapt. According to that, if
> > `run-application` is deprecated in Flink 2.0, it can get removed
> > in Flink 2.3. Currently the drop of per-job mode is blocked [2]
> > and probably it will not be resolved for a while, but I could
> > imagine it would be possible in 2.1 or 2.2.
> >
> > The change itself is rather small and concise, and Marton Balassi
> > volunteered to review it ASAP.
> >
> > Pls. correct me if I am wrong about the deprecation process.
> >
> > Looking forward to your opinion!
> >
> > Thanks,
> > Ferenc
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-35625
> > [2] https://issues.apache.org/jira/browse/FLINK-26000
> >
> >
> > On Tuesday, 18 June 2024 at 11:27, weijie guo  >
> > wrote:
> >
> > >
> > >
> > > Hi Zakelly,
> > >
> > > Thank you for informing us!
> > >
> > > After discussion, all RMs agreed that this was an important fix that
> > should
> > > be merged into 1.20.
> > >
> > > So feel free to merge it.
> > >
> > > Best regards,
> > >
> > > Weijie
> > >
> > >
> > > Zakelly Lan zakelly@gmail.com 于2024年6月15日周六 16:29写道:
> > >
> > > > Hi Robert, Rui, Ufuk and Weijie,
> > > >
> > > > Thanks for the update!
> > > >
> > > > FYI: This PR[1] fixes & cleanup the left-over checkpoint directories
> > for
> > > > file-merging on TM exit. And the second commit fixes the wrong state
> > handle
> > > > usage. We encountered several unexpected CI fails, so we missed the
> > feature
> > > > freeze time. It is better to have this PR in 1.20 so I will merge
> this
> > if
> > > > you agree. Thanks.
> > > >
> > > > [1] https://github.com/apache/flink/pull/24933
> > > >
> > > > Best,
> > > > Zakelly
> > > >
> > > > On Sat, Jun 15, 2024 at 6:00 AM weijie guo guoweijieres...@gmail.com
> > > > wrote:
> > > >
> > > > > Hi everyone,
> > > > >
> > > > > The feature freeze of 1.20 has started now. That means that no new
> > > > > features
> > > > >
> > > > > or improvements should now be merged into the master branch unless
> > you
> > > > > ask
> > > > >
> > > > > the release managers first, which h

[jira] [Created] (FLINK-35636) Streaming File Sink s3 end-to-end test did not finish after 900 seconds

2024-06-18 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35636:
--

 Summary: Streaming File Sink s3 end-to-end test did not finish 
after 900 seconds
 Key: FLINK-35636
 URL: https://issues.apache.org/jira/browse/FLINK-35636
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.17.2
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[SUMMARY] Flink 1.20 Release Sync 06/18/2024

2024-06-18 Thread weijie guo
Dear devs,


This is the first meeting after feature freeze of Flink 1.20.


I'd like to share the information synced in the meeting.



- Feature Freeze


 We have announced the feature freeze of 1.20 on June 15.

 Finally, we have 13 completed features. Meanwhile, 3 features were
moved to the next release, but none of them are must-have for 1.20.



- Blockers:


 - FLINK-35629 - Performance regression in stringRead and stringWrite

-
We haven't found the root cause yet, but Zakelly will try JMH 1.19 to
exclude the impact of environment change.


 -
FLINK-35587 - job fails with "The read buffer is null in credit-based
input channel" on TPC-DS 10TB benchmark

-
This is a serious bug that influences our credit-based network protocol.
Junrui has identified the commit
that caused the problem. Weijie
will investigate it with the highest priority.



- Cutting release branch


 We are planning to cut the release branch next Monday (June 24).


 For the convenience of testing, we decided to put out an *rc0*
 release a few days after cutting the branch; please note that it is only for
testing purposes and is not intended to be a final release.


 We'll make separate announcements in the dev mailing list for both of them.



- Release Testing


 We have created an umbrella JIRA[1] for release testing of 1.20.
Please check and complete the documentation and test instructions of your
new feature and mark the related JIRA issue in the 1.20 release wiki page [2]
before we start testing; this would be quite helpful for other developers
validating your features.



- Sync meeting[3]:


 Due to a time conflict for some RMs, we decided to move the start
time of the release sync meeting forward by
*half an hour*.
The next meeting is 06/25/2024 9.30am (UTC+2) and 3.30pm (UTC+8),
please feel free to join us.



Lastly, we encourage attendees to fill out the topics to be discussed at
the bottom of 1.20 wiki page[2] a day in advance, to make it easier for
everyone to understand the background of the topics, thanks!


Best,

Robert, Rui, Ufuk, Weijie



[1] https://issues.apache.org/jira/browse/FLINK-35602

[2] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release

[3] https://meet.google.com/mtj-huez-apu


Re: [ANNOUNCE] Flink 1.20 feature freeze

2024-06-18 Thread weijie guo
Hi Zakelly,

Thank you for informing us!

After discussion, all RMs agreed that this was an important fix that should
be merged into 1.20.

So feel free to merge it.

Best regards,

Weijie


Zakelly Lan  于2024年6月15日周六 16:29写道:

> Hi Robert, Rui, Ufuk and Weijie,
>
> Thanks for the update!
>
> FYI: This PR[1] fixes & cleanup the left-over checkpoint directories for
> file-merging on TM exit. And the second commit fixes the wrong state handle
> usage. We encountered several unexpected CI fails, so we missed the feature
> freeze time. It is better to have this PR in 1.20 so I will merge this if
> you agree. Thanks.
>
>
> [1] https://github.com/apache/flink/pull/24933
>
> Best,
> Zakelly
>
> On Sat, Jun 15, 2024 at 6:00 AM weijie guo 
> wrote:
>
> > Hi everyone,
> >
> >
> > The feature freeze of 1.20 has started now. That means that no new
> features
> >
> > or improvements should now be merged into the master branch unless you
> ask
> >
> > the release managers first, which has already been done for PRs, or
> pending
> >
> > on CI to pass. Bug fixes and documentation PRs can still be merged.
> >
> >
> >
> > - *Cutting release branch*
> >
> >
> > Currently we have no blocker issues(beside tickets that used for
> > release-testing).
> >
> > We are planning to cut the release branch on next Friday (June 21) if
> > no new test instabilities, and we'll make another announcement in the
> > dev mailing list then.
> >
> >
> >
> > - *Cross-team testing*
> >
> >
> > The release testing will start right after we cut the release branch,
> which
> >
> > is expected to come in the next week. As a prerequisite of it, we have
> > created
> >
> > the corresponding instruction ticket in FLINK-35602 [1], please check
> > and complete the
> >
> > documentation and test instruction of your new feature and mark the
> > related JIRA
> >
> > issue in the 1.20 release wiki page [2] before we start testing, which
> >
> > would be quite helpful for other developers to validate your features.
> >
> >
> >
> > Best regards,
> >
> > Robert, Rui, Ufuk and Weijie
> >
> >
> > [1]https://issues.apache.org/jira/browse/FLINK-35602
> >
> > [2] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release
> >
>


Re: [VOTE] FLIP-461: Synchronize rescaling with checkpoint creation to minimize reprocessing for the AdaptiveScheduler

2024-06-17 Thread weijie guo
+1(binding)

Best regards,

Weijie


Gyula Fóra  于2024年6月17日周一 18:12写道:

> +1 (binding)
>
> Gyula
>
> On Mon, Jun 17, 2024 at 11:29 AM Zakelly Lan 
> wrote:
>
> > +1 (binding)
> >
> >
> > Best,
> > Zakelly
> >
> > On Mon, Jun 17, 2024 at 5:10 PM Rui Fan <1996fan...@gmail.com> wrote:
> >
> > > +1 (binding)
> > >
> > > Best,
> > > Rui
> > >
> > > On Mon, Jun 17, 2024 at 4:58 PM David Morávek  >
> > > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > On Mon, Jun 17, 2024 at 10:24 AM Matthias Pohl 
> > > wrote:
> > > >
> > > > > Hi everyone,
> > > > > the discussion in [1] about FLIP-461 [2] is kind of concluded. I am
> > > > > starting a vote on this one here.
> > > > >
> > > > > The vote will be open for at least 72 hours (i.e. until June 20,
> > 2024;
> > > > > 8:30am UTC) unless there are any objections. The FLIP will be
> > > considered
> > > > > accepted if 3 binding votes (from active committers according to
> the
> > > > Flink
> > > > > bylaws [3]) are gathered by the community.
> > > > >
> > > > > Best,
> > > > > Matthias
> > > > >
> > > > > [1]
> https://lists.apache.org/thread/nnkonmsv8xlk0go2sgtwnphkhrr5oc3y
> > > > > [2]
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-461%3A+Synchronize+rescaling+with+checkpoint+creation+to+minimize+reprocessing+for+the+AdaptiveScheduler
> > > > > [3]
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/Flink+Bylaws#FlinkBylaws-Approvals
> > > > >
> > > >
> > >
> >
>


[VOTE] FLIP-462: Support Custom Data Distribution for Input Stream of Lookup Join

2024-06-16 Thread weijie guo
Hi everyone,


Thanks for all the feedback about the FLIP-462: Support Custom Data
Distribution for Input Stream of Lookup Join [1]. The discussion
thread is here [2].


The vote will be open for at least 72 hours unless there is an
objection or insufficient votes.


Best,

Weijie



[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-462+Support+Custom+Data+Distribution+for+Input+Stream+of+Lookup+Join


[2] https://lists.apache.org/thread/kds2zrcdmykrz5lmn0hf9m4phdl60nfb


[ANNOUNCE] Flink 1.20 feature freeze

2024-06-14 Thread weijie guo
Hi everyone,


The feature freeze of 1.20 has started now. That means that no new features
or improvements should be merged into the master branch unless you ask the
release managers first, the PR has already been approved, or it is only
pending CI to pass. Bug fixes and documentation PRs can still be merged.



- *Cutting release branch*


Currently we have no blocker issues (besides tickets that are used for
release testing).

We are planning to cut the release branch next Friday (June 21) if there are
no new test instabilities, and we'll make another announcement in the
dev mailing list then.



- *Cross-team testing*


The release testing will start right after we cut the release branch, which
is expected to happen in the next week. As a prerequisite, we have created
the corresponding instruction ticket in FLINK-35602 [1]. Please check and
complete the documentation and test instructions for your new feature and
mark the related JIRA issue in the 1.20 release wiki page [2] before we
start testing, which would be quite helpful for other developers to
validate your features.



Best regards,

Robert, Rui, Ufuk and Weijie


[1]https://issues.apache.org/jira/browse/FLINK-35602

[2] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release


[jira] [Created] (FLINK-35621) Release Testing Instructions: Verify FLIP-436: Introduce Catalog-related Syntax

2024-06-14 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35621:
--

 Summary: Release Testing Instructions: Verify FLIP-436: Introduce 
Catalog-related Syntax
 Key: FLINK-35621
 URL: https://issues.apache.org/jira/browse/FLINK-35621
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Common
Reporter: Weijie Guo
Assignee: Ahmed Hamdy
 Fix For: 1.20.0


Follow up the test for https://issues.apache.org/jira/browse/FLINK-35435



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35602) [Umbrella] Test Flink Release 1.20

2024-06-14 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35602:
--

 Summary: [Umbrella] Test Flink Release 1.20
 Key: FLINK-35602
 URL: https://issues.apache.org/jira/browse/FLINK-35602
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Re: Re: [DISCUSS] FLIP-462: Support Custom Data Distribution for Input Stream of Lookup Join

2024-06-14 Thread weijie guo
Hi all,

Thanks for all the feedback and suggestions so far.

If there are no further comments, we will open the voting thread next Monday.

Best regards,

Weijie


weijie guo  于2024年6月14日周五 15:49写道:

> Thanks Lincoln for the quick response.
>
> > Since we've decided to extend a new hint option 'shuffle' to the current
> `LOOKUP` join hint, do we support hash shuffle as well?(It seems like it
> shouldn't require a lot of extra work, right?) This will deliver a
> complete new feature to users,  also because
> FLIP-204 is stale for now and this new extension will give user a more
> simpler way to achieve the goal, WDYT?
>
> Yes, I think this makes more sense.
>
> In a word: If the target dim table implements SupportsLookupCustomShuffle,
> the planner will try its best to apply the custom partitioning for the input
> stream. Otherwise, the planner will try its best to apply hash partitioning.
>
> As for FLIP-204, I think we can discuss whether it should be discarded or
> refactored in a separate thread. TBH, I think the current approach can
> completely cover it and be much easier to use.
> > "upsert mode" should be "updating stream" or "non-insert-only stream".
>
> Thanks, updated the FLIP.
>
>
>
> Best regards,
>
> Weijie
>
>
> Lincoln Lee  于2024年6月13日周四 23:08写道:
>
>> Thanks Weijie & Wencong for your update including the conclusions of
>> the offline discussion.
>>
>> There's one thing need to be confirmed in the FLIP:
>> > The hint only provides a suggestion to the optimizer, it is not an
>> enforcer. As a result, If the target dim table not implements
>> SupportsLookupCustomShuffle, planner will ignore this newly introduced
>> shuffle option.
>>
>> Since we've decided to extend a new hint option 'shuffle' to the current
>> `LOOKUP` join hint, do we support hash shuffle as well?(It seems like it
>> shouldn't require a lot of extra work, right?)
>> This will deliver a complete new feature to users,  also because
>> FLIP-204 is stale for now and this new extension will give user a more
>> simpler way to achieve the goal, WDYT?
>>
>> Another small comment for the new interface:
>> > "... planner may not apply this partitioner in upsert mode ..."
>> > default boolean isDeterministic()
>> "upsert mode" should be "updating stream" or "non-insert-only stream".
>>
>>
>> Best,
>> Lincoln Lee
>>
>>
>> Wencong Liu  于2024年6月12日周三 21:43写道:
>>
>> > Hi Jingsong,
>> >
>> >
>> > Some of the points you mentioned are currently clarified in
>> > the updated FLIP. Please check it out.
>> >
>> >
>> > 1. Enabling custom data distribution can be done through the
>> > LOOKUP SQL Hint. There are detailed examples provided in the FLIP.
>> >
>> >
>> > 2. We will add the isDeterministic method to the `InputDataPartitioner`
>> > interface, which will return true by default. If the
>> > `InputDataPartitioner`
>> > is not deterministic, the connector developer need to override the
>> > isDeterministic method to return false. If the connector developer
>> > cannot ensure this protocol, they will need to bear the correctness
>> > issues that arise.
>> >
>> >
>> > 3. Yes, this feature will work in batch mode as well.
>> >
>> >
>> > Best regards,
>> > Wencong
>> >
>> >
>> >
>> >
>> >
>> > At 2024-06-11 23:47:40, "Jingsong Li"  wrote:
>> > >Hi all,
>> > >
>> > >+1 to this FLIP, very thanks all for your proposal.
>> > >
>> > >isDeterministic looks good to me too.
>> > >
>> > >We can consider stating the following points:
>> > >
>> > >1. How to enable custom data distribution? Is it a dynamic hint? Can
>> > >you provide an SQL example.
>> > >
>> > >2. What impact will it have when the mainstream is changelog? Causing
>> > >disorder? This may need to be emphasized.
>> > >
>> > >3. Does this feature work in batch mode too?
>> > >
>> > >Best,
>> > >Jingsong
>> > >
>> > >On Tue, Jun 11, 2024 at 8:22 PM Wencong Liu 
>> wrote:
>> > >>
>> > >> Hi Lincoln,
>> > >>
>> > >>
>> > >> Thanks for your reply. Weijie and I discussed these two issues
>> offline,
>> > >> and here are the results 

Re: Re: Re: [DISCUSS] FLIP-462: Support Custom Data Distribution for Input Stream of Lookup Join

2024-06-14 Thread weijie guo
> > >> after UPDATE_BEFORE/DELETE events (-D, -U), thus breaking the current
> > >> limitations of the Flink Sink Operator[2]. If `isDeterministic`
> returns
> > false and the
> > >> changelog event type is not insert-only, the Planner should not apply
> > the shuffle
> > >> provided by `SupportsLookupCustomShuffle`.
> > >>
> > >>
> > >> [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-204%3A+Introduce+Hash+Lookup+Join
> > >> [2]
> >
> https://www.ververica.com/blog/flink-sql-secrets-mastering-the-art-of-changelog-event-out-of-orderness
> > >>
> > >>
> > >> Best,
> > >> Wencong
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> At 2024-06-11 00:02:57, "Lincoln Lee"  wrote:
> > >> >Hi Weijie,
> > >> >
> > >> >Thanks for your proposal, this will be a useful advanced optimization
> > for
> > >> >connector developers!
> > >> >
> > >> >I have two questions:
> > >> >
> > >> >1. FLIP-204[1] hash lookup join hint is mentioned in this FLIP,
> what's
> > the
> > >> >apply ordering of the two feature? For example, a connector that
> > >> >implements the `SupportsLookupCustomShuffle` interface also has a
> > >> >`SHUFFLE_HASH` lookup join hint specified by the user in sql, what's
> > >> >the expected behavior?
> > >> >
> > >> >2. This FLIP considers the relationship with NDU processing, and I
> > agree
> > >> >with the current choice to prioritize NDU first. However, we should
> > also
> > >> >consider another issue: out-of-orderness of the changelog events in
> > >> >streaming[2]. If the connector developer supplies a non-deterministic
> > >> >partitioner, e.g., a random partitioner for anti-skew purpose, then
> > it'll
> > >> >break the assumption relied by current SQL operators in streaming:
> the
> > >> >ADD/UDPATE_AFTER events (+I, +U) always occur before its related
> > >> >UDPATE_BEFORE/DELETE events (-D, -U) and they are always
> > >> >processed by the same task even if a data shuffle is involved. So a
> > >> >straightforward approach would be to add method `isDeterministic` to
> > >> >the `InputDataPartitioner` interface to explicitly tell the planner
> > whether
> > >> >the partitioner is deterministic or not(then the planner can reject
> the
> > >> >non-deterministic custom partitioner for correctness requirements).
> > >> >
> > >> >[1]
> > >> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-204%3A+Introduce+Hash+Lookup+Join
> > >> >[2]
> > >> >
> >
> https://www.ververica.com/blog/flink-sql-secrets-mastering-the-art-of-changelog-event-out-of-orderness
> > >> >
> > >> >
> > >> >Best,
> > >> >Lincoln Lee
> > >> >
> > >> >
> > >> >Xintong Song  于2024年6月7日周五 13:53写道:
> > >> >
> > >> >> +1 for this proposal.
> > >> >>
> > >> >> This FLIP will make it possible for each lookup join parallel task
> > to only
> > >> >> access and cache a subset of the data. This will significantly
> > improve the
> > >> >> performance and reduce the overhead when using Paimon for the
> > dimension
> > >> >> table. And it's general enough to also be leveraged by other
> > connectors.
> > >> >>
> > >> >> Best,
> > >> >>
> > >> >> Xintong
> > >> >>
> > >> >>
> > >> >>
> > >> >> On Fri, Jun 7, 2024 at 10:01 AM weijie guo <
> > guoweijieres...@gmail.com>
> > >> >> wrote:
> > >> >>
> > >> >> > Hi devs,
> > >> >> >
> > >> >> >
> > >> >> > I'd like to start a discussion about FLIP-462[1]: Support Custom
> > Data
> > >> >> > Distribution for Input Stream of Lookup Join.
> > >> >> >
> > >> >> >
> > >> >> > Lookup Join is an important feature in Flink, It is typically
> used
> > to
> > >> >> > enrich a table with data that is queried from an external system.
> > >> >> > If we interact with the external systems for each incoming
> record,
> > we
> > >> >> > incur significant network IO and RPC overhead.
> > >> >> >
> > >> >> > Therefore, most connectors introduce caching to reduce the
> > per-record
> > >> >> > level query overhead. However, because the data distribution of
> > Lookup
> > >> >> > Join's input stream is arbitrary, the cache hit rate is sometimes
> > >> >> > unsatisfactory.
> > >> >> >
> > >> >> >
> > >> >> > We want to introduce a mechanism for the connector to tell the
> > Flink
> > >> >> > planner its desired input stream data distribution or
> partitioning
> > >> >> > strategy. This can significantly reduce the amount of cached data
> > and
> > >> >> > improve performance of Lookup Join.
> > >> >> >
> > >> >> >
> > >> >> > You can find more details in this FLIP[1]. Looking forward to
> > hearing
> > >> >> > from you, thanks!
> > >> >> >
> > >> >> >
> > >> >> > Best regards,
> > >> >> >
> > >> >> > Weijie
> > >> >> >
> > >> >> >
> > >> >> > [1]
> > >> >> >
> > >> >> >
> > >> >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-462+Support+Custom+Data+Distribution+for+Input+Stream+of+Lookup+Join
> > >> >> >
> > >> >>
> >
>


[jira] [Created] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-14 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35601:
--

 Summary: InitOutputPathTest.testErrorOccursUnSynchronized failed 
due to NoSuchFieldException
 Key: FLINK-35601
 URL: https://issues.apache.org/jira/browse/FLINK-35601
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35591) Azure Pipelines not running for master since c9def981

2024-06-13 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35591:
--

 Summary: Azure Pipelines not running for master since c9def981
 Key: FLINK-35591
 URL: https://issues.apache.org/jira/browse/FLINK-35591
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-464: Merge "flink run" and "flink run-application"

2024-06-12 Thread weijie guo
Thanks for driving this!

+1(binding)

Best regards,

Weijie


Xintong Song  于2024年6月13日周四 09:04写道:

> +1(binding)
>
> Best,
>
> Xintong
>
>
>
> On Thu, Jun 13, 2024 at 5:15 AM Márton Balassi 
> wrote:
>
> > +1 (binding)
> >
> > On Wed, Jun 12, 2024 at 7:25 PM Őrhidi Mátyás 
> > wrote:
> >
> > > Sounds reasonable,
> > > +1
> > >
> > > Cheers,
> > > Matyas
> > >
> > >
> > > On Wed, Jun 12, 2024 at 8:54 AM Mate Czagany 
> wrote:
> > >
> > > > Hi,
> > > >
> > > > Thank you for driving this Ferenc,
> > > > +1 (non-binding)
> > > >
> > > > Regards,
> > > > Mate
> > > >
> > > > Ferenc Csaky  ezt írta (időpont: 2024.
> > jún.
> > > > 12., Sze, 17:23):
> > > >
> > > > > Hello devs,
> > > > >
> > > > > I would like to start a vote about FLIP-464 [1]. The FLIP is about
> to
> > > > > merge back the
> > > > > "flink run-application" functionality to "flink run", so the latter
> > > will
> > > > > be capable to deploy jobs in
> > > > > all deployment modes. More details in the FLIP. Discussion thread
> > [2].
> > > > >
> > > > > The vote will be open for at least 72 hours (until 2024 March 23
> > 14:03
> > > > > UTC) unless there
> > > > > are any objections or insufficient votes.
> > > > >
> > > > > Thanks,Ferenc
> > > > >
> > > > > [1]
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=311626179
> > > > > [2]
> https://lists.apache.org/thread/xh58xs0y58kqjmfvd4yor79rv6dlcg5g
> > > >
> > >
> >
>


Re: [DISCUSS] FLIP-462: Support Custom Data Distribution for Input Stream of Lookup Join

2024-06-12 Thread weijie guo
Hi Zhanghao,

Thanks for the reply!

> Could you give a more concrete example in production on when a custom
partitioning strategy will outperform partitioning by key

The key point here is that the partitioning logic cannot be fully expressed
with all or part of the join key. That is, even if we know which fields are
used to calculate buckets, we still have to face the following problems (a
short hypothetical sketch follows the list):

1. The mapping from the bucket fields to the bucket id is not necessarily
done via hashcode, and even if it is, the hash algorithm may be different
from the one used in Flink. The planner can't know how to do this mapping.
2. In order to get the bucket id, we have to take the key modulo the bucket
number, but the planner has no notion of the bucket number.
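
To make the two points above concrete, here is a minimal, purely hypothetical
sketch of a connector-supplied partitioner that mirrors an external system's
bucketing. The class and method names are illustrative assumptions, not the
FLIP's actual API; the point is only that the bucket hash function and the
bucket count live inside the connector, where the planner cannot see them.

import java.io.Serializable;

import org.apache.flink.table.data.RowData;

// Hypothetical example only: a connector-supplied partitioner that mirrors
// the external table's bucketing (its own hash function + modulo bucket count).
public class BucketAwarePartitioner implements Serializable {

    // Defined by the external table; the planner has no notion of it (point 2).
    private final int numBuckets;

    public BucketAwarePartitioner(int numBuckets) {
        this.numBuckets = numBuckets;
    }

    // Maps the bucket-key fields of an input row to a downstream subtask.
    public int partition(RowData bucketKeys, int numSubtasks) {
        // Point 1: the external system's hash may differ from Flink's hashing,
        // so the planner cannot reproduce it on its own.
        int hash = externalSystemHash(bucketKeys);
        int bucketId = Math.floorMod(hash, numBuckets);
        return bucketId % numSubtasks;
    }

    private int externalSystemHash(RowData bucketKeys) {
        // Placeholder for the external system's own bucket hash function.
        return bucketKeys.hashCode();
    }
}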



Best regards,

Weijie


Zhanghao Chen  于2024年6月12日周三 13:55写道:

> Thanks for driving this, Weijie. Usually, the data distribution of the
> external system is closely related to the keys, e.g. computing the bucket
> index by key hashcode % bucket num, so I'm not sure about how much
> difference there are between partitioning by key and a custom partitioning
> strategy. Could you give a more concrete example in production on when a
> custom partitioning strategy will outperform partitioning by key? Since
> you've mentioned Paimon in doc, maybe an example on Paimon.
>
> Best,
> Zhanghao Chen
> ________
> From: weijie guo 
> Sent: Friday, June 7, 2024 9:59
> To: dev 
> Subject: [DISCUSS] FLIP-462: Support Custom Data Distribution for Input
> Stream of Lookup Join
>
> Hi devs,
>
>
> I'd like to start a discussion about FLIP-462[1]: Support Custom Data
> Distribution for Input Stream of Lookup Join.
>
>
> Lookup Join is an important feature in Flink, It is typically used to
> enrich a table with data that is queried from an external system.
> If we interact with the external systems for each incoming record, we
> incur significant network IO and RPC overhead.
>
> Therefore, most connectors introduce caching to reduce the per-record
> level query overhead. However, because the data distribution of Lookup
> Join's input stream is arbitrary, the cache hit rate is sometimes
> unsatisfactory.
>
>
> We want to introduce a mechanism for the connector to tell the Flink
> planner its desired input stream data distribution or partitioning
> strategy. This can significantly reduce the amount of cached data and
> improve performance of Lookup Join.
>
>
> You can find more details in this FLIP[1]. Looking forward to hearing
> from you, thanks!
>
>
> Best regards,
>
> Weijie
>
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-462+Support+Custom+Data+Distribution+for+Input+Stream+of+Lookup+Join
>


Re: [June 15 Feature Freeze][SUMMARY] Flink 1.20 Release Sync 11/06/2024

2024-06-11 Thread weijie guo
Thanks Zhanghao for the feedback.

Please feel free to change the state of this one to `won't make it`.


Best regards,

Weijie


Zhanghao Chen  于2024年6月12日周三 13:18写道:

> Hi Rui,
>
> Thanks for the summary! A quick update here: FLIP-398 was decided not to
> go into 1.20, as it was just found that the effort to add dedicated
> serialization support for Maps, Sets and Lists, will break
> state-compatibility. I will revert the relevant changes soon.
>
> Best,
> Zhanghao Chen
> 
> From: Rui Fan <1996fan...@gmail.com>
> Sent: Wednesday, June 12, 2024 12:59
> To: dev 
> Subject: [June 15 Feature Freeze][SUMMARY] Flink 1.20 Release Sync
> 11/06/2024
>
> Dear devs,
>
> This is the sixth meeting for Flink 1.20 release[1] cycle.
>
> I'd like to share the information synced in the meeting.
>
> - Feature Freeze
>
> It is worth noting that there are only 3 days left until the
> feature freeze time(June 15, 2024, 00:00 CEST(UTC+2)),
> and developers need to pay attention to the feature freeze time.
>
> After checked with all contributors of 1.20 FLIPs, we don't need
> to postpone the feature freeze time. Please reply to this email
> if other features are valuable and it's better to be merged in 1.20,
> thanks.
>
> - Features:
>
> So far we've had 16 flips/features:
> - 6 flips/features are done
> - 8 flips/features are doing and release managers checked with
> corresponding contributors
>   - 7 of these flips/features can be completed before June 15, 2024, 00:00
> CEST(UTC+2)
>   - We were unable to contact the contributor of FLIP-436
> - 2 flips/features won't make in 1.20
>
> - Blockers:
>
> We don't have any blocker right now, thanks to everyone who fixed blockers
> before.
>
> - Sync meeting[2]:
>
> The next meeting is 18/06/2024 10am (UTC+2) and 4pm (UTC+8), please
> feel free to join us.
>
> Lastly, we encourage attendees to fill out the topics to be discussed at
> the bottom of 1.20 wiki page[1] a day in advance, to make it easier for
> everyone to understand the background of the topics, thanks!
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release
> [2] https://meet.google.com/mtj-huez-apu
>
> Best,
> Robert, Weijie, Ufuk and Rui
>


Re: [June 15 Feature Freeze][SUMMARY] Flink 1.20 Release Sync 11/06/2024

2024-06-11 Thread weijie guo
Thanks Rui for the summary!

Best regards,

Weijie


Rui Fan <1996fan...@gmail.com> 于2024年6月12日周三 13:00写道:

> Dear devs,
>
> This is the sixth meeting for Flink 1.20 release[1] cycle.
>
> I'd like to share the information synced in the meeting.
>
> - Feature Freeze
>
> It is worth noting that there are only 3 days left until the
> feature freeze time(June 15, 2024, 00:00 CEST(UTC+2)),
> and developers need to pay attention to the feature freeze time.
>
> After checked with all contributors of 1.20 FLIPs, we don't need
> to postpone the feature freeze time. Please reply to this email
> if other features are valuable and it's better to be merged in 1.20,
> thanks.
>
> - Features:
>
> So far we've had 16 flips/features:
> - 6 flips/features are done
> - 8 flips/features are doing and release managers checked with
> corresponding contributors
>   - 7 of these flips/features can be completed before June 15, 2024, 00:00
> CEST(UTC+2)
>   - We were unable to contact the contributor of FLIP-436
> - 2 flips/features won't make in 1.20
>
> - Blockers:
>
> We don't have any blocker right now, thanks to everyone who fixed blockers
> before.
>
> - Sync meeting[2]:
>
> The next meeting is 18/06/2024 10am (UTC+2) and 4pm (UTC+8), please
> feel free to join us.
>
> Lastly, we encourage attendees to fill out the topics to be discussed at
> the bottom of 1.20 wiki page[1] a day in advance, to make it easier for
> everyone to understand the background of the topics, thanks!
>
> [1] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release
> [2] https://meet.google.com/mtj-huez-apu
>
> Best,
> Robert, Weijie, Ufuk and Rui
>


[jira] [Created] (FLINK-35563) 'Run kubernetes application test' failed on AZP

2024-06-10 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35563:
--

 Summary: 'Run kubernetes application test' failed on AZP
 Key: FLINK-35563
 URL: https://issues.apache.org/jira/browse/FLINK-35563
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35562) WindowTableFunctionProcTimeRestoreTest produced no output for 900 seconds

2024-06-10 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35562:
--

 Summary: WindowTableFunctionProcTimeRestoreTest produced no output 
for 900 seconds
 Key: FLINK-35562
 URL: https://issues.apache.org/jira/browse/FLINK-35562
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release 1.19.1, release candidate #1

2024-06-08 Thread weijie guo
Thanks Hong!

+1(binding)

- Verified gpg signature
- Verified sha512 hash
- Checked gh release tag
- Checked all artifacts deployed to maven repo
- Ran a simple wordcount job on local standalone cluster
- Compiled from source code with JDK 1.8.0_291.

Best regards,

Weijie


Xiqian YU  于2024年6月7日周五 18:23写道:

> +1 (non-binding)
>
>
>   *   Checked download links & release tags
>   *   Verified that package checksums matched
>   *   Compiled Flink from source code with JDK 8 / 11
>   *   Ran E2e data integration test jobs on local cluster
>
> Regards,
> yux
>
> De : Rui Fan <1996fan...@gmail.com>
> Date : vendredi, 7 juin 2024 à 17:14
> À : dev@flink.apache.org 
> Objet : Re: [VOTE] Release 1.19.1, release candidate #1
> +1(binding)
>
> - Reviewed the flink-web PR (Left some comments)
> - Checked Github release tag
> - Verified signatures
> - Verified sha512 (hashsums)
> - The source archives do not contain any binaries
> - Build the source with Maven 3 and java8 (Checked the license as well)
> - Start the cluster locally with jdk8, and run the StateMachineExample job,
> it works fine.
>
> Best,
> Rui
>
> On Thu, Jun 6, 2024 at 11:39 PM Hong Liang  wrote:
>
> > Hi everyone,
> > Please review and vote on the release candidate #1 for the flink v1.19.1,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases to
> be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint B78A5EA1 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.19.1-rc1" [5],
> > * website pull request listing the new release and adding announcement
> blog
> > post [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Hong
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354399
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.19.1-rc1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1736/
> > [5] https://github.com/apache/flink/releases/tag/release-1.19.1-rc1
> > [6] https://github.com/apache/flink-web/pull/745
> >
>


Re: [VOTE] FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn

2024-06-07 Thread weijie guo
+1 (binding)

Best regards,

Weijie


Zhu Zhu  于2024年6月7日周五 16:48写道:

> +1 (binding)
>
> Thanks,
> Zhu
>
> Xintong Song  于2024年6月7日周五 16:08写道:
>
> > +1 (binding)
> >
> > Best,
> >
> > Xintong
> >
> >
> >
> > On Fri, Jun 7, 2024 at 4:03 PM Yuxin Tan  wrote:
> >
> > > Hi everyone,
> > >
> > > Thanks for all the feedback about the FLIP-459 Support Flink
> > > hybrid shuffle integration with Apache Celeborn[1].
> > > The discussion thread is here [2].
> > >
> > > I'd like to start a vote for it. The vote will be open for at least
> > > 72 hours unless there is an objection or insufficient votes.
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-459%3A+Support+Flink+hybrid+shuffle+integration+with+Apache+Celeborn
> > > [2] https://lists.apache.org/thread/gy7sm7qrf7yrv1rl5f4vtk5fo463ts33
> > >
> > > Best,
> > > Yuxin
> > >
> >
>


Re: [VOTE] Release flink-connector-cassandra v3.2.0, release candidate #1

2024-06-06 Thread weijie guo
Thanks Danny!

+1(binding)

- Verified signatures and hash sum
- Checked the CI build from tag
- Build from source
- Reviewed flink-web PR

Best regards,

Weijie


Rui Fan <1996fan...@gmail.com> 于2024年6月7日周五 11:01写道:

> Thanks Danny for the hard work!
>
> +1(binding)
>
> - Verified signatures
> - Verified sha512 (hashsums)
> - The source archives do not contain any binaries
> - Build the source with Maven 3 and java8 (Checked the license as well)
> - Checked Github release tag
> - Reviewed the flink-web PR
>
> Best,
> Rui
>
> On Wed, May 22, 2024 at 8:01 PM Leonard Xu  wrote:
>
> > +1 (binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code with java 1.8 succeeded
> > - checked Github release tag
> > - checked release notes status which only left one issue is used for
> > release tracking
> > - reviewed the web PR
> >
> > Best,
> > Leonard
> >
> > > 2024年5月22日 下午6:10,weijie guo  写道:
> > >
> > > +1(non-binding)
> > >
> > > -Validated checksum hash
> > > -Verified signature
> > > -Build from source
> > >
> > > Best regards,
> > >
> > > Weijie
> > >
> > >
> > > Hang Ruan  于2024年5月22日周三 10:12写道:
> > >
> > >> +1 (non-binding)
> > >>
> > >> - Validated checksum hash
> > >> - Verified signature
> > >> - Verified that no binaries exist in the source archive
> > >> - Build the source with Maven and jdk8
> > >> - Verified web PR
> > >> - Check that the jar is built by jdk8
> > >>
> > >> Best,
> > >> Hang
> > >>
> > >> Muhammet Orazov  于2024年5月22日周三
> 04:15写道:
> > >>
> > >>> Hey all,
> > >>>
> > >>> Could we please get some more votes to proceed with the release?
> > >>>
> > >>> Thanks and best,
> > >>> Muhammet
> > >>>
> > >>> On 2024-04-22 13:04, Danny Cranmer wrote:
> > >>>> Hi everyone,
> > >>>>
> > >>>> Please review and vote on release candidate #1 for
> > >>>> flink-connector-cassandra v3.2.0, as follows:
> > >>>> [ ] +1, Approve the release
> > >>>> [ ] -1, Do not approve the release (please provide specific
> comments)
> > >>>>
> > >>>> This release supports Flink 1.18 and 1.19.
> > >>>>
> > >>>> The complete staging area is available for your review, which
> > includes:
> > >>>> * JIRA release notes [1],
> > >>>> * the official Apache source release to be deployed to
> > dist.apache.org
> > >>>> [2],
> > >>>> which are signed with the key with fingerprint 125FD8DB [3],
> > >>>> * all artifacts to be deployed to the Maven Central Repository [4],
> > >>>> * source code tag v3.2.0-rc1 [5],
> > >>>> * website pull request listing the new release [6].
> > >>>> * CI build of the tag [7].
> > >>>>
> > >>>> The vote will be open for at least 72 hours. It is adopted by
> majority
> > >>>> approval, with at least 3 PMC affirmative votes.
> > >>>>
> > >>>> Thanks,
> > >>>> Danny
> > >>>>
> > >>>> [1]
> > >>>>
> > >>>
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353148
> > >>>> [2]
> > >>>>
> > >>>
> > >>
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-cassandra-3.2.0-rc1
> > >>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > >>>> [4]
> > >>>>
> > https://repository.apache.org/content/repositories/orgapacheflink-1722
> > >>>> [5]
> > >>>>
> > >>>
> > >>
> >
> https://github.com/apache/flink-connector-cassandra/releases/tag/v3.2.0-rc1
> > >>>> [6] https://github.com/apache/flink-web/pull/737
> > >>>> [7]
> > >>>>
> > >>>
> > >>
> >
> https://github.com/apache/flink-connector-cassandra/actions/runs/8784310241
> > >>>
> > >>
> >
> >
>


Re: [VOTE] Release flink-connector-jdbc v3.2.0, release candidate #2

2024-06-06 Thread weijie guo
Thanks Danny!

+1(binding)
- Verified signatures and hash sums
- Checked the CI build
- Checked the release note
- Reviewed the flink-web PR
- Build from source.

Best regards,

Weijie


Rui Fan <1996fan...@gmail.com> 于2024年6月7日周五 11:08写道:

> Thanks Danny for the hard work!
>
> +1(binding)
>
> - Verified signatures
> - Verified sha512 (hashsums)
> - The source archives do not contain any binaries
> - Build the source with Maven 3 and java8 (Checked the license as well)
> - Checked Github release tag
> - Reviewed the flink-web PR
>
> Best,
> Rui
>
> On Tue, Jun 4, 2024 at 1:31 PM gongzhongqiang 
> wrote:
>
> > +1 (non-binding)
> >
> > - Validated checksum hash and signature.
> > - Confirmed that no binaries exist in the source archive.
> > - Built the source with JDK 8.
> > - Verified the web PR.
> > - Ensured the JAR is built by JDK 8.
> >
> > Best,
> > Zhongqiang Gong
> >
> > Danny Cranmer  于2024年4月18日周四 18:20写道:
> >
> > > Hi everyone,
> > >
> > > Please review and vote on the release candidate #1 for the version
> 3.2.0,
> > > as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > > This release supports Flink 1.18 and 1.19.
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to dist.apache.org
> > > [2],
> > > which are signed with the key with fingerprint 125FD8DB [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v3.2.0-rc1 [5],
> > > * website pull request listing the new release [6].
> > > * CI run of tag [7].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Danny
> > >
> > > [1]
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353143
> > > [2]
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.2.0-rc2
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1718/
> > > [5]
> > https://github.com/apache/flink-connector-jdbc/releases/tag/v3.2.0-rc2
> > > [6] https://github.com/apache/flink-web/pull/734
> > > [7]
> > https://github.com/apache/flink-connector-jdbc/actions/runs/8736019099
> > >
> >
>


Re: [VOTE] Release flink-connector-aws v4.3.0, release candidate #2

2024-06-06 Thread weijie guo
Thanks Danny!

+1(binding)

- Verified signatures and hashsums
- Build from source
- Checked release tag
- Reviewed the flink-web PR
- Checked the CI build.

Best regards,

Weijie


Rui Fan <1996fan...@gmail.com> 于2024年6月7日周五 11:00写道:

> Thanks Danny for the hard work!
>
> +1(binding)
>
> - Verified signatures
> - Verified sha512 (hashsums)
> - The source archives do not contain any binaries
> - Build the source with Maven 3 and java8 (Checked the license as well)
> - Checked Github release tag
> - Reviewed the flink-web PR
>
> Best,
> Rui
>
> On Fri, May 31, 2024 at 11:47 AM gongzhongqiang  >
> wrote:
>
> > +1 (non-binding)
> >
> > - Validated the checksum hash and signature.
> > - No binaries exist in the source archive.
> > - Built the source with JDK 8 succeed.
> > - Verified the flink-web PR.
> > - Ensured the JAR is built by JDK 8.
> >
> > Best,
> > Zhongqiang Gong
> >
> > Danny Cranmer  于2024年4月19日周五 18:08写道:
> >
> > > Hi everyone,
> > >
> > > Please review and vote on release candidate #2 for flink-connector-aws
> > > v4.3.0, as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > > This version supports Flink 1.18 and 1.19.
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to dist.apache.org
> > > [2],
> > > which are signed with the key with fingerprint 125FD8DB [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v4.3.0-rc2 [5],
> > > * website pull request listing the new release [6].
> > > * CI build of the tag [7].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Release Manager
> > >
> > > [1]
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353793
> > > [2]
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-aws-4.3.0-rc2
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1721/
> > > [5]
> > https://github.com/apache/flink-connector-aws/releases/tag/v4.3.0-rc2
> > > [6] https://github.com/apache/flink-web/pull/733
> > > [7]
> > https://github.com/apache/flink-connector-aws/actions/runs/8751694197
> > >
> >
>


[DISCUSS] FLIP-462: Support Custom Data Distribution for Input Stream of Lookup Join

2024-06-06 Thread weijie guo
Hi devs,


I'd like to start a discussion about FLIP-462[1]: Support Custom Data
Distribution for Input Stream of Lookup Join.


Lookup Join is an important feature in Flink. It is typically used to
enrich a table with data that is queried from an external system.
If we interact with the external system for each incoming record, we
incur significant network IO and RPC overhead.

Therefore, most connectors introduce caching to reduce the per-record
level query overhead. However, because the data distribution of the Lookup
Join's input stream is arbitrary, the cache hit rate is sometimes
unsatisfactory.


We want to introduce a mechanism for the connector to tell the Flink
planner its desired input stream data distribution or partitioning
strategy. This can significantly reduce the amount of cached data and
improve performance of Lookup Join.
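
To make the idea a bit more tangible before you open the FLIP, the sketch
below is a simplified illustration of such a mechanism. The method names and
signatures are assumptions for explanation purposes only; the actual
interfaces are the ones defined in FLIP-462.

import java.io.Serializable;

import org.apache.flink.table.data.RowData;

// Simplified illustration (assumed names and signatures, not the FLIP's
// final API): the lookup table tells the planner how its input stream should
// be partitioned, so each subtask only caches the subset of rows it looks up.
public interface SupportsLookupCustomShuffle {

    // The connector returns the partitioner it would like the planner to
    // apply to the input stream of the lookup join.
    InputDataPartitioner getPartitioner();

    interface InputDataPartitioner extends Serializable {

        // Maps the join-key fields of an input record to a target partition.
        int partition(RowData joinKeys, int numPartitions);
    }
}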


You can find more details in this FLIP[1]. Looking forward to hearing
from you, thanks!


Best regards,

Weijie


[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-462+Support+Custom+Data+Distribution+for+Input+Stream+of+Lookup+Join


Re: [ANNOUNCE] New Apache Flink PMC Member - Fan Rui

2024-06-05 Thread weijie guo
Congratulations, Rui. Well-deserved!

Best regards,

Weijie


Zakelly Lan  于2024年6月5日周三 18:05写道:

> Congratulations, Rui!
>
> Best,
> Zakelly
>
> On Wed, Jun 5, 2024 at 6:02 PM Piotr Nowojski 
> wrote:
>
> > Hi everyone,
> >
> > On behalf of the PMC, I'm very happy to announce another new Apache Flink
> > PMC Member - Fan Rui.
> >
> > Rui has been active in the community since August 2019. During this time
> he
> > has contributed a lot of new features. Among others:
> >   - Decoupling Autoscaler from Kubernetes Operator, and supporting
> > Standalone Autoscaler
> >   - Improvements to checkpointing, flamegraphs, restart strategies,
> > watermark alignment, network shuffles
> >   - Optimizing the memory and CPU usage of large operators, greatly
> > reducing the risk and probability of TaskManager OOM
> >
> > He reviewed a significant amount of PRs and has been active both on the
> > mailing lists and in Jira helping to both maintain and grow Apache
> Flink's
> > community. He is also our current Flink 1.20 release manager.
> >
> > In the last 12 months, Rui has been the most active contributor in the
> > Flink Kubernetes Operator project, while being the 2nd most active Flink
> > contributor at the same time.
> >
> > Please join me in welcoming and congratulating Fan Rui!
> >
> > Best,
> > Piotrek (on behalf of the Flink PMC)
> >
>


Re: [DISCUSS] FLIP-459: Support Flink hybrid shuffle integration with Apache Celeborn

2024-06-04 Thread weijie guo
Thanks Yuxin for the proposal!

When we first proposed Hybrid Shuffle, I wanted to support pluggable
storage tiers in the future. However, limited by the architecture of the
legacy Hybrid Shuffle at that time, this idea was not realized. The
new architecture abstracts the tiers nicely, and now it's time to introduce
support for external storage.

Big +1 for this one!
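
To illustrate what a "pluggable tier" means here: the sketch below is purely
illustrative, with made-up names, and does not reflect the actual
tiered-storage interfaces in Flink or the CIP-6 design. It only shows the
kind of small contract each storage backend (memory, disk, remote/Celeborn)
would implement.

import java.nio.ByteBuffer;

// Purely illustrative sketch with made-up names: each storage backend only
// needs to implement a small tier contract, and hybrid shuffle decides per
// segment which tier the data goes to.
public interface StorageTier {

    // Whether this tier can accept the next data segment of a subpartition.
    boolean tryStartNewSegment(int subpartitionId, int segmentId);

    // Producer side: append a buffer of the given subpartition to this tier.
    void write(int subpartitionId, ByteBuffer buffer);

    // Consumer side: read the next buffer of a segment, or null if none is
    // available yet.
    ByteBuffer read(int subpartitionId, int segmentId);
}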

Best regards,

Weijie


rexxiong  于2024年6月5日周三 00:08写道:

> Thanks Yuxin for the proposal. +1,  as a member of the Apache Celeborn
> community, I am very excited about the integration of Flink's Hybrid
> Shuffle with Apache Celeborn. The whole design of CIP-6 looks good to me. I
> am looking forward to this integration.
>
> Thanks,
> Jiashu Xiong
>
> Ethan Feng  于2024年6月4日周二 16:47写道:
>
> > +1 for this proposal.
> >
> > After internally reviewing the prototype of CIP-6, this would improve
> > performance and stability for Flink users using Celeborn.
> >
> > Expect to see this feature come out to the community.
> >
> > As I come from the Celeborn community, I hope more users can try to
> > use Celeborn when there are Flink batch jobs.
> >
> > Thanks,
> > Ethan Feng
> >
> > Yuxin Tan  于2024年6月4日周二 16:34写道:
> > >
> > > Hi, Venkatakrishnan,
> > >
> > > Thanks for joining the discussion. We appreciate your interest
> > > in contributing to the work. Once the FLIP and CIP proposals
> > > have been approved, we will create some JIRA tickets in Flink
> > > and Celeborn projects. Please feel free to take a look at the
> > > tickets and select any that resonate with your interests.
> > >
> > > Best,
> > > Yuxin
> > >
> > >
> > > Venkatakrishnan Sowrirajan  于2024年5月31日周五 23:11写道:
> > >
> > > > Thanks for this FLIP. We are also interested in learning/contributing
> > to
> > > > the hybrid shuffle integration with celeborn for batch executions.
> > > >
> > > > On Tue, May 28, 2024, 7:07 PM Yuxin Tan 
> > wrote:
> > > >
> > > > > Hi, Xintong,
> > > > >
> > > > > >  I think we can also publish the prototype codes so the
> > > > > community can better understand and help with it.
> > > > >
> > > > > Ok, I agree on the point. I will prepare and publish the code
> > > > > recently.
> > > > >
> > > > > Rui,
> > > > >
> > > > > > Kindly reminder: the image of CIP-6[1] cannot be loaded.
> > > > >
> > > > > Thanks for the reminder. I've updated the images.
> > > > >
> > > > >
> > > > > Best,
> > > > > Yuxin
> > > > >
> > > > >
> > > > > Rui Fan <1996fan...@gmail.com> 于2024年5月29日周三 09:33写道:
> > > > >
> > > > > > Thanks Yuxin for driving this proposal!
> > > > > >
> > > > > > Kindly reminder: the image of CIP-6[1] cannot be loaded.
> > > > > >
> > > > > > [1]
> > > > > >
> > > > > >
> > > > >
> > > >
> >
> https://urldefense.com/v3/__https://cwiki.apache.org/confluence/display/CELEBORN/CIP-6*Support*Flink*hybrid*shuffle*integration*with*Apache*Celeborn__;KysrKysrKys!!IKRxdwAv5BmarQ!ZRTc1aUSYMDBazuIwlet1Dzk2_DD9qKTgoDLH9jSwAVLgwplcuId_8JoXkH0i7AeWxKWXkL0sxM3AeW-H9OJ6v9uGw$
> > > > > >
> > > > > > Best,
> > > > > > Rui
> > > > > >
> > > > > > On Wed, May 29, 2024 at 9:03 AM Xintong Song <
> > tonysong...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > +1 for this proposal.
> > > > > > >
> > > > > > > We have been prototyping this feature internally at Alibaba
> for a
> > > > > couple
> > > > > > of
> > > > > > > months. Yuxin, I think we can also publish the prototype codes
> > so the
> > > > > > > community can better understand and help with it.
> > > > > > >
> > > > > > > Best,
> > > > > > >
> > > > > > > Xintong
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Tue, May 28, 2024 at 8:34 PM Yuxin Tan <
> > tanyuxinw...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I would like to start a discussion on FLIP-459 Support Flink
> > hybrid
> > > > > > > shuffle
> > > > > > > > integration with
> > > > > > > > Apache Celeborn[1]. Flink hybrid shuffle supports transitions
> > > > between
> > > > > > > > memory, disk, and
> > > > > > > > remote storage to improve performance and job stability.
> > > > > Concurrently,
> > > > > > > > Apache Celeborn
> > > > > > > > provides a stable, performant, scalable remote shuffle
> service.
> > > > This
> > > > > > > > integration proposal is to
> > > > > > > > harness the benefits from both hybrid shuffle and Celeborn
> > > > > > > simultaneously.
> > > > > > > >
> > > > > > > > Note that this proposal has two parts.
> > > > > > > > 1. The Flink-side modifications are in FLIP-459[1].
> > > > > > > > 2. The Celeborn-side changes are in CIP-6[2].
> > > > > > > >
> > > > > > > > Looking forward to everyone's feedback and suggestions. Thank
> > you!
> > > > > > > >
> > > > > > > > [1]
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
> 

[jira] [Created] (FLINK-35520) Nightly build can't compile as problems were detected from NoticeFileChecker

2024-06-04 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35520:
--

 Summary: Nightly build can't compile as problems were detected 
from NoticeFileChecker
 Key: FLINK-35520
 URL: https://issues.apache.org/jira/browse/FLINK-35520
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[June 15 Feature Freeze][SUMMARY] Flink 1.20 Release Sync 06/04/2024

2024-06-04 Thread weijie guo
Dear devs,


This is the fifth meeting for Flink 1.20 release cycle.


I'd like to share the information synced in the meeting.


- Feature Freeze


  It is worth noting that there are only 2 weeks left until the
feature freeze time (June 15, 2024),
and developers need to pay attention to this deadline.


- Features:


  So far we've had 13 FLIPs/features; there are two FLIPs that are
unlikely to be done in 1.20, and the status of the others is good.

  Contributors are encouraged to continuously update
the 1.20 wiki page[1].


- Blockers:


   [open] FLINK-35423 - ARRAY_EXCEPT should support set semantics

   - We need someone familiar with the SQL part to review the attached
     pull request.


 By the way, we have closed two performance regression
blockers (FLINK-35040 and FLINK-35215), thanks to everyone involved!


- Notice


  The CI pipeline triggered by pull requests seems somewhat unstable;
sometimes it doesn't trigger properly. I have created a ticket to track
it:
https://issues.apache.org/jira/browse/FLINK-35517


- Sync meeting[2]:


 The next meeting is 06/11/2024 10am (UTC+2) and 4pm (UTC+8), please
feel free to join us.

Lastly, we encourage attendees to fill out the topics to be discussed at
the bottom of the 1.20 wiki page[1] a day in advance, to make it easier for
everyone to understand the background of the topics, thanks!


[1] https://cwiki.apache.org/confluence/display/FLINK/1.20+Release

[2] https://meet.google.com/mtj-huez-apu


Best,

Robert, Rui, Ufuk, Weijie


[jira] [Created] (FLINK-35517) CI pipeline triggered by pull request seems unstable

2024-06-04 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35517:
--

 Summary: CI pipeline triggered by pull request seems unstable
 Key: FLINK-35517
 URL: https://issues.apache.org/jira/browse/FLINK-35517
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo


The Flink CI pipeline triggered by pull requests seems somewhat unstable.

For example, https://github.com/apache/flink/pull/24883 was filed 15 hours ago,
but the CI report is UNKNOWN.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35505) RegionFailoverITCase.testMultiRegionFailover has never ever restored state

2024-06-02 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35505:
--

 Summary: RegionFailoverITCase.testMultiRegionFailover has never 
ever restored state
 Key: FLINK-35505
 URL: https://issues.apache.org/jira/browse/FLINK-35505
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Flink 1.18.2 release date

2024-05-30 Thread weijie guo
Hi Yang

IIRC, 1.18.2 has not been kicked off yet.

Best regards,

Weijie


Yang LI  于2024年5月30日周四 22:33写道:

> Dear Flink Community,
>
> Anyone know about the release date for 1.18.2?
>
> Thanks very much,
> Yang
>


Re: [DISCUSS] Connector releases for Flink 1.19

2024-05-30 Thread weijie guo
Hi Jing

> Do we have an umbrella ticket for Flink 1.19 connectors release?

FYI: https://issues.apache.org/jira/browse/FLINK-35131 :)

Best regards,

Weijie


Jing Ge  于2024年5月28日周二 20:29写道:

> Hi,
>
> Thanks Danny for driving it! Do we have an umbrella ticket for Flink 1.19
> connectors release?
>
> @Sergei
> Thanks for the hint wrt JDBC connector. Where could users know that it
> already supports 1.19?
>
> Best regards,
> Jing
>
> On Fri, May 17, 2024 at 4:07 AM Sergey Nuyanzin 
> wrote:
>
> > >, it looks like opensearch-2.0.0 has been created now, all good.
> > yep, thanks to Martijn
> >
> > I've created RCs for Opensearch connector
> >
> > On Tue, May 14, 2024 at 12:38 PM Danny Cranmer 
> > wrote:
> >
> > > Hello,
> > >
> > > @Sergey Nuyanzin , it looks like opensearch-2.0.0
> > > has been created now, all good.
> > >
> > > @Hongshun Wang, thanks, since the CDC connectors are not yet released I
> > > had omitted them from this task. But happy to include them, thanks for
> > the
> > > support.
> > >
> > > Thanks,
> > > Danny
> > >
> > > On Mon, May 13, 2024 at 3:40 AM Hongshun Wang  >
> > > wrote:
> > >
> > >> Hello Danny,
> > >> Thanks for pushing this forward.  I am available to assist with the
> CDC
> > >> connector[1].
> > >>
> > >> [1] https://github.com/apache/flink-cdc
> > >>
> > >> Best
> > >> Hongshun
> > >>
> > >> On Sun, May 12, 2024 at 8:48 PM Sergey Nuyanzin 
> > >> wrote:
> > >>
> > >> > I'm in a process of preparation of RC for OpenSearch connector
> > >> >
> > >> > however it seems I need PMC help: need to create opensearch-2.0.0 on
> > >> jira
> > >> > since as it was proposed in another ML[1] to have 1.x for OpenSearch
> > >> > v1 and 2.x for OpenSearch v2
> > >> >
> > >> > would be great if someone from PMC could help here
> > >> >
> > >> > [1]
> https://lists.apache.org/thread/3w1rnjp5y612xy5k9yv44hy37zm9ph15
> > >> >
> > >> > On Wed, Apr 17, 2024 at 12:42 PM Ferenc Csaky
> > >> >  wrote:
> > >> > >
> > >> > > Thank you Danny and Sergey for pushing this!
> > >> > >
> > >> > > I can help with the HBase connector if necessary, will comment the
> > >> > > details to the relevant Jira ticket.
> > >> > >
> > >> > > Best,
> > >> > > Ferenc
> > >> > >
> > >> > >
> > >> > >
> > >> > >
> > >> > > On Wednesday, April 17th, 2024 at 11:17, Danny Cranmer <
> > >> > dannycran...@apache.org> wrote:
> > >> > >
> > >> > > >
> > >> > > >
> > >> > > > Hello all,
> > >> > > >
> > >> > > > I have created a parent Jira to cover the releases [1]. I have
> > >> > assigned AWS
> > >> > > > and MongoDB to myself and OpenSearch to Sergey. Please assign
> the
> > >> > > > relevant issue to yourself as you pick up the tasks.
> > >> > > >
> > >> > > > Thanks!
> > >> > > >
> > >> > > > [1] https://issues.apache.org/jira/browse/FLINK-35131
> > >> > > >
> > >> > > > On Tue, Apr 16, 2024 at 2:41 PM Muhammet Orazov
> > >> > > > mor+fl...@morazow.com.invalid wrote:
> > >> > > >
> > >> > > > > Thanks Sergey and Danny for clarifying, indeed it
> > >> > > > > requires committer to go through the process.
> > >> > > > >
> > >> > > > > Anyway, please let me know if I can be any help.
> > >> > > > >
> > >> > > > > Best,
> > >> > > > > Muhammet
> > >> > > > >
> > >> > > > > On 2024-04-16 11:19, Danny Cranmer wrote:
> > >> > > > >
> > >> > > > > > Hello,
> > >> > > > > >
> > >> > > > > > I have opened the VOTE thread for the AWS connectors release
> > >> [1].
> > >> > > > > >
> > >> > > > > > > If I'm not mistaking (please correct me if I'm wrong) this
> > >> > request is
> > >> > > > > > > not
> > >> > > > > > > about version update it is about new releases for
> connectors
> > >> > > > > >
> > >> > > > > > Yes, correct. If there are any other code changes required
> > then
> > >> > help
> > >> > > > > > would be appreciated.
> > >> > > > > >
> > >> > > > > > > Are you going to create an umbrella issue for it?
> > >> > > > > >
> > >> > > > > > We do not usually create JIRA issues for releases. That
> being
> > >> said
> > >> > it
> > >> > > > > > sounds like a good idea to have one place to track the
> status
> > of
> > >> > the
> > >> > > > > > connector releases and pre-requisite code changes.
> > >> > > > > >
> > >> > > > > > > I would like to work on this task, thanks for initiating
> it!
> > >> > > > > >
> > >> > > > > > The actual release needs to be performed by a committer.
> > >> However,
> > >> > help
> > >> > > > > > getting the connectors building against Flink 1.19 and
> testing
> > >> the
> > >> > RC
> > >> > > > > > is
> > >> > > > > > appreciated.
> > >> > > > > >
> > >> > > > > > Thanks,
> > >> > > > > > Danny
> > >> > > > > >
> > >> > > > > > [1]
> > >> > https://lists.apache.org/thread/0nw9smt23crx4gwkf6p1dd4jwvp1g5s0
> > >> > > > > >
> > >> > > > > > On Tue, Apr 16, 2024 at 6:34 AM Sergey Nuyanzin
> > >> > snuyan...@gmail.com
> > >> > > > > > wrote:
> > >> > > > > >
> > >> > > > > > > Thanks for volunteering Muhammet!
> > >> > > > > > > And thanks Danny for starting the activity.
> > >> > > > > > >
> > 

Re: [VOTE] Release flink-connector-opensearch v1.2.0, release candidate #1

2024-05-30 Thread weijie guo
Thanks Sergey for driving this release!

+1(non-binding)

1. Verified signatures and hash sums
2. Build from source with 1.8.0_291 succeeded
3. Checked RN.

Best regards,

Weijie


Yuepeng Pan  于2024年5月30日周四 10:08写道:

> +1 (non-binding)
>
> - Built from source code with JDK 1.8 on MaxOS- Run examples locally.-
> Checked release notes Best, Yuepeng Pan
>
>
> At 2024-05-28 22:53:10, "gongzhongqiang" 
> wrote:
> >+1(non-binding)
> >
> >- Verified signatures and hash sums
> >- Reviewed the web PR
> >- Built from source code with JDK 1.8 on Ubuntu 22.04
> >- Checked release notes
> >
> >Best,
> >Zhongqiang Gong
> >
> >
> >Sergey Nuyanzin  于2024年5月16日周四 06:03写道:
> >
> >> Hi everyone,
> >> Please review and vote on release candidate #1 for
> >> flink-connector-opensearch v1.2.0, as follows:
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific comments)
> >>
> >>
> >> The complete staging area is available for your review, which includes:
> >> * JIRA release notes [1],
> >> * the official Apache source release to be deployed to dist.apache.org
> >> [2],
> >> which are signed with the key with fingerprint
> >> F7529FAE24811A5C0DF3CA741596BBF0726835D8 [3],
> >> * all artifacts to be deployed to the Maven Central Repository [4],
> >> * source code tag v1.2.0-rc1 [5],
> >> * website pull request listing the new release [6].
> >> * CI build of the tag [7].
> >>
> >> The vote will be open for at least 72 hours. It is adopted by majority
> >> approval, with at least 3 PMC affirmative votes.
> >>
> >> Note that this release is for Opensearch v1.x
> >>
> >> Thanks,
> >> Release Manager
> >>
> >> [1] https://issues.apache.org/jira/projects/FLINK/versions/12353812
> >> [2]
> >>
> >>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-1.2.0-rc1
> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1734
> >> [5]
> >>
> >>
> https://github.com/apache/flink-connector-opensearch/releases/tag/v1.2.0-rc1
> >> [6] https://github.com/apache/flink-web/pull/740
> >> [7]
> >>
> >>
> https://github.com/apache/flink-connector-opensearch/actions/runs/9102334125
> >>
>


[jira] [Created] (FLINK-35487) ContinuousFileProcessingCheckpointITCase crashed as process exit with code 127

2024-05-29 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-35487:
--

 Summary: ContinuousFileProcessingCheckpointITCase crashed as 
process exit with code 127
 Key: FLINK-35487
 URL: https://issues.apache.org/jira/browse/FLINK-35487
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20.0
Reporter: Weijie Guo






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

