[jira] [Created] (FLINK-34351) Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint
Jane Chan created FLINK-34351:
---------------------------------

             Summary: Release Testing: Verify FLINK-33397 Support Configuring Different State TTLs using SQL Hint
                 Key: FLINK-34351
                 URL: https://issues.apache.org/jira/browse/FLINK-34351
             Project: Flink
          Issue Type: Sub-task
            Reporter: Jane Chan
            Assignee: Yubin Li


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
Re: [VOTE] Release flink-connector-kafka v3.1.0, release candidate #1
Thanks for driving this, Martijn!

+1 (binding)

- Verified checksum and signature
- Verified no binaries in source
- Built from source with Java 8
- Reviewed web PRs
- Ran a Flink SQL job reading and writing Kafka on a 1.18.1 cluster; results are as expected.

Best,
Qingsheng

On Tue, Jan 30, 2024 at 3:50 PM Mason Chen wrote:

> +1 (non-binding)
>
> * Verified LICENSE and NOTICE files (this RC has a NOTICE file that points
> to 2023 that has since been updated on the main branch by Hang)
> * Verified hashes and signatures
> * Verified no binaries
> * Verified poms point to 3.1.0
> * Reviewed web PR
> * Built from source
> * Verified git tag
>
> In the same vein as the web PR, do we want to prepare the PR to update the
> shortcode in the connector docs now [1]? Same for the Chinese version. I
> wonder if that should be included in the connector release instructions.
>
> [1] https://github.com/apache/flink-connector-kafka/blob/d89a082180232bb79e3c764228c4e7dbb9eb6b8b/docs/content/docs/connectors/datastream/kafka.md#L39
>
> Best,
> Mason
>
> On Sun, Jan 28, 2024 at 11:41 PM Hang Ruan wrote:
>
> > +1 (non-binding)
> >
> > - Validated checksum hash
> > - Verified signature
> > - Verified that no binaries exist in the source archive
> > - Built the source with Maven and jdk11
> > - Verified web PR
> > - Checked that the jar is built by jdk8
> >
> > Best,
> > Hang
> >
> > Martijn Visser wrote on Fri, Jan 26, 2024 at 21:05:
> >
> > > Hi everyone,
> > > Please review and vote on the release candidate #1 for the Flink Kafka
> > > connector version 3.1.0, as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > > This release is compatible with Flink 1.17.* and Flink 1.18.*
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to dist.apache.org [2],
> > > which are signed with the key with fingerprint
> > > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v3.1.0-rc1 [5],
> > > * website pull request listing the new release [6].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Release Manager
> > >
> > > [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353135
> > > [2] https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.1.0-rc1
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4] https://repository.apache.org/content/repositories/orgapacheflink-1700
> > > [5] https://github.com/apache/flink-connector-kafka/releases/tag/v3.1.0-rc1
> > > [6] https://github.com/apache/flink-web/pull/718
Re: [VOTE] Release flink-connector-parent, release candidate #1
+1 (binding)

- Verified checksum and signature
- Verified pom content
- Built flink-connector-kafka from source with the parent pom in staging

Best,
Qingsheng

On Thu, Feb 1, 2024 at 11:19 PM Chesnay Schepler wrote:

> - checked source/maven pom contents
>
> Please file a ticket to exclude tools/release from the source release.
>
> +1 (binding)
>
> On 29/01/2024 15:59, Maximilian Michels wrote:
> > - Inspected the source for licenses and corresponding headers
> > - Checksums and signature OK
> >
> > +1 (binding)
> >
> > On Tue, Jan 23, 2024 at 4:08 PM Etienne Chauchot wrote:
> >> Hi everyone,
> >>
> >> Please review and vote on the release candidate #1 for the version
> >> 1.1.0, as follows:
> >>
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific comments)
> >>
> >> The complete staging area is available for your review, which includes:
> >> * JIRA release notes [1],
> >> * the official Apache source release to be deployed to dist.apache.org
> >> [2], which are signed with the key with fingerprint
> >> D1A76BA19D6294DD0033F6843A019F0B8DD163EA [3],
> >> * all artifacts to be deployed to the Maven Central Repository [4],
> >> * source code tag v1.1.0-rc1 [5],
> >> * website pull request listing the new release [6]
> >> * confluence wiki: connector parent upgrade to version 1.1.0 that will
> >> be validated after the artifact is released (there is no PR mechanism on
> >> the wiki) [7]
> >>
> >> The vote will be open for at least 72 hours. It is adopted by majority
> >> approval, with at least 3 PMC affirmative votes.
> >>
> >> Thanks,
> >>
> >> Etienne
> >>
> >> [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353442
> >> [2] https://dist.apache.org/repos/dist/dev/flink/flink-connector-parent-1.1.0-rc1
> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4] https://repository.apache.org/content/repositories/orgapacheflink-1698/
> >> [5] https://github.com/apache/flink-connector-shared-utils/releases/tag/v1.1.0-rc1
> >> [6] https://github.com/apache/flink-web/pull/717
> >> [7] https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
[jira] [Created] (FLINK-34350) Hugo cannot rebuild site for zh doc
Jane Chan created FLINK-34350:
---------------------------------

             Summary: Hugo cannot rebuild site for zh doc
                 Key: FLINK-34350
                 URL: https://issues.apache.org/jira/browse/FLINK-34350
             Project: Flink
          Issue Type: Improvement
            Reporter: Jane Chan

For en docs, Hugo can detect changes and automatically rebuild the site. However, for zh docs, it does not work. From the console, we can see that Hugo does detect the changes.

{code:java}
Source changed WRITE|CHMOD "/Users/jane.cjm/GitHub/flink/docs/content.zh/docs/dev/table/sql/queries/window-tvf.md"
{code}

However, the changes are not applied to the site automatically; I have to stop and reinvoke `./build_docs.sh` to make it work.
[jira] [Created] (FLINK-34349) Release Testing: Verify FLINK-34219 Introduce a new join operator to support minibatch
Shuai Xu created FLINK-34349:
--------------------------------

             Summary: Release Testing: Verify FLINK-34219 Introduce a new join operator to support minibatch
                 Key: FLINK-34349
                 URL: https://issues.apache.org/jira/browse/FLINK-34349
             Project: Flink
          Issue Type: Sub-task
          Components: Table SQL / Runtime
    Affects Versions: 1.19.0
            Reporter: Shuai Xu
            Assignee: Shuai Xu
             Fix For: 1.19.0
[jira] [Created] (FLINK-34347) Kubernetes native resource manager request wrong spec.
Ruibin Xing created FLINK-34347:
-----------------------------------

             Summary: Kubernetes native resource manager request wrong spec.
                 Key: FLINK-34347
                 URL: https://issues.apache.org/jira/browse/FLINK-34347
             Project: Flink
          Issue Type: Bug
          Components: Deployment / Kubernetes, Kubernetes Operator
    Affects Versions: kubernetes-operator-1.6.1, 1.18.0
            Reporter: Ruibin Xing
         Attachments: jobmanager.csv, taskmanager_octopus-16-323-octopus-engine-write-proxy-taskmanager-3-326.csv

We had a Flink spec in which the TM CPU was set to 0.5, and then upgraded it to 4.0. We found the job manager requesting TMs with both 0.5 CPU and 4.0 CPU. Most TMs with 0.5 CPU were released soon; however, one TM with 0.5 CPU remained and caused lag in the job.

Logs for the mixed TM requests:

{code:java}
2024-02-03 10:10:41,414 INFO org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Requested worker octopus-16-323-octopus-engine-write-proxy-taskmanager-3-244 with resource spec WorkerResourceSpec {cpuCores=4.0, taskHeapSize=5.637gb (6053219520 bytes), taskOffHeapSize=1024.000mb (1073741824 bytes), networkMemSize=64.000mb (67108864 bytes), managedMemSize=0 bytes, numSlots=4}.
2024-02-03 10:10:44,844 INFO org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Requesting new worker with resource spec WorkerResourceSpec {cpuCores=0.5, taskHeapSize=1.137gb (1221381320 bytes), taskOffHeapSize=1024.000mb (1073741824 bytes), networkMemSize=64.000mb (67108864 bytes), managedMemSize=0 bytes, numSlots=4}, current pending count: 1.
2024-02-03 10:10:44,920 INFO org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Requesting new worker with resource spec WorkerResourceSpec {cpuCores=0.5, taskHeapSize=1.137gb (1221381320 bytes), taskOffHeapSize=1024.000mb (1073741824 bytes), networkMemSize=64.000mb (67108864 bytes), managedMemSize=0 bytes, numSlots=4}, current pending count: 2.
{code}

The name of the wrong TM: octopus-16-323-octopus-engine-write-proxy-taskmanager-3-326. Relevant logs are attached.
[jira] [Created] (FLINK-34348) Release Testing: Verify FLINK-20281 Window aggregation supports changelog stream input
xuyang created FLINK-34348:
------------------------------

             Summary: Release Testing: Verify FLINK-20281 Window aggregation supports changelog stream input
                 Key: FLINK-34348
                 URL: https://issues.apache.org/jira/browse/FLINK-34348
             Project: Flink
          Issue Type: Sub-task
          Components: Table SQL / API
    Affects Versions: 1.19.0
            Reporter: xuyang
            Assignee: xuyang
             Fix For: 1.19.0
[jira] [Created] (FLINK-34346) Release Testing: Verify FLINK-24024 Support session Window TVF
xuyang created FLINK-34346:
------------------------------

             Summary: Release Testing: Verify FLINK-24024 Support session Window TVF
                 Key: FLINK-34346
                 URL: https://issues.apache.org/jira/browse/FLINK-34346
             Project: Flink
          Issue Type: Sub-task
          Components: Table SQL / API
    Affects Versions: 1.19.0
            Reporter: xuyang
            Assignee: xuyang
             Fix For: 1.19.0
Re: [DISCUSS] FLIP-409: DataStream V2 Building Blocks: DataStream, Partitioning and ProcessFunction
Hi Xuannan and Xintong,

Good point! After further consideration, I feel that we should make the Broadcast + NonKeyed/Keyed process function different from the normal TwoInputProcessFunction, because records from the broadcast input indeed correspond to all partitions, while records from the non-broadcast edge have explicit partitions. When we consider the data of the broadcast input, it is only valid to do something on all the partitions at once, such as `applyToKeyedState`. Similarly, other operations (e.g., endOfInput) that do not determine the current partition should also only be allowed to be performed on all partitions. This FLIP has been updated.

Best regards,
Weijie

Xintong Song wrote on Thu, Feb 1, 2024 at 11:31:

> OK, I see your point.
>
> I think the demand for updating states and emitting outputs upon receiving
> a broadcast record makes sense. However, the way
> `KeyedBroadcastProcessFunction` supports this may not be optimal. E.g., if
> `Collector#collect` is called in `processBroadcastElement` but outside of
> `Context#applyToKeyedState`, the result can be undefined.
>
> Currently in this FLIP, a `TwoInputStreamProcessFunction` is not aware of
> which input is KeyedStream and which is BroadcastStream, which makes
> supporting things like `applyToKeyedState` difficult. I think we can
> provide a built-in function similar to `KeyedBroadcastProcessFunction` on
> top of `TwoInputStreamProcessFunction` to address this demand.
>
> WDYT?
>
> Best,
> Xintong
>
> On Thu, Feb 1, 2024 at 10:41 AM Xuannan Su wrote:
>
> > Hi Weijie and Xingtong,
> >
> > Thanks for the reply! Please see my comments below.
> >
> > > Does this mean if we want to support (KeyedStream, BroadcastStream) ->
> > > (KeyedStream), we must make sure that no data can be output upon
> > > processing records from the input BroadcastStream? That's probably a
> > > reasonable limitation.
> >
> > I don't think that the requirement for supporting (KeyedStream,
> > BroadcastStream) -> (KeyedStream) is that no data can be output upon
> > processing the BroadcastStream. For instance, in the current
> > `KeyedBroadcastProcessFunction`, we use Context#applyToKeyedState to
> > produce output results, which can be keyed in the same manner as the
> > keyed input stream, upon processing data from the BroadcastStream.
> > Therefore, I believe it only requires that the user must ensure that
> > the output is keyed in the same way as the input, in this case, the
> > same way as the keyed input stream. I think this requirement is
> > consistent with that of (KeyedStream, KeyedStream) -> (KeyedStream).
> > Thus, I believe that supporting (KeyedStream, BroadcastStream) ->
> > (KeyedStream) will not introduce complexity for the users. WDYT?
> >
> > Best regards,
> > Xuannan
> >
> > On Tue, Jan 30, 2024 at 3:12 PM weijie guo wrote:
> >
> > > Hi Xintong,
> > >
> > > Thanks for your reply.
> > >
> > > > Does this mean if we want to support (KeyedStream, BroadcastStream) ->
> > > > (KeyedStream), we must make sure that no data can be output upon
> > > > processing records from the input BroadcastStream? That's probably a
> > > > reasonable limitation.
> > >
> > > I think so; this is the restriction that has to be imposed in order to
> > > avoid re-partitioning (i.e. shuffle). If one just wants to get a
> > > keyed stream and doesn't care about the data distribution, then
> > > explicit KeyBy partitioning works as expected.
> > >
> > > > The problem is would this limitation be too implicit for the users to
> > > > understand.
> > >
> > > Since we can't check for this limitation at compile time, if we were to
> > > add support for this case, we would have to introduce additional
> > > runtime checks to ensure program correctness. For now, I'm inclined not
> > > to support it, as it's hard for users to understand this restriction
> > > unless we have something better. And we can always add it later if we
> > > do realize there's a strong demand for it.
> > >
> > > > 1. I'd suggest renaming the method with timestamp to something like
> > > > `collectAndOverwriteTimestamp`. That might help users understand that
> > > > they don't always need to call this method, unless they explicitly
> > > > want to overwrite the timestamp.
> > >
> > > Makes sense; I have updated this FLIP toward this new method name.
> > >
> > > > 2. While this method provides a way to set timestamps, how would
> > > > users read timestamps from the records?
> > >
> > > Ah, good point. I will introduce a new method to get the timestamp of
> > > the current record in RuntimeContext.
> > >
> > > Best regards,
> > >
> > > Weijie
> > >
> > > Xintong Song wrote on Tue, Jan 30, 2024 at 14:04:
> > >
> > > > Just trying to understand.
> > > >
> > > > > Is there a particular reason we do not support a
> > > > > `TwoInputProcessFunction` to combine a KeyedStream with a
> > > > > BroadcastStream to result in a KeyedStream? There seems to be a valid
> > > > > use cas
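The distinction the thread keeps returning to, that a broadcast record conceptually applies to every key partition while a keyed record belongs to exactly one, can be illustrated outside of Flink. Below is a minimal, hypothetical Python sketch of those semantics (not Flink's actual API; the class and method names are invented for illustration): the broadcast-side handler has no "current" partition, so the only well-defined thing it can do is act on all partitions at once, analogous to `Context#applyToKeyedState` in `KeyedBroadcastProcessFunction`.

```python
# Hypothetical sketch (not Flink API) of the broadcast-vs-keyed semantics
# discussed above: a keyed record updates exactly one key partition, while a
# broadcast record may only act on all key partitions at once.

class BroadcastKeyedSketch:
    def __init__(self):
        # key -> per-partition state (here: a running sum per key)
        self.keyed_state = {}

    def process_keyed_element(self, key, value):
        """Keyed input: the current partition is well defined, so we may
        update exactly that key's state."""
        self.keyed_state[key] = self.keyed_state.get(key, 0) + value

    def process_broadcast_element(self, threshold):
        """Broadcast input: there is no 'current' partition, so the record is
        applied uniformly across every partition's state; here it emits all
        keys whose running sum has reached the broadcast threshold. Note the
        output is still keyed the same way as the keyed input."""
        return [(k, v) for k, v in self.keyed_state.items() if v >= threshold]


f = BroadcastKeyedSketch()
f.process_keyed_element("a", 2)
f.process_keyed_element("a", 3)
f.process_keyed_element("b", 1)
print(f.process_broadcast_element(3))  # [('a', 5)]
```

The sketch only illustrates why any output emitted while processing a broadcast record must be defined over all partitions; the real built-in support would live in something like the `KeyedBroadcastProcessFunction`-style function layered on `TwoInputStreamProcessFunction` that Xintong suggests.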
Re: [ANNOUNCE] Flink 1.19 feature freeze & sync summary on 01/30/2024
Hi release managers,

> The feature freeze of 1.19 has started now. That means that no new features
> or improvements should now be merged into the master branch unless you ask
> the release managers first, which has already been done for PRs, or pending
> on CI to pass. Bug fixes and documentation PRs can still be merged.

I'm curious whether code cleanup could also be merged. FLINK-31449 [1] removed the DeclarativeSlotManager-related logic, and some other classes are no longer used after FLINK-31449; FLINK-34345 [2][3] will remove them. I checked that these classes are not used in the master branch, and the PR [3] has been reviewed. Could I merge it now, or should it wait until after Flink 1.19?

Looking forward to your feedback, thanks~

[1] https://issues.apache.org/jira/browse/FLINK-31449
[2] https://issues.apache.org/jira/browse/FLINK-34345
[3] https://github.com/apache/flink/pull/24257

Best,
Rui

On Wed, Jan 31, 2024 at 5:20 PM Lincoln Lee wrote:

> Hi Matthias,
>
> Thanks for letting us know! After discussing with the 1.19 release
> managers, we agreed to merge these PRs.
>
> Thank you for the work on the GHA workflows!
>
> Best,
> Yun, Jing, Martijn and Lincoln
>
> Matthias Pohl wrote on Tue, Jan 30, 2024 at 22:20:
>
> > Thanks for the update, Lincoln.
> >
> > fyi: I merged FLINK-32684 (deprecating AkkaOptions) [1] since we agreed
> > in today's meeting that this change is still ok to go in.
> >
> > The beta version of the GitHub Actions workflows (FLIP-396 [2]) is also
> > finalized (see related PRs for basic CI [3], nightly master [4] and
> > nightly scheduling [5]). I'd like to merge the changes before creating
> > the release-1.19 branch. That would enable us to see whether we miss
> > anything in the GHA workflows setup when creating a new release branch.
> >
> > The changes are limited to a few CI scripts that are also used for Azure
> > Pipelines (see [3]). The majority of the changes are GHA-specific and
> > shouldn't affect the Azure Pipelines CI setup.
> >
> > Therefore, I'm requesting approval from the 1.19 release managers to go
> > ahead with merging the mentioned PRs [3, 4, 5].
> >
> > Matthias
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-32684
> > [2] https://cwiki.apache.org/confluence/display/FLINK/FLIP-396%3A+Trial+to+test+GitHub+Actions+as+an+alternative+for+Flink%27s+current+Azure+CI+infrastructure
> > [3] https://github.com/apache/flink/pull/23970
> > [4] https://github.com/apache/flink/pull/23971
> > [5] https://github.com/apache/flink/pull/23972
> >
> > On Tue, Jan 30, 2024 at 1:51 PM Lincoln Lee wrote:
> >
> >> Hi everyone,
> >>
> >> (Since the feature freeze and the release sync are on the same day, we
> >> merged the announcement and the sync summary together.)
> >>
> >> *- Feature freeze*
> >> The feature freeze of 1.19 has started now. That means that no new
> >> features or improvements should now be merged into the master branch
> >> unless you ask the release managers first, which has already been done
> >> for PRs, or pending on CI to pass. Bug fixes and documentation PRs can
> >> still be merged.
> >>
> >> *- Cutting the release branch*
> >> Currently we have three blocker issues [1][2][3], and we will try to
> >> close them this Friday. We are planning to cut the release branch next
> >> Monday (Feb 6th) if there are no new test instabilities, and we'll make
> >> another announcement on the dev mailing list then.
> >>
> >> *- Cross-team testing*
> >> Release testing is expected to start next week as soon as we cut the
> >> release branch. As a prerequisite, before we start testing, please
> >> make sure:
> >> 1. Whether the feature needs cross-team testing
> >> 2. If yes, that the documentation is completed
> >> There's an umbrella ticket [4] for tracking the 1.19 testing. The RMs
> >> will create tickets for all completed features listed on the 1.19 wiki
> >> page [5] and assign them to each feature's responsible contributor.
> >> Contributors are also encouraged to create tickets following the steps
> >> in the umbrella ticket if there are other features that need cross-team
> >> testing.
> >>
> >> *- Release notes*
> >> All new features and behavior changes require authors to fill out the
> >> 'Release Note' column in JIRA (click the Edit button and scroll the
> >> page to the center), especially since 1.19 involves a lot of
> >> deprecation, which is important for users and will be part of the
> >> release announcement.
> >>
> >> *- Sync meeting* (https://meet.google.com/vcx-arzs-trv)
> >> We've already switched to a weekly release sync, so the next release
> >> sync will be on Feb 6th, 2024. Feel free to join us!
> >>
> >> [1] https://issues.apache.org/jira/browse/FLINK-34148
> >> [2] https://issues.apache.org/jira/browse/FLINK-34007
> >> [3] https://issues.apache.org/jira/browse/FLINK-34259
> >> [4] https://issues.apache.org/jira/browse/FLINK-34285
> >> [5] https://cwiki.apache.org/confluence/display/FLINK/1.19+Release
> >>
> >> Best,
> >> Yun, Jing, Martijn and Lincoln
Community over Code EU 2024 Travel Assistance Applications now open!
Hello to all users, contributors and Committers!

The Travel Assistance Committee (TAC) is pleased to announce that travel assistance applications for Community over Code EU 2024 are now open!

We will be supporting Community over Code EU in Bratislava, Slovakia, June 3rd-5th, 2024.

TAC exists to help those who would like to attend Community over Code events but are unable to do so for financial reasons. For more info on this year's applications and qualifying criteria, please visit the TAC website at < https://tac.apache.org/ >. Applications are already open on https://tac-apply.apache.org/, so don't delay!

The Apache Travel Assistance Committee will only be accepting applications from people who are able to attend the full event.

Important: Applications close on Friday, March 1st, 2024. Applicants have until the closing date above to submit their applications (which should contain as much supporting material as required to efficiently and accurately process their request); this will enable TAC to announce successful applications shortly afterwards.

As usual, TAC expects to deal with applications from a diverse range of backgrounds; therefore, we encourage (as always) anyone thinking about sending in an application to do so ASAP.

For those who will need a visa to enter the country, we advise you to apply now so that you have enough time in case of interview delays. Do not wait until you know whether you have been accepted.

We look forward to greeting many of you in Bratislava, Slovakia in June 2024!

Kind Regards,

Gavin

(On behalf of the Travel Assistance Committee)