Re: [VOTE] Release 1.13.1, release candidate #1

2021-05-26 Thread 张静
Hi Dawid,
I see. Thanks for the explanation.

Best,
JING ZHANG

Dawid Wysakowicz wrote on Wed, May 26, 2021 at 11:06 PM:
>
> Hi Jing Zhang,
>
> Yes, it is expected that the JIRA page might contain open issues. All
> open issues will automatically be moved to the next version once we
> mark the 1.13.1 version as released.
>
> Best,
>
> Dawid
>
> On 26/05/2021 15:59, 张静 wrote:
> > Hi Dawid,
> >   +1.  Thank you for driving this release.
> >   I checked:
> >   1. Built flink-1.13.1-src.tgz from source successfully
> >   2. Started a local Flink cluster based on the 1.13.1 built in Step 1,
> > ran the WordCount example; the WebUI looks good, no suspicious output/log.
> >   3. Repeated Step 2 with flink-1.13.1-bin-scala_2.11.tgz
> >   4. Started a local Flink cluster and ran some SQL queries using SQL
> > Client; the query results are as expected.
> >   5. Found that the error message contains strange blue characters (e.g. '^' and
> > '[') when initializing the SQL Client session with an invalid SQL file; will
> > improve it later
> >
> > BTW: the [JIRA release
> > notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350058)
> > contain some in-progress JIRA issues (e.g. FLINK-22726, FLINK-22759,
> > FLINK-13856); is that expected?
> >
> > Best,
> > JING ZHANG
> >
> >
> >
> >
> >
> > Roc Marshal wrote on Wed, May 26, 2021 at 7:31 PM:
> >> Hi Dawid,
> >>
> >> +1.
> >>
> >> Thank you for driving this release.
> >>
> >> I checked the following things:
> >>
> >> - downloaded and built source code
> >> - started cluster and succeeded in running jobs (Flink applications defined by me)
> >> - checked diff of pom files between 1.13.1-rc1 and 1.13.0
> >>
> >> Best, Roc.
> >>
> >> On 2021-05-25 21:32:46, "Dawid Wysakowicz" wrote:
> >>
> >> Hi everyone,
> >> Please review and vote on the release candidate #1 for the version 1.13.1, 
> >> as follows:
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific comments)
> >>
> >>
> >> The complete staging area is available for your review, which includes:
> >> * JIRA release notes [1],
> >> * the official Apache source release and binary convenience releases to be 
> >> deployed to dist.apache.org [2], which are signed with the key with 
> >> fingerprint 31D2DD10BFC15A2D [3],
> >> * all artifacts to be deployed to the Maven Central Repository [4],
> >> * source code tag "release-1.13.1-rc1" [5],
> >> * website pull request listing the new release and adding announcement 
> >> blog post [6].
> >>
> >> The vote will be open for at least 72 hours. It is adopted by majority 
> >> approval, with at least 3 PMC affirmative votes.
> >>
> >> Best,
> >> Dawid
> >>
> >> [1] 
> >> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350058
> >> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.13.1-rc1/
> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4] https://repository.apache.org/content/repositories/orgapacheflink-1422/
> >> [5] https://github.com/apache/flink/tree/release-1.13.1-rc1
> >> [6] https://github.com/apache/flink-web/pull/448
>


Re: [DISCUSS] Watermark propagation with Sink API

2021-05-26 Thread Eron Wright
Arvid,

1. I assume that the method name `invoke` stems from considering the
SinkFunction to be a functional interface, but is otherwise meaningless.
Keeping it as `writeWatermark` does keep it symmetric with SinkWriter.  My
vote is to leave it.  You decide.

2+3. I too considered adding a `WatermarkContext`, but it would merely be a
placeholder.  I don't anticipate any context info in the future.  As we see
with invoke, it is possible to add a context later in a
backwards-compatible way.  My vote is to not introduce a context.  You
decide.

4. No anticipated compatibility issues.

5. Short answer, it works as expected.  The new methods are invoked
whenever the underlying operator receives a watermark.  I do believe that
batch and ingestion time applications receive watermarks. Seems the
programming model is more unified in that respect since 1.12 (FLIP-134).

6. The failure behavior is the same as for elements.

Thanks,
Eron
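The behavior agreed on in points 1-6 can be sketched in a minimal, self-contained way. This is not Flink's actual code: `Watermark`, `SinkFunction`, and the demo harness below are simplified stand-ins; only the `writeWatermark` method name comes from the thread. The key point is that the new method is a default no-op, which is why no compatibility issues are anticipated (point 4).

```java
import java.util.ArrayList;
import java.util.List;

public class WatermarkSinkSketch {

    /** Stand-in for Flink's Watermark: a single event-time timestamp. */
    static final class Watermark {
        final long timestamp;
        Watermark(long timestamp) { this.timestamp = timestamp; }
    }

    /** Simplified SinkFunction carrying the proposed addition. */
    interface SinkFunction<IN> {
        void invoke(IN value) throws Exception;

        // Default no-op: existing sinks compile and behave unchanged.
        default void writeWatermark(Watermark watermark) throws Exception {}
    }

    /** A sink that records watermarks, e.g. to forward them downstream. */
    static final class RecordingSink implements SinkFunction<String> {
        final List<String> events = new ArrayList<>();
        final List<Long> watermarks = new ArrayList<>();

        @Override public void invoke(String value) { events.add(value); }
        @Override public void writeWatermark(Watermark w) { watermarks.add(w.timestamp); }
    }

    /** Simulates the operator invoking the sink for records and watermarks
     *  (point 5: the method is called whenever the operator receives one). */
    static RecordingSink demo() throws Exception {
        RecordingSink sink = new RecordingSink();
        sink.invoke("a");
        sink.writeWatermark(new Watermark(100L));
        sink.invoke("b");
        sink.writeWatermark(new Watermark(200L));
        return sink;
    }

    public static void main(String[] args) throws Exception {
        RecordingSink sink = demo();
        System.out.println(sink.events + " " + sink.watermarks);
    }
}
```

A sink that does not care about watermarks simply never overrides the default, mirroring the backwards-compatible design discussed above.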

On Tue, May 25, 2021 at 12:42 PM Arvid Heise  wrote:

> Hi Eron,
>
> I think the FLIP is crisp and mostly good to go. Some smaller
> things/questions:
>
>1. SinkFunction#writeWatermark could be named
>SinkFunction#invokeWatermark or invokeOnWatermark to keep it symmetric.
>2. We could add the context parameter to both. For SinkWriter#Context,
>we currently do not gain much. SinkFunction#Context also exposes
> processing
>time, which may or may not be handy and is currently mostly used for
>StreamingFileSink bucket policies. We may add that processing time flag
>also to SinkWriter#Context in the future.
>3. Alternatively, we could also add a different context parameter just
>to keep the API stable while allowing additional information to be
> passed
>in the future.
>4. Would we run into any compatibility issue if we use Flink 1.13 source
>in Flink 1.14 (with this FLIP) or vice versa?
>5. What happens with sinks that use the new methods in applications that
>do not have watermarks (batch mode, processing time)? Does this also
> work
>with ingestion time sufficiently?
>6. How do exactly once sinks deal with written watermarks in case of
>failure? I guess it's the same as normal records. (Either rollback of
>transaction or deduplication on resumption)
>
> Best,
>
> Arvid
>
> On Tue, May 25, 2021 at 6:44 PM Eron Wright wrote:
>
> > Does anyone have further comment on FLIP-167?
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-167%3A+Watermarks+for+Sink+API
> >
> > Thanks,
> > Eron
> >
> >
> > On Thu, May 20, 2021 at 5:02 PM Eron Wright 
> > wrote:
> >
> > > Filed FLIP-167: Watermarks for Sink API:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-167%3A+Watermarks+for+Sink+API
> > >
> > > I'd like to call a vote next week, is that reasonable?
> > >
> > >
> > > On Wed, May 19, 2021 at 6:28 PM Zhou, Brian  wrote:
> > >
> > >> Hi Arvid and Eron,
> > >>
> > >> Thanks for the discussion and I read through Eron's pull request and I
> > >> think this can benefit Pravega Flink connector as well.
> > >>
> > >> Here is some background. Pravega has had the watermark concept in the
> > >> event stream for two years; here is a blog introduction[1] to the
> > >> Pravega watermark.
> > >> The Pravega Flink connector also added this watermark integration last
> > >> year: we wanted to propagate the Flink watermark to Pravega in the
> > >> SinkFunction, and at that time we just used the existing Flink API,
> > >> keeping the last watermark in memory and checking whether the watermark
> > >> changed for each event[2], which is not efficient. With such a new
> > >> interface, we can also manage the watermark propagation much more easily.
> > >>
> > >> [1] https://pravega.io/blog/2019/11/08/pravega-watermarking-support/
> > >> [2]
> > >>
> >
> https://github.com/pravega/flink-connectors/blob/master/src/main/java/io/pravega/connectors/flink/FlinkPravegaWriter.java#L465
> > >>
> > >> -Original Message-
> > >> From: Arvid Heise 
> > >> Sent: Wednesday, May 19, 2021 16:06
> > >> To: dev
> > >> Subject: Re: [DISCUSS] Watermark propagation with Sink API
> > >>
> > >>
> > >> [EXTERNAL EMAIL]
> > >>
> > >> Hi Eron,
> > >>
> > >> Thanks for pushing that topic. I can now see that the benefit is even
> > >> bigger than I initially thought, so it's worthwhile anyway to include
> > >> it.
> > >>
> > >> I also briefly thought about exposing watermarks to all UDFs, but here
> > >> I really struggle to see specific use cases. Could you maybe take a few
> > >> minutes to think about it as well? I could only see someone misusing
> > >> Async IO as a sink where a real sink would be more appropriate. In
> > >> general, if there is not a clear use case, we shouldn't add the
> > >> functionality, as it's just increased maintenance for no value.
> > >>
> > >> If we stick to the plan, I think your PR is already in good shape. We
> > >> need to create a FLIP for it though,

Re: [VOTE] Release 1.13.1, release candidate #1

2021-05-26 Thread Dawid Wysakowicz
Hi Jing Zhang,

Yes, it is expected that the JIRA page might contain open issues. All
open issues will automatically be moved to the next version once we mark
the 1.13.1 version as released.

Best,

Dawid

On 26/05/2021 15:59, 张静 wrote:
> Hi Dawid,
>   +1.  Thank you for driving this release.
>   I checked:
>   1. Built flink-1.13.1-src.tgz from source successfully
>   2. Started a local Flink cluster based on the 1.13.1 built in Step 1,
> ran the WordCount example; the WebUI looks good, no suspicious output/log.
>   3. Repeated Step 2 with flink-1.13.1-bin-scala_2.11.tgz
>   4. Started a local Flink cluster and ran some SQL queries using SQL
> Client; the query results are as expected.
>   5. Found that the error message contains strange blue characters (e.g. '^' and
> '[') when initializing the SQL Client session with an invalid SQL file; will
> improve it later
>
> BTW: the [JIRA release
> notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350058)
> contain some in-progress JIRA issues (e.g. FLINK-22726, FLINK-22759,
> FLINK-13856); is that expected?
>
> Best,
> JING ZHANG
>
>
>
>
>
> Roc Marshal wrote on Wed, May 26, 2021 at 7:31 PM:
>> Hi Dawid,
>>
>> +1.
>>
>> Thank you for driving this release.
>>
>> I checked the following things:
>>
>> - downloaded and built source code
>> - started cluster and succeeded in running jobs (Flink applications defined by me)
>> - checked diff of pom files between 1.13.1-rc1 and 1.13.0
>>
>> Best, Roc.
>>
>> On 2021-05-25 21:32:46, "Dawid Wysakowicz" wrote:
>>
>> Hi everyone,
>> Please review and vote on the release candidate #1 for the version 1.13.1, 
>> as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>>
>>
>> The complete staging area is available for your review, which includes:
>> * JIRA release notes [1],
>> * the official Apache source release and binary convenience releases to be 
>> deployed to dist.apache.org [2], which are signed with the key with 
>> fingerprint 31D2DD10BFC15A2D [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag "release-1.13.1-rc1" [5],
>> * website pull request listing the new release and adding announcement blog 
>> post [6].
>>
>> The vote will be open for at least 72 hours. It is adopted by majority 
>> approval, with at least 3 PMC affirmative votes.
>>
>> Best,
>> Dawid
>>
>> [1] 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350058
>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.13.1-rc1/
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4] https://repository.apache.org/content/repositories/orgapacheflink-1422/
>> [5] https://github.com/apache/flink/tree/release-1.13.1-rc1
>> [6] https://github.com/apache/flink-web/pull/448





Re: [VOTE] Release 1.13.1, release candidate #1

2021-05-26 Thread 张静
Hi Dawid,
  +1.  Thank you for driving this release.
  I checked:
  1. Built flink-1.13.1-src.tgz from source successfully
  2. Started a local Flink cluster based on the 1.13.1 built in Step 1,
ran the WordCount example; the WebUI looks good, no suspicious output/log.
  3. Repeated Step 2 with flink-1.13.1-bin-scala_2.11.tgz
  4. Started a local Flink cluster and ran some SQL queries using SQL
Client; the query results are as expected.
  5. Found that the error message contains strange blue characters (e.g. '^' and
'[') when initializing the SQL Client session with an invalid SQL file; will
improve it later

BTW: the [JIRA release
notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350058)
contain some in-progress JIRA issues (e.g. FLINK-22726, FLINK-22759,
FLINK-13856); is that expected?

Best,
JING ZHANG





Roc Marshal wrote on Wed, May 26, 2021 at 7:31 PM:
>
> Hi Dawid,
>
> +1.
>
> Thank you for driving this release.
>
> I checked the following things:
>
> - downloaded and built source code
> - started cluster and succeeded in running jobs (Flink applications defined by me)
> - checked diff of pom files between 1.13.1-rc1 and 1.13.0
>
> Best, Roc.
>
> On 2021-05-25 21:32:46, "Dawid Wysakowicz" wrote:
>
> Hi everyone,
> Please review and vote on the release candidate #1 for the version 1.13.1, as 
> follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be 
> deployed to dist.apache.org [2], which are signed with the key with 
> fingerprint 31D2DD10BFC15A2D [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.13.1-rc1" [5],
> * website pull request listing the new release and adding announcement blog 
> post [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority 
> approval, with at least 3 PMC affirmative votes.
>
> Best,
> Dawid
>
> [1] 
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350058
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.13.1-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1422/
> [5] https://github.com/apache/flink/tree/release-1.13.1-rc1
> [6] https://github.com/apache/flink-web/pull/448


[jira] [Created] (FLINK-22785) Kafka transaction failing when there are two producers

2021-05-26 Thread Ygor Allan de Fraga (Jira)
Ygor Allan de Fraga created FLINK-22785:
---

 Summary: Kafka transaction failing when there are two producers
 Key: FLINK-22785
 URL: https://issues.apache.org/jira/browse/FLINK-22785
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.12.1
Reporter: Ygor Allan de Fraga


Whenever there are two producers writing to the same Kafka topic, a
transaction-related error appears for one of the Flink applications.

 
{code:java}
 org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an 
operation with an old epoch. Either there is a newer producer with the same 
transactionalId, or the producer's transaction has been expired by the 
broker.{code}
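For context, this fencing typically happens when two exactly-once producers end up sharing one `transactional.id`: the broker bumps the epoch for the newer producer and fences the older one. A hedged illustration using only plain `java.util.Properties` (not actual connector code; the helper name, broker address, and id scheme are made up), showing that each producer instance needs a distinct transactional id:

```java
import java.util.Properties;

public class TransactionalIdSketch {
    // Hypothetical helper: builds a producer config with a *distinct*
    // transactional.id per producer instance. If two producers shared the
    // same id, the broker would bump the epoch and fence the older one with
    // the ProducerFencedException shown above.
    static Properties producerConfig(String appName, int subtaskIndex) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("enable.idempotence", "true");          // required for transactions
        props.put("transactional.id", appName + "-" + subtaskIndex);
        return props;
    }

    public static void main(String[] args) {
        // Two applications writing to the same topic get distinct ids,
        // so neither fences the other.
        Properties a = producerConfig("app-a", 0);
        Properties b = producerConfig("app-b", 0);
        System.out.println(a.getProperty("transactional.id"));
        System.out.println(b.getProperty("transactional.id"));
    }
}
```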
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22784) Jepsen tests broken due to change in zNode layout

2021-05-26 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-22784:


 Summary: Jepsen tests broken due to change in zNode layout
 Key: FLINK-22784
 URL: https://issues.apache.org/jira/browse/FLINK-22784
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.14.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.14.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22783) Flink Jira Bot effectively does not apply all rules anymore due to throttling

2021-05-26 Thread Konstantin Knauf (Jira)
Konstantin Knauf created FLINK-22783:


 Summary: Flink Jira Bot effectively does not apply all rules 
anymore due to throttling
 Key: FLINK-22783
 URL: https://issues.apache.org/jira/browse/FLINK-22783
 Project: Flink
  Issue Type: Bug
Reporter: Konstantin Knauf


The Flink Jira Bot is only allowed to update a certain number of tickets per 
run. The way this has been implemented, the bot first considers only the 
first n tickets returned for a given filter. *Afterwards* there is an 
additional filter that filters out tickets that have updated subtasks.

This can lead to a situation where the bot does not make progress anymore, 
because it always considers the same first 10 tickets, all of which have 
updated Sub-Tasks. So no tickets are updated, although the rule would apply 
to some tickets.
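The ordering problem can be sketched with made-up data (none of this is the bot's actual code; the method names and ticket numbers are invented for illustration):

```java
import java.util.*;
import java.util.stream.*;

public class ThrottleOrderSketch {
    // Behavior as described in the ticket: cap to the first n tickets
    // *before* filtering out those with updated sub-tasks.
    static List<Integer> capThenFilter(List<Integer> tickets, Set<Integer> skip, int n) {
        return tickets.stream().limit(n)
                .filter(t -> !skip.contains(t))
                .collect(Collectors.toList());
    }

    // Alternative ordering: filter first, then cap. This always makes
    // progress as long as any ticket qualifies for the rule.
    static List<Integer> filterThenCap(List<Integer> tickets, Set<Integer> skip, int n) {
        return tickets.stream().filter(t -> !skip.contains(t))
                .limit(n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Tickets 1..20; the first ten all have updated sub-tasks.
        List<Integer> tickets = IntStream.rangeClosed(1, 20).boxed().collect(Collectors.toList());
        Set<Integer> skip = IntStream.rangeClosed(1, 10).boxed().collect(Collectors.toSet());
        System.out.println(capThenFilter(tickets, skip, 10));  // [] -- stuck forever
        System.out.println(filterThenCap(tickets, skip, 10));  // [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
    }
}
```

With the cap applied first, the same ten excluded tickets are selected on every run and the result stays empty, which is exactly the "no progress" symptom described above.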

The effect can be seen here:
https://github.com/apache/flink-jira-bot/runs/2674419533?check_suite_focus=true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release 1.13.1, release candidate #1

2021-05-26 Thread Roc Marshal
Hi Dawid,

+1.

Thank you for driving this release.

I checked the following things:

- downloaded and built source code
- started cluster and succeeded in running jobs (Flink applications defined by me)
- checked diff of pom files between 1.13.1-rc1 and 1.13.0

Best, Roc.

On 2021-05-25 21:32:46, "Dawid Wysakowicz" wrote:

Hi everyone,
Please review and vote on the release candidate #1 for the version 1.13.1, as 
follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
 
 
The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be 
deployed to dist.apache.org [2], which are signed with the key with fingerprint 
31D2DD10BFC15A2D [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.13.1-rc1" [5],
* website pull request listing the new release and adding announcement blog 
post [6].
 
The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.
 
Best,
Dawid
 
[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350058
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.13.1-rc1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1422/
[5] https://github.com/apache/flink/tree/release-1.13.1-rc1
[6] https://github.com/apache/flink-web/pull/448

Re: [DISCUSS] Moving to JUnit5

2021-05-26 Thread Joern Kottmann
From a user perspective it would be nice if it were possible to use
JUnit 5 also for integration tests using MiniClusterWithClientResource
[1].

I would be happy to help you with the migration of a few modules, if
you need a hand with it.

Regards,
Jörn

[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/testing/#testing-flink-jobs

On Tue, May 25, 2021 at 10:33 AM Till Rohrmann  wrote:
>
> Thanks for joining the discussion Qingsheng. In general, I am not opposed
> to upgrading our testing library to JUnit 5. Also, the idea of starting
> with individual modules and do it incrementally sounds reasonable.
>
> However, before starting to do it like this, the community should agree
> that we want to replace JUnit 4 with JUnit 5 eventually. This does not mean
> to rewrite all existing tests to use JUnit 5 but at least the default for
> all new tests should be JUnit 5 at some point. Otherwise, I fear that we
> will fragment the project into modules that use JUnit 5 and those that use
> JUnit 4. If this happens then it becomes harder for people to work on the
> code base because they always need to know which testing library to use in
> which module.
>
> Cheers,
> Till
>
> On Tue, May 25, 2021 at 8:53 AM Qingsheng Ren  wrote:
>
> > Hi folks,
> >
> > I’d like to resume the discussion on migrating to JUnit 5. I’ve been
> > working on a connector testing framework and recently have an exploration
> > on JUnit 5. I think some features are very helpful for the development of
> > the testing framework:
> >
> > • Extensions
> >
> > JUnit 5 introduces a new Extension model, which provides a pluggable
> > mechanism for extending test classes, like managing test lifecycles and
> > providing parameters. Also, with the help of extensions, we can get rid of
> > some limitations introduced by class inheritance, like the current TestLogger &
> > KafkaTestBase. In the testing framework this is helpful for handling the
> > lifecycle of the Flink cluster and external systems.
> >
> > • Annotations
> >
> > JUnit 5 provides better support for annotations, working together with
> > extensions. We can simply mark types/fields/methods in the test, then let
> > extensions search for these elements and manage their lifecycle in the test.
> > For example, a test with the annotation @MiniCluster will be provided with a
> > lifecycle-managed MiniCluster automatically.
> >
> > • Parameterized tests
> >
> > JUnit 5 supports more powerful parameterized tests. The testing framework uses
> > this to inject different test environments and external contexts into the
> > same test case, to run the case under different scenarios.
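The extension idea mentioned in the thread can be shown with a toy analogue in plain Java (this is not JUnit's real API; JUnit 5's actual types are `Extension`, `BeforeEachCallback`, and friends, and the class names below are invented). The point is that lifecycle behavior is plugged in via registration rather than class inheritance:

```java
import java.util.ArrayList;
import java.util.List;

public class ExtensionSketch {
    // Toy analogue of JUnit 5's before/after callback extensions. Behavior
    // is attached by registering an object, not by extending a base class
    // (the TestLogger / KafkaTestBase limitation mentioned above).
    interface LifecycleExtension {
        default void beforeEach(String testName) {}
        default void afterEach(String testName) {}
    }

    static final List<String> log = new ArrayList<>();

    // Hypothetical extension managing a cluster's lifecycle around each test.
    static final class MiniClusterExtension implements LifecycleExtension {
        @Override public void beforeEach(String testName) { log.add("start cluster for " + testName); }
        @Override public void afterEach(String testName)  { log.add("stop cluster for " + testName); }
    }

    // A tiny runner that surrounds each test body with all registered extensions.
    static void runTest(String name, Runnable body, List<LifecycleExtension> extensions) {
        extensions.forEach(e -> e.beforeEach(name));
        try {
            body.run();
        } finally {
            extensions.forEach(e -> e.afterEach(name));
        }
    }

    public static void main(String[] args) {
        runTest("wordCountTest", () -> log.add("run wordCountTest"),
                List.of(new MiniClusterExtension()));
        System.out.println(log);
    }
}
```

Because the runner, not a base class, drives the callbacks, several independent extensions (logging, cluster, external systems) can be combined on one test without a shared inheritance chain.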
> >
> > So I think JUnit 5 is quite flexible for developing such a framework or
> > test utility based on it. My suggestion is that we can take connector
> > testing framework as a starting point of using JUnit 5, then we can expand
> > our exploration to more modules, finally dive into the entire project.
> >
> > --
> > Best Regards,
> >
> > Qingsheng Ren
> > Email: renqs...@gmail.com
> > On Dec 1, 2020, 4:54 PM +0800, Khachatryan Roman wrote:
> > > +1 for the migration
> > >
> > > (I agree with Dawid, for me the most important benefit is better support
> > of
> > > parameterized tests).
> > >
> > > Regards,
> > > Roman
> > >
> > >
> > > On Mon, Nov 30, 2020 at 9:42 PM Arvid Heise  wrote:
> > >
> > > > Hi Till,
> > > >
> > > > immediate benefit would be mostly nested tests for a better test
> > structure
> > > > and new parameterized tests for less clutter (often test functionality
> > is
> > > > split into parameterized test and non-parameterized test because of
> > JUnit4
> > > > limitation). Additionally, having Java8 lambdas to perform fine-grain
> > > > exception handling would make all related tests more readable (@Test
> > only
> > > > allows one exception per test method, while in reality we often have
> > more
> > > > exceptions / more fine grain assertions and need to resort to
> > try-catch --
> > > > yuck!). The extension mechanism would also make the mini cluster much
> > > > easier to use: we often have to start the cluster manually because of
> > > > test-specific configuration, which can be easily avoided in JUnit5.
> > > >
> > > > In the medium and long-term, I'd also like to use the modular
> > > > infrastructure and improved parallelization. The former would allow us
> > > > better to implement cross-cutting features like TestLogger (why do we
> > need
> > > > to extend that manually in every test?). The latter is more relevant
> > for
> > > > the next push on CI, which would be especially interesting with e2e
> > being
> > > > available in Java.
> > > >
> > > > On Mon, Nov 30, 2020 at 2:07 PM Dawid Wysakowicz <
> > dwysakow...@apache.org>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Just wanted to express my support for the idea. I did miss certain
> > > > > features of JUnit 5 already, an important one being much better
> > support
> > > > > for parameterized tests.
> > > > >
> > 

[jira] [Created] (FLINK-22782) Remove legacy planner from Chinese docs

2021-05-26 Thread Timo Walther (Jira)
Timo Walther created FLINK-22782:


 Summary: Remove legacy planner from Chinese docs
 Key: FLINK-22782
 URL: https://issues.apache.org/jira/browse/FLINK-22782
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table SQL / Legacy Planner
Reporter: Timo Walther


FLINK-22740 should also be applied to Chinese docs.

It should:
 * Remove references to {{useBlink/LegacyPlanner}}
 * Remove {{DataSet}}
 * Remove mentions of the legacy planner



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22781) Incorrect result for group window aggregate when mini-batch is enabled

2021-05-26 Thread godfrey he (Jira)
godfrey he created FLINK-22781:
--

 Summary: Incorrect result for group window aggregate when 
mini-batch is enabled
 Key: FLINK-22781
 URL: https://issues.apache.org/jira/browse/FLINK-22781
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: godfrey he


We can reproduce this issue through adding the following code to 
{{GroupWindowITCase#testWindowAggregateOnUpsertSource}} method:

{code:java}
tEnv.getConfig.getConfiguration.setBoolean(
  ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, true)
tEnv.getConfig.getConfiguration.set(
  ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(1))
tEnv.getConfig.getConfiguration.setLong(
  ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, 10L)
{code}

The reason is that a group window without any data (the data may have been 
retracted) should not emit any record.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release 1.13.1, release candidate #1

2021-05-26 Thread Matthias Pohl
Hi Dawid,
+1 (non-binding)

Thanks for driving this release. I checked the following things:
- downloaded and built source code
- verified checksums
- double-checked diff of pom files between 1.13.0 and 1.13.1-rc1
- did a visual check of the release blog post
- started cluster and ran jobs (WindowJoin and WordCount); nothing
suspicious found in the logs
- manually verified that the issue in FLINK-22866 is fixed

Best,
Matthias

On Tue, May 25, 2021 at 3:33 PM Dawid Wysakowicz 
wrote:

> Hi everyone,
> Please review and vote on the release candidate #1 for the version 1.13.1,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint 31D2DD10BFC15A2D [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.13.1-rc1" [5],
> * website pull request listing the new release and adding announcement
> blog post [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Best,
> Dawid
>
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350058
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.13.1-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1422/
> [5] https://github.com/apache/flink/tree/release-1.13.1-rc1
> [6] https://github.com/apache/flink-web/pull/448
>


[jira] [Created] (FLINK-22780) Performance regression on 25.05

2021-05-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-22780:


 Summary: Performance regression on 25.05
 Key: FLINK-22780
 URL: https://issues.apache.org/jira/browse/FLINK-22780
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.14.0


Tests such as:
* multiInputMapSink
* multiInputOneIdleMapSink
* readFileSplit

show regressions.

Regression in run for range: 80ad5b3b51-bb597ea-1621977169

It is most probably caused by: 
https://github.com/apache/flink/commit/ee9f9b227a7703c2688924070c4746a70bff3fd8



--
This message was sent by Atlassian Jira
(v8.3.4#803005)