Hi all,

I am here to update the progress of the issues that need to be tracked:

- FLINK-14010 (merged)
- FLINK-14118 (under discussion whether we should backport it to 1.9)
- FLINK-13386 (Daryl reviewed and Dawid will verify the functionality again)
- FLINK-13708 (under review)
- FLINK-14145 (merged to master; needs to be merged to 1.9 if we all agree)

Many thanks to all of you for helping with the fixes and reviews!
Ideally, we can kick off the release vote for the first RC early next week.

Best,
Jark

On Wed, 25 Sep 2019 at 01:25, Till Rohrmann <trohrm...@apache.org> wrote:

> FLINK-14010 has been merged.
>
> Cheers,
> Till
>
> On Tue, Sep 24, 2019 at 11:14 AM Gyula Fóra <gyula.f...@gmail.com> wrote:
>
> > +1 for 1.9.1 soon :)
> >
> > I would also like to include a fix to:
> > FLINK-14145 - getLatestCheckpoint(true) returns wrong checkpoint
> >
> > It has already been merged to master and just needs to be merged to 1.9
> > if we all agree (https://github.com/apache/flink/pull/9756).
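> >
> > For context, a rough sketch of what getLatestCheckpoint(true) is supposed
> > to select (simplified, made-up types, not the actual
> > CompletedCheckpointStore code): prefer the newest non-savepoint entry, and
> > only fall back to the newest entry overall if none exists.
> >
> >     import java.util.List;
> >
> >     // Simplified stand-in; not Flink's CompletedCheckpoint class.
> >     class Snapshot {
> >         final long id;
> >         final boolean isSavepoint;
> >
> >         Snapshot(long id, boolean isSavepoint) {
> >             this.id = id;
> >             this.isSavepoint = isSavepoint;
> >         }
> >     }
> >
> >     class LatestSnapshotDemo {
> >         // completed is ordered oldest to newest; with preferCheckpoint ==
> >         // true, the newest non-savepoint entry should win.
> >         static Snapshot getLatest(List<Snapshot> completed, boolean preferCheckpoint) {
> >             if (preferCheckpoint) {
> >                 for (int i = completed.size() - 1; i >= 0; i--) {
> >                     if (!completed.get(i).isSavepoint) {
> >                         return completed.get(i);
> >                     }
> >                 }
> >             }
> >             return completed.get(completed.size() - 1);
> >         }
> >
> >         public static void main(String[] args) {
> >             List<Snapshot> completed = List.of(
> >                     new Snapshot(1, false),  // older checkpoint
> >                     new Snapshot(2, false),  // newest checkpoint
> >                     new Snapshot(3, true));  // newest entry is a savepoint
> >             // Expected: 2, the newest real checkpoint.
> >             System.out.println(getLatest(completed, true).id);
> >         }
> >     }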
> >
> > Cheers,
> > Gyula
> >
> > On Tue, Sep 24, 2019 at 8:23 AM Terry Wang <zjuwa...@gmail.com> wrote:
> >
> > > +1 for the 1.9.1 release and for Jark being the RM.
> > > Thanks Jark for driving this.
> > >
> > > Best,
> > > Terry Wang
> > >
> > >
> > >
> > > > On Sep 24, 2019, at 2:19 PM, Jark Wu <imj...@gmail.com> wrote:
> > > >
> > > > Thanks Till for reviewing FLINK-14010.
> > > >
> > > > Hi Jeff, I think it makes sense to merge FLINK-13708 before the
> > > > release (PR has been reviewed).
> > > >
> > > > Hi Debasish, FLINK-12501 has already been merged in 1.10.0. I'm fine
> > > > with cherry-picking it to 1.9 if we have a consensus that this issue
> > > > can be viewed as a bug. We can continue the discussion in the JIRA.
> > > >
> > > > Best,
> > > > Jark
> > > >
> > > >
> > > > On Tue, 24 Sep 2019 at 13:39, Dian Fu <dian0511...@gmail.com> wrote:
> > > >
> > > >> +1 for the 1.9.1 release and Jark being the RM. Thanks Jark for
> > > >> kicking off this release and for volunteering.
> > > >>
> > > >> Regards,
> > > >> Dian
> > > >>
> > > >>> On Sep 24, 2019, at 10:45 AM, Kurt Young <ykt...@gmail.com> wrote:
> > > >>>
> > > >>> +1 for the 1.9.1 release and for Jark being the RM.
> > > >>> Thanks Jark for volunteering.
> > > >>>
> > > >>> Best,
> > > >>> Kurt
> > > >>>
> > > >>>
> > > >>> On Mon, Sep 23, 2019 at 9:17 PM Till Rohrmann <trohrm...@apache.org>
> > > >>> wrote:
> > > >>>
> > > >>>> +1 for the 1.9.1 release and for Jark being the RM. I'll help with
> > > >>>> the review of FLINK-14010.
> > > >>>>
> > > >>>> Cheers,
> > > >>>> Till
> > > >>>>
> > > >>>> On Mon, Sep 23, 2019 at 8:32 AM Debasish Ghosh <ghosh.debas...@gmail.com>
> > > >>>> wrote:
> > > >>>>
> > > >>>>> I hope https://issues.apache.org/jira/browse/FLINK-12501 will also
> > > >>>>> be part of 1.9.1 ..
> > > >>>>>
> > > >>>>> regards.
> > > >>>>>
> > > >>>>> On Mon, Sep 23, 2019 at 11:39 AM Jeff Zhang <zjf...@gmail.com>
> > > >>>>> wrote:
> > > >>>>>
> > > >>>>>> FLINK-13708 is also very critical IMO. This would cause an invalid
> > > >>>>>> Flink job (doubled output).
> > > >>>>>>
> > > >>>>>> https://issues.apache.org/jira/browse/FLINK-13708
> > > >>>>>>
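> > > >>>>>> To make the symptom concrete, a minimal sketch of the bug pattern
> > > >>>>>> (illustrative only, not the actual Flink internals): buffered
> > > >>>>>> operations are not cleared after execution, so a second execute()
> > > >>>>>> re-runs the first job's sinks.
> > > >>>>>>
> > > >>>>>>     import java.util.ArrayList;
> > > >>>>>>     import java.util.List;
> > > >>>>>>
> > > >>>>>>     public class DoubledOutputDemo {
> > > >>>>>>         static final List<String> pending = new ArrayList<>();
> > > >>>>>>
> > > >>>>>>         static void execute() {
> > > >>>>>>             pending.forEach(s -> System.out.println("writing to " + s));
> > > >>>>>>             // Missing pending.clear() here is the bug pattern: the
> > > >>>>>>             // next execute() re-runs earlier sinks.
> > > >>>>>>         }
> > > >>>>>>
> > > >>>>>>         public static void main(String[] args) {
> > > >>>>>>             pending.add("sinkA");
> > > >>>>>>             execute();            // writes to sinkA once
> > > >>>>>>             pending.add("sinkB");
> > > >>>>>>             execute();            // writes to sinkA again, plus sinkB
> > > >>>>>>         }
> > > >>>>>>     }
> > > >>>>>>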
> > > >>>>>> On Mon, Sep 23, 2019 at 2:03 PM Jark Wu <imj...@gmail.com> wrote:
> > > >>>>>>
> > > >>>>>>> Hi everyone,
> > > >>>>>>>
> > > >>>>>>> It has already been a month since we released Flink 1.9.0.
> > > >>>>>>> We already have many important bug fixes in the release-1.9 branch
> > > >>>>>>> from which our users can benefit (83 resolved issues).
> > > >>>>>>> Therefore, I propose to create the next bug fix release for
> > > >>>>>>> Flink 1.9.
> > > >>>>>>>
> > > >>>>>>> Most notable fixes are:
> > > >>>>>>>
> > > >>>>>>> - [FLINK-13526] When switching to a non-existing catalog or database
> > > >>>>>>> in the SQL Client, the client crashes.
> > > >>>>>>> - [FLINK-13568] It is not possible to create a table with a "STRING"
> > > >>>>>>> data type via the SQL DDL.
> > > >>>>>>> - [FLINK-13941] Prevent data loss by not cleaning up small part files
> > > >>>>>>> from S3.
> > > >>>>>>> - [FLINK-13490][jdbc] If one column value is null when reading JDBC,
> > > >>>>>>> the following values will all be null (a sketch of the pattern follows
> > > >>>>>>> this list).
> > > >>>>>>> - [FLINK-14107][kinesis] When using event time alignment with the
> > > >>>>>>> Kinesis Consumer, the consumer might deadlock in one corner case.
> > > >>>>>>>
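> > > >>>>>>> As a rough illustration of the FLINK-13490 symptom (a made-up
> > > >>>>>>> sketch of the bug pattern, not the actual JDBCInputFormat code):
> > > >>>>>>> a reused row buffer stops being filled at the first null column,
> > > >>>>>>> so every following field stays null.
> > > >>>>>>>
> > > >>>>>>>     import java.util.Arrays;
> > > >>>>>>>
> > > >>>>>>>     public class NullColumnDemo {
> > > >>>>>>>         public static void main(String[] args) {
> > > >>>>>>>             Object[] row = new Object[3];         // reused per record
> > > >>>>>>>             Object[] columns = {null, "bob", 42}; // column 1 is null
> > > >>>>>>>
> > > >>>>>>>             for (int i = 0; i < columns.length; i++) {
> > > >>>>>>>                 if (columns[i] == null) {
> > > >>>>>>>                     row[i] = null;
> > > >>>>>>>                     break; // bug pattern: stop on the first null
> > > >>>>>>>                 }
> > > >>>>>>>                 row[i] = columns[i];
> > > >>>>>>>             }
> > > >>>>>>>             // Prints [null, null, null] although only column 1 was null.
> > > >>>>>>>             System.out.println(Arrays.toString(row));
> > > >>>>>>>         }
> > > >>>>>>>     }
> > > >>>>>>>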
> > > >>>>>>> Furthermore, I would like the following critical issues to be
> > > >>>>>>> merged before the 1.9.1 release:
> > > >>>>>>>
> > > >>>>>>> - [FLINK-14118] Reduce the unnecessary flushing when there is no data
> > > >>>>>>> available to flush, which can save 20% ~ 40% CPU; a sketch of the idea
> > > >>>>>>> follows this list. (reviewing)
> > > >>>>>>> - [FLINK-13386] Fix a couple of issues with the new dashboard that
> > > >>>>>>> have already been filed. (PR is created, needs review)
> > > >>>>>>> - [FLINK-14010][yarn] The Flink YARN cluster can get into an
> > > >>>>>>> inconsistent state in some cases, where leadership for the JobManager,
> > > >>>>>>> ResourceManager and Dispatcher components is split between two master
> > > >>>>>>> processes. (PR is created, needs review)
> > > >>>>>>>
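> > > >>>>>>> The idea behind the FLINK-14118 change, very roughly (an
> > > >>>>>>> illustrative sketch, not the actual output flusher code): only
> > > >>>>>>> perform the periodic flush when data was actually buffered since
> > > >>>>>>> the last one.
> > > >>>>>>>
> > > >>>>>>>     public class FlushSkipDemo {
> > > >>>>>>>         private volatile boolean dataAvailable = false;
> > > >>>>>>>
> > > >>>>>>>         void onDataBuffered() {
> > > >>>>>>>             dataAvailable = true;
> > > >>>>>>>         }
> > > >>>>>>>
> > > >>>>>>>         void flushIfNeeded() {
> > > >>>>>>>             if (!dataAvailable) {
> > > >>>>>>>                 return; // nothing buffered: skip the flush and
> > > >>>>>>>             }           // save the wasted CPU cycles
> > > >>>>>>>             dataAvailable = false;
> > > >>>>>>>             System.out.println("flushing buffered data");
> > > >>>>>>>         }
> > > >>>>>>>
> > > >>>>>>>         public static void main(String[] args) {
> > > >>>>>>>             FlushSkipDemo f = new FlushSkipDemo();
> > > >>>>>>>             f.flushIfNeeded(); // no data yet: skipped
> > > >>>>>>>             f.onDataBuffered();
> > > >>>>>>>             f.flushIfNeeded(); // flushes once
> > > >>>>>>>             f.flushIfNeeded(); // nothing new: skipped
> > > >>>>>>>         }
> > > >>>>>>>     }
> > > >>>>>>>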
> > > >>>>>>> I would volunteer as release manager and kick off the release
> > > >>>>>>> process once the blocker issues have been merged. What do you think?
> > > >>>>>>>
> > > >>>>>>> If there are any other blocker issues that need to be fixed in
> > > >>>>>>> 1.9.1, please let me know.
> > > >>>>>>>
> > > >>>>>>> Cheers,
> > > >>>>>>> Jark
> > > >>>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> --
> > > >>>>>> Best Regards
> > > >>>>>>
> > > >>>>>> Jeff Zhang
> > > >>>>>>
> > > >>>>>
> > > >>>>>
> > > >>>>> --
> > > >>>>> Debasish Ghosh
> > > >>>>> http://manning.com/ghosh2
> > > >>>>> http://manning.com/ghosh
> > > >>>>>
> > > >>>>> Twttr: @debasishg
> > > >>>>> Blog: http://debasishg.blogspot.com
> > > >>>>> Code: http://github.com/debasishg
> > > >>>>>
> > > >>>>
> > > >>
> > > >>
> > >
> > >
> >
>
