Re: [RESULT] [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-22 Thread Tzu-Li (Gordon) Tai
@Chesnay

No. Users will have to manually build and install PyFlink themselves in
1.9.0:
https://ci.apache.org/projects/flink/flink-docs-release-1.9/flinkDev/building.html#build-pyflink

This is also mentioned in the announcement blog post (to-be-merged):
https://github.com/apache/flink-web/pull/244/files#diff-0cc840a590f5cab2485934278134c9baR291

On Thu, Aug 22, 2019 at 10:03 AM Chesnay Schepler 
wrote:

> Are we also releasing python artifacts for 1.9?
>
> On 21/08/2019 19:23, Tzu-Li (Gordon) Tai wrote:
> > I'm happy to announce that we have unanimously approved this candidate as
> > the 1.9.0 release.
> >
> > There are 12 approving votes, 5 of which are binding:
> > - Yu Li
> > - Zili Chen
> > - Gordon Tai
> > - Stephan Ewen
> > - Jark Wu
> > - Vino Yang
> > - Gary Yao
> > - Bowen Li
> > - Chesnay Schepler
> > - Till Rohrmann
> > - Aljoscha Krettek
> > - David Anderson
> >
> > There are no disapproving votes.
> >
> > Thanks everyone who has contributed to this release!
> >
> > I will wait until tomorrow morning for the artifacts to be available in
> > Maven central before announcing the release in a separate thread.
> >
> > The release blog post will also be merged tomorrow along with the
> official
> > announcement.
> >
> > Cheers,
> > Gordon
> >
> > On Wed, Aug 21, 2019, 5:37 PM David Anderson 
> wrote:
> >
> >> +1 (non-binding)
> >>
> >> I upgraded the flink-training-exercises project.
> >>
> >> I encountered a few rough edges, including problems in the docs, but
> >> nothing serious.
> >>
> >> I had to make some modifications to deal with changes in the Table API:
> >>
> >> ExternalCatalogTable.builder became new ExternalCatalogTableBuilder
> >> TableEnvironment.getTableEnvironment became
> StreamTableEnvironment.create
> >> StreamTableDescriptorValidator.UPDATE_MODE() became
> >> StreamTableDescriptorValidator.UPDATE_MODE
> >> org.apache.flink.table.api.java.Slide moved to
> >> org.apache.flink.table.api.Slide
> >>
> >> I also found myself forced to change a CoProcessFunction to a
> >> KeyedCoProcessFunction (which it should have been).
> >>
> >> I also tried a few complex queries in the SQL console, and wrote a
> >> simple job using the State Processor API. Everything worked.
> >>
> >> David
> >>
> >>
> >> David Anderson | Training Coordinator
> >>
> >> Follow us @VervericaData
> >>
> >> --
> >> Join Flink Forward - The Apache Flink Conference
> >> Stream Processing | Event Driven | Real Time
> >>
> >>
> >> On Wed, Aug 21, 2019 at 1:45 PM Aljoscha Krettek 
> >> wrote:
> >>> +1
> >>>
> >>> I checked the last RC on a GCE cluster and was satisfied with the
> >> testing. The cherry-picked commits didn’t change anything related, so
> I’m
> >> forwarding my vote from there.
> >>> Aljoscha
> >>>
>  On 21. Aug 2019, at 13:34, Chesnay Schepler 
> >> wrote:
>  +1 (binding)
> 
>  On 21/08/2019 08:09, Bowen Li wrote:
> > +1 non-binding
> >
> > - built from source with default profile
> > - manually ran SQL and Table API tests for Flink's metadata
> >> integration
> > with Hive Metastore in local cluster
> > - manually ran SQL tests for batch capability with Blink planner and
> >> Hive
> > integration (source/sink/udf) in local cluster
> >  - file formats include: csv, orc, parquet
> >
> >
> > On Tue, Aug 20, 2019 at 10:23 PM Gary Yao 
> wrote:
> >
> >> +1 (non-binding)
> >>
> >> Reran Jepsen tests 10 times.
> >>
> >> On Wed, Aug 21, 2019 at 5:35 AM vino yang 
> >> wrote:
> >>> +1 (non-binding)
> >>>
> >>> - checkout source code and build successfully
> >>> - started a local cluster and ran some example jobs successfully
> >>> - verified signatures and hashes
> >>> - checked release notes and post
> >>>
> >>> Best,
> >>> Vino
> >>>
> >>> Stephan Ewen wrote on Wed, Aug 21, 2019 at 4:20 AM:
> >>>
>  +1 (binding)
> 
>    - Downloaded the binary release tarball
>    - started a standalone cluster with four nodes
>    - ran some examples through the Web UI
>    - checked the logs
>    - created a project from the Java quickstarts maven archetype
>    - ran a multi-stage DataSet job in batch mode
>    - killed a TaskManager and verified correct restart behavior,
> >> including
>  failover region backtracking
> 
> 
>  I found a few issues, and a common theme here is confusing error
> >>> reporting
>  and logging.
> 
>  (1) When testing batch failover and killing a TaskManager, the job
> >>> reports
>  as the failure cause "org.apache.flink.util.FlinkException: The
> >> assigned
>  slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
>   I think that is a pretty bad error message, as a user I don't
> >> know
> >>> what
>  that means. Some internal bookkeeping thing?
>   You need to know a lot about Flink to understand that this means
> "TaskManager failure".

Re: [RESULT] [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-22 Thread Chesnay Schepler

Are we also releasing python artifacts for 1.9?

On 21/08/2019 19:23, Tzu-Li (Gordon) Tai wrote:

I'm happy to announce that we have unanimously approved this candidate as
the 1.9.0 release.

There are 12 approving votes, 5 of which are binding:
- Yu Li
- Zili Chen
- Gordon Tai
- Stephan Ewen
- Jark Wu
- Vino Yang
- Gary Yao
- Bowen Li
- Chesnay Schepler
- Till Rohrmann
- Aljoscha Krettek
- David Anderson

There are no disapproving votes.

Thanks everyone who has contributed to this release!

I will wait until tomorrow morning for the artifacts to be available in
Maven central before announcing the release in a separate thread.

The release blog post will also be merged tomorrow along with the official
announcement.

Cheers,
Gordon

On Wed, Aug 21, 2019, 5:37 PM David Anderson  wrote:


+1 (non-binding)

I upgraded the flink-training-exercises project.

I encountered a few rough edges, including problems in the docs, but
nothing serious.

I had to make some modifications to deal with changes in the Table API:

ExternalCatalogTable.builder became new ExternalCatalogTableBuilder
TableEnvironment.getTableEnvironment became StreamTableEnvironment.create
StreamTableDescriptorValidator.UPDATE_MODE() became
StreamTableDescriptorValidator.UPDATE_MODE
org.apache.flink.table.api.java.Slide moved to
org.apache.flink.table.api.Slide

I also found myself forced to change a CoProcessFunction to a
KeyedCoProcessFunction (which it should have been).

I also tried a few complex queries in the SQL console, and wrote a
simple job using the State Processor API. Everything worked.

David


David Anderson | Training Coordinator

Follow us @VervericaData

--
Join Flink Forward - The Apache Flink Conference
Stream Processing | Event Driven | Real Time


On Wed, Aug 21, 2019 at 1:45 PM Aljoscha Krettek 
wrote:

+1

I checked the last RC on a GCE cluster and was satisfied with the
testing. The cherry-picked commits didn’t change anything related, so I’m
forwarding my vote from there.

Aljoscha

On 21. Aug 2019, at 13:34, Chesnay Schepler wrote:

+1 (binding)

On 21/08/2019 08:09, Bowen Li wrote:

+1 non-binding

- built from source with default profile
- manually ran SQL and Table API tests for Flink's metadata integration
with Hive Metastore in local cluster
- manually ran SQL tests for batch capability with Blink planner and Hive
integration (source/sink/udf) in local cluster
 - file formats include: csv, orc, parquet


On Tue, Aug 20, 2019 at 10:23 PM Gary Yao wrote:

+1 (non-binding)

Reran Jepsen tests 10 times.

On Wed, Aug 21, 2019 at 5:35 AM vino yang wrote:

+1 (non-binding)

- checkout source code and build successfully
- started a local cluster and ran some example jobs successfully
- verified signatures and hashes
- checked release notes and post

Best,
Vino

Stephan Ewen wrote on Wed, Aug 21, 2019 at 4:20 AM:

+1 (binding)

  - Downloaded the binary release tarball
  - started a standalone cluster with four nodes
  - ran some examples through the Web UI
  - checked the logs
  - created a project from the Java quickstarts maven archetype
  - ran a multi-stage DataSet job in batch mode
  - killed a TaskManager and verified correct restart behavior,
including failover region backtracking


I found a few issues, and a common theme here is confusing error
reporting and logging.

(1) When testing batch failover and killing a TaskManager, the job
reports as the failure cause "org.apache.flink.util.FlinkException: The
assigned slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
 I think that is a pretty bad error message, as a user I don't know
what that means. Some internal bookkeeping thing?
 You need to know a lot about Flink to understand that this means
"TaskManager failure".
 https://issues.apache.org/jira/browse/FLINK-13805
 I would not block the release on this, but think this should get
pretty urgent attention.

(2) The Metric Fetcher floods the log with error messages when a
TaskManager is lost.
  There are many exceptions being logged by the Metrics Fetcher due
to not reaching the TM any more.
  This pollutes the log and drowns out the original exception and
the meaningful logs from the scheduler/execution graph.
  https://issues.apache.org/jira/browse/FLINK-13806
  Again, I would not block the release on this, but think this
should get pretty urgent attention.

(3) If you put "web.submit.enable: false" into the configuration, the
web UI will still display the "SubmitJob" page, but errors will
continuously pop up, stating "Unable to load requested file /jars."
 https://issues.apache.org/jira/browse/FLINK-13799

(4) REST endpoint logs ERROR level messages when selecting the
"Checkpoints" tab for batch jobs. That does not seem correct.
  https://issues.apache.org/jira/browse/FLINK-13795

Best,
Stephan


On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <tzuli...@apache.org>
wrote:

+1

Legal checks:
- verified signatures and hashes
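As an aside to Stephan's issue (3): the flag in question is an ordinary flink-conf.yaml entry. A minimal fragment (sketch; the comment reflects the 1.9.0 behavior reported above):

```yaml
# flink-conf.yaml
# Disable job submission (jar upload/run) through the web UI.
# Note: in 1.9.0 the Submit Job page still renders with this set; see FLINK-13799.
web.submit.enable: false
```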

Re: [RESULT] [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-21 Thread Shaoxuan Wang
Congratulations and thanks all for the great efforts on release 1.9.

I have verified RC#3 with the following items:

- Verified signatures and hashes. (OK)
- Built from source archive. (OK)
- Repository contains all artifacts. (OK)
- Test WordCount on local cluster. (OK)
a. Both streaming and batch
b. Web ui works fine
- Test WordCount on yarn cluster. (OK)
a. Both streaming and batch
b. Web ui works fine
c. Test session mode and non-session mode

So +1 (binding) from my side.

Regards,
Shaoxuan


On Thu, Aug 22, 2019 at 1:23 AM Tzu-Li (Gordon) Tai 
wrote:

> I'm happy to announce that we have unanimously approved this candidate as
> the 1.9.0 release.
>
> There are 12 approving votes, 5 of which are binding:
> - Yu Li
> - Zili Chen
> - Gordon Tai
> - Stephan Ewen
> - Jark Wu
> - Vino Yang
> - Gary Yao
> - Bowen Li
> - Chesnay Schepler
> - Till Rohrmann
> - Aljoscha Krettek
> - David Anderson
>
> There are no disapproving votes.
>
> Thanks everyone who has contributed to this release!
>
> I will wait until tomorrow morning for the artifacts to be available in
> Maven central before announcing the release in a separate thread.
>
> The release blog post will also be merged tomorrow along with the official
> announcement.
>
> Cheers,
> Gordon
>
> On Wed, Aug 21, 2019, 5:37 PM David Anderson  wrote:
>
> > +1 (non-binding)
> >
> > I upgraded the flink-training-exercises project.
> >
> > I encountered a few rough edges, including problems in the docs, but
> > nothing serious.
> >
> > I had to make some modifications to deal with changes in the Table API:
> >
> > ExternalCatalogTable.builder became new ExternalCatalogTableBuilder
> > TableEnvironment.getTableEnvironment became StreamTableEnvironment.create
> > StreamTableDescriptorValidator.UPDATE_MODE() became
> > StreamTableDescriptorValidator.UPDATE_MODE
> > org.apache.flink.table.api.java.Slide moved to
> > org.apache.flink.table.api.Slide
> >
> > I also found myself forced to change a CoProcessFunction to a
> > KeyedCoProcessFunction (which it should have been).
> >
> > I also tried a few complex queries in the SQL console, and wrote a
> > simple job using the State Processor API. Everything worked.
> >
> > David
> >
> >
> > David Anderson | Training Coordinator
> >
> > Follow us @VervericaData
> >
> > --
> > Join Flink Forward - The Apache Flink Conference
> > Stream Processing | Event Driven | Real Time
> >
> >
> > On Wed, Aug 21, 2019 at 1:45 PM Aljoscha Krettek 
> > wrote:
> > >
> > > +1
> > >
> > > I checked the last RC on a GCE cluster and was satisfied with the
> > testing. The cherry-picked commits didn’t change anything related, so I’m
> > forwarding my vote from there.
> > >
> > > Aljoscha
> > >
> > > > On 21. Aug 2019, at 13:34, Chesnay Schepler 
> > wrote:
> > > >
> > > > +1 (binding)
> > > >
> > > > On 21/08/2019 08:09, Bowen Li wrote:
> > > >> +1 non-binding
> > > >>
> > > >> - built from source with default profile
> > > >> - manually ran SQL and Table API tests for Flink's metadata
> > integration
> > > >> with Hive Metastore in local cluster
> > > >> - manually ran SQL tests for batch capability with Blink planner and
> > Hive
> > > >> integration (source/sink/udf) in local cluster
> > > >> - file formats include: csv, orc, parquet
> > > >>
> > > >>
> > > >> On Tue, Aug 20, 2019 at 10:23 PM Gary Yao 
> wrote:
> > > >>
> > > >>> +1 (non-binding)
> > > >>>
> > > >>> Reran Jepsen tests 10 times.
> > > >>>
> > > >>> On Wed, Aug 21, 2019 at 5:35 AM vino yang 
> > wrote:
> > > >>>
> > >  +1 (non-binding)
> > > 
> > >  - checkout source code and build successfully
> > >  - started a local cluster and ran some example jobs successfully
> > >  - verified signatures and hashes
> > >  - checked release notes and post
> > > 
> > >  Best,
> > >  Vino
> > > 
> > >  Stephan Ewen wrote on Wed, Aug 21, 2019 at 4:20 AM:
> > > 
> > > > +1 (binding)
> > > >
> > > >  - Downloaded the binary release tarball
> > > >  - started a standalone cluster with four nodes
> > > >  - ran some examples through the Web UI
> > > >  - checked the logs
> > > >  - created a project from the Java quickstarts maven archetype
> > > >  - ran a multi-stage DataSet job in batch mode
> > > >  - killed a TaskManager and verified correct restart behavior,
> > > >>> including
> > > > failover region backtracking
> > > >
> > > >
> > > > I found a few issues, and a common theme here is confusing error
> > >  reporting
> > > > and logging.
> > > >
> > > > (1) When testing batch failover and killing a TaskManager, the
> job
> > >  reports
> > > > as the failure cause "org.apache.flink.util.FlinkException: The
> > > >>> assigned
> > > > slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> > > > I think that is a pretty bad error message, as a user I don't
> > know
> > >  what
> > > > that means. Some internal bookkeeping thing?
> > > 

[RESULT] [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-21 Thread Tzu-Li (Gordon) Tai
I'm happy to announce that we have unanimously approved this candidate as
the 1.9.0 release.

There are 12 approving votes, 5 of which are binding:
- Yu Li
- Zili Chen
- Gordon Tai
- Stephan Ewen
- Jark Wu
- Vino Yang
- Gary Yao
- Bowen Li
- Chesnay Schepler
- Till Rohrmann
- Aljoscha Krettek
- David Anderson

There are no disapproving votes.

Thanks everyone who has contributed to this release!

I will wait until tomorrow morning for the artifacts to be available in
Maven central before announcing the release in a separate thread.

The release blog post will also be merged tomorrow along with the official
announcement.

Cheers,
Gordon

On Wed, Aug 21, 2019, 5:37 PM David Anderson  wrote:

> +1 (non-binding)
>
> I upgraded the flink-training-exercises project.
>
> I encountered a few rough edges, including problems in the docs, but
> nothing serious.
>
> I had to make some modifications to deal with changes in the Table API:
>
> ExternalCatalogTable.builder became new ExternalCatalogTableBuilder
> TableEnvironment.getTableEnvironment became StreamTableEnvironment.create
> StreamTableDescriptorValidator.UPDATE_MODE() became
> StreamTableDescriptorValidator.UPDATE_MODE
> org.apache.flink.table.api.java.Slide moved to
> org.apache.flink.table.api.Slide
>
> I also found myself forced to change a CoProcessFunction to a
> KeyedCoProcessFunction (which it should have been).
>
> I also tried a few complex queries in the SQL console, and wrote a
> simple job using the State Processor API. Everything worked.
>
> David
>
>
> David Anderson | Training Coordinator
>
> Follow us @VervericaData
>
> --
> Join Flink Forward - The Apache Flink Conference
> Stream Processing | Event Driven | Real Time
>
>
> On Wed, Aug 21, 2019 at 1:45 PM Aljoscha Krettek 
> wrote:
> >
> > +1
> >
> > I checked the last RC on a GCE cluster and was satisfied with the
> testing. The cherry-picked commits didn’t change anything related, so I’m
> forwarding my vote from there.
> >
> > Aljoscha
> >
> > > On 21. Aug 2019, at 13:34, Chesnay Schepler 
> wrote:
> > >
> > > +1 (binding)
> > >
> > > On 21/08/2019 08:09, Bowen Li wrote:
> > >> +1 non-binding
> > >>
> > >> - built from source with default profile
> > >> - manually ran SQL and Table API tests for Flink's metadata
> integration
> > >> with Hive Metastore in local cluster
> > >> - manually ran SQL tests for batch capability with Blink planner and
> Hive
> > >> integration (source/sink/udf) in local cluster
> > >> - file formats include: csv, orc, parquet
> > >>
> > >>
> > >> On Tue, Aug 20, 2019 at 10:23 PM Gary Yao  wrote:
> > >>
> > >>> +1 (non-binding)
> > >>>
> > >>> Reran Jepsen tests 10 times.
> > >>>
> > >>> On Wed, Aug 21, 2019 at 5:35 AM vino yang 
> wrote:
> > >>>
> >  +1 (non-binding)
> > 
> >  - checkout source code and build successfully
> >  - started a local cluster and ran some example jobs successfully
> >  - verified signatures and hashes
> >  - checked release notes and post
> > 
> >  Best,
> >  Vino
> > 
> >  Stephan Ewen wrote on Wed, Aug 21, 2019 at 4:20 AM:
> > 
> > > +1 (binding)
> > >
> > >  - Downloaded the binary release tarball
> > >  - started a standalone cluster with four nodes
> > >  - ran some examples through the Web UI
> > >  - checked the logs
> > >  - created a project from the Java quickstarts maven archetype
> > >  - ran a multi-stage DataSet job in batch mode
> > >  - killed a TaskManager and verified correct restart behavior,
> > >>> including
> > > failover region backtracking
> > >
> > >
> > > I found a few issues, and a common theme here is confusing error
> >  reporting
> > > and logging.
> > >
> > > (1) When testing batch failover and killing a TaskManager, the job
> >  reports
> > > as the failure cause "org.apache.flink.util.FlinkException: The
> > >>> assigned
> > > slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> > > I think that is a pretty bad error message, as a user I don't
> know
> >  what
> > > that means. Some internal bookkeeping thing?
> > > You need to know a lot about Flink to understand that this
> means
> > > "TaskManager failure".
> > > https://issues.apache.org/jira/browse/FLINK-13805
> > > I would not block the release on this, but think this should
> get
> >  pretty
> > > urgent attention.
> > >
> > > (2) The Metric Fetcher floods the log with error messages when a
> > > TaskManager is lost.
> > >  There are many exceptions being logged by the Metrics Fetcher
> due
> > >>> to
> > > not reaching the TM any more.
> > >  This pollutes the log and drowns out the original exception
> and
> > >>> the
> > > meaningful logs from the scheduler/execution graph.
> > >  https://issues.apache.org/jira/browse/FLINK-13806
> > >  Again, I would not block the release on this, but think this
> > >>> should
> > > 

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-21 Thread David Anderson
+1 (non-binding)

I upgraded the flink-training-exercises project.

I encountered a few rough edges, including problems in the docs, but
nothing serious.

I had to make some modifications to deal with changes in the Table API:

ExternalCatalogTable.builder became new ExternalCatalogTableBuilder
TableEnvironment.getTableEnvironment became StreamTableEnvironment.create
StreamTableDescriptorValidator.UPDATE_MODE() became
StreamTableDescriptorValidator.UPDATE_MODE
org.apache.flink.table.api.java.Slide moved to org.apache.flink.table.api.Slide

I also found myself forced to change a CoProcessFunction to a
KeyedCoProcessFunction (which it should have been).
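For anyone making the same upgrade, the renames above map onto code roughly as follows (a sketch against hypothetical 1.8-era job code; the variable, class, and type-parameter names are illustrative, not from the flink-training-exercises project):

```diff
-import org.apache.flink.table.api.java.Slide;
+import org.apache.flink.table.api.Slide;

-// static factory on TableEnvironment is gone in 1.9
-StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
+StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

-// builder() factory replaced by a constructor
-ExternalCatalogTable table = ExternalCatalogTable.builder(connector) ...
+ExternalCatalogTable table = new ExternalCatalogTableBuilder(connector) ...

-// UPDATE_MODE is now a constant rather than a method
-String mode = StreamTableDescriptorValidator.UPDATE_MODE();
+String mode = StreamTableDescriptorValidator.UPDATE_MODE;

-// functions applied after keyBy() should extend the keyed variant
-class EnrichmentJoin extends CoProcessFunction<Left, Right, Joined> { ... }
+class EnrichmentJoin extends KeyedCoProcessFunction<Key, Left, Right, Joined> { ... }
```

Only the last hunk is more than a mechanical rename: KeyedCoProcessFunction adds the extra key type parameter and gives the function access to the current key, which is why the CoProcessFunction mentioned above arguably should have been keyed all along.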

I also tried a few complex queries in the SQL console, and wrote a
simple job using the State Processor API. Everything worked.

David


David Anderson | Training Coordinator

Follow us @VervericaData

--
Join Flink Forward - The Apache Flink Conference
Stream Processing | Event Driven | Real Time


On Wed, Aug 21, 2019 at 1:45 PM Aljoscha Krettek  wrote:
>
> +1
>
> I checked the last RC on a GCE cluster and was satisfied with the testing. 
> The cherry-picked commits didn’t change anything related, so I’m forwarding 
> my vote from there.
>
> Aljoscha
>
> > On 21. Aug 2019, at 13:34, Chesnay Schepler  wrote:
> >
> > +1 (binding)
> >
> > On 21/08/2019 08:09, Bowen Li wrote:
> >> +1 non-binding
> >>
> >> - built from source with default profile
> >> - manually ran SQL and Table API tests for Flink's metadata integration
> >> with Hive Metastore in local cluster
> >> - manually ran SQL tests for batch capability with Blink planner and Hive
> >> integration (source/sink/udf) in local cluster
> >> - file formats include: csv, orc, parquet
> >>
> >>
> >> On Tue, Aug 20, 2019 at 10:23 PM Gary Yao  wrote:
> >>
> >>> +1 (non-binding)
> >>>
> >>> Reran Jepsen tests 10 times.
> >>>
> >>> On Wed, Aug 21, 2019 at 5:35 AM vino yang  wrote:
> >>>
>  +1 (non-binding)
> 
>  - checkout source code and build successfully
>  - started a local cluster and ran some example jobs successfully
>  - verified signatures and hashes
>  - checked release notes and post
> 
>  Best,
>  Vino
> 
>  Stephan Ewen wrote on Wed, Aug 21, 2019 at 4:20 AM:
> 
> > +1 (binding)
> >
> >  - Downloaded the binary release tarball
> >  - started a standalone cluster with four nodes
> >  - ran some examples through the Web UI
> >  - checked the logs
> >  - created a project from the Java quickstarts maven archetype
> >  - ran a multi-stage DataSet job in batch mode
> >  - killed a TaskManager and verified correct restart behavior,
> >>> including
> > failover region backtracking
> >
> >
> > I found a few issues, and a common theme here is confusing error
>  reporting
> > and logging.
> >
> > (1) When testing batch failover and killing a TaskManager, the job
>  reports
> > as the failure cause "org.apache.flink.util.FlinkException: The
> >>> assigned
> > slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> > I think that is a pretty bad error message, as a user I don't know
>  what
> > that means. Some internal bookkeeping thing?
> > You need to know a lot about Flink to understand that this means
> > "TaskManager failure".
> > https://issues.apache.org/jira/browse/FLINK-13805
> > I would not block the release on this, but think this should get
>  pretty
> > urgent attention.
> >
> > (2) The Metric Fetcher floods the log with error messages when a
> > TaskManager is lost.
> >  There are many exceptions being logged by the Metrics Fetcher due
> >>> to
> > not reaching the TM any more.
> >  This pollutes the log and drowns out the original exception and
> >>> the
> > meaningful logs from the scheduler/execution graph.
> >  https://issues.apache.org/jira/browse/FLINK-13806
> >  Again, I would not block the release on this, but think this
> >>> should
> > get pretty urgent attention.
> >
> > (3) If you put "web.submit.enable: false" into the configuration, the
> >>> web
> > UI will still display the "SubmitJob" page, but errors will
> > continuously pop up, stating "Unable to load requested file /jars."
> > https://issues.apache.org/jira/browse/FLINK-13799
> >
> > (4) REST endpoint logs ERROR level messages when selecting the
> > "Checkpoints" tab for batch jobs. That does not seem correct.
> >  https://issues.apache.org/jira/browse/FLINK-13795
> >
> > Best,
> > Stephan
> >
> >
> >
> >
> > On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <
>  tzuli...@apache.org>
> > wrote:
> >
> >> +1
> >>
> >> Legal checks:
> >> - verified signatures and hashes
> >> - New bundled Javascript dependencies for flink-runtime-web are
>  correctly
> >> reflected under licenses-binary and NOTICE file.

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-21 Thread Till Rohrmann
+1 (binding)

- tested Flink's Java quickstarts
- tested Yarn deployment
- tested new web UI
- run quickstart job on Flink cluster
- reviewed release announcement and release notes

Cheers,
Till

On Wed, Aug 21, 2019 at 1:34 PM Chesnay Schepler  wrote:

> +1 (binding)
>
> On 21/08/2019 08:09, Bowen Li wrote:
> > +1 non-binding
> >
> > - built from source with default profile
> > - manually ran SQL and Table API tests for Flink's metadata integration
> > with Hive Metastore in local cluster
> > - manually ran SQL tests for batch capability with Blink planner and Hive
> > integration (source/sink/udf) in local cluster
> >  - file formats include: csv, orc, parquet
> >
> >
> > On Tue, Aug 20, 2019 at 10:23 PM Gary Yao  wrote:
> >
> >> +1 (non-binding)
> >>
> >> Reran Jepsen tests 10 times.
> >>
> >> On Wed, Aug 21, 2019 at 5:35 AM vino yang 
> wrote:
> >>
> >>> +1 (non-binding)
> >>>
> >>> - checkout source code and build successfully
> >>> - started a local cluster and ran some example jobs successfully
> >>> - verified signatures and hashes
> >>> - checked release notes and post
> >>>
> >>> Best,
> >>> Vino
> >>>
> >>> Stephan Ewen wrote on Wed, Aug 21, 2019 at 4:20 AM:
> >>>
>  +1 (binding)
> 
>    - Downloaded the binary release tarball
>    - started a standalone cluster with four nodes
>    - ran some examples through the Web UI
>    - checked the logs
>    - created a project from the Java quickstarts maven archetype
>    - ran a multi-stage DataSet job in batch mode
>    - killed a TaskManager and verified correct restart behavior,
> >> including
>  failover region backtracking
> 
> 
>  I found a few issues, and a common theme here is confusing error
> >>> reporting
>  and logging.
> 
>  (1) When testing batch failover and killing a TaskManager, the job
> >>> reports
>  as the failure cause "org.apache.flink.util.FlinkException: The
> >> assigned
>  slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
>   I think that is a pretty bad error message, as a user I don't
> know
> >>> what
>  that means. Some internal bookkeeping thing?
>   You need to know a lot about Flink to understand that this means
>  "TaskManager failure".
>   https://issues.apache.org/jira/browse/FLINK-13805
>   I would not block the release on this, but think this should get
> >>> pretty
>  urgent attention.
> 
>  (2) The Metric Fetcher floods the log with error messages when a
>  TaskManager is lost.
>    There are many exceptions being logged by the Metrics Fetcher
> due
> >> to
>  not reaching the TM any more.
>    This pollutes the log and drowns out the original exception and
> >> the
>  meaningful logs from the scheduler/execution graph.
>    https://issues.apache.org/jira/browse/FLINK-13806
>    Again, I would not block the release on this, but think this
> >> should
>  get pretty urgent attention.
> 
>  (3) If you put "web.submit.enable: false" into the configuration, the
> >> web
>  UI will still display the "SubmitJob" page, but errors will
>   continuously pop up, stating "Unable to load requested file
> /jars."
>   https://issues.apache.org/jira/browse/FLINK-13799
> 
>  (4) REST endpoint logs ERROR level messages when selecting the
>  "Checkpoints" tab for batch jobs. That does not seem correct.
>    https://issues.apache.org/jira/browse/FLINK-13795
> 
>  Best,
>  Stephan
> 
> 
> 
> 
>  On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <
> >>> tzuli...@apache.org>
>  wrote:
> 
> > +1
> >
> > Legal checks:
> > - verified signatures and hashes
> > - New bundled Javascript dependencies for flink-runtime-web are
> >>> correctly
> > reflected under licenses-binary and NOTICE file.
> > - locally built from source (Scala 2.12, without Hadoop)
> > - No missing artifacts in staging repo
> > - No binaries in source release
> >
> > Functional checks:
> > - Quickstart working (both in IDE + job submission)
> > - Simple State Processor API program that performs offline key schema
> > migration (RocksDB backend). Generated savepoint is valid to restore
>  from.
> > - All E2E tests pass locally
> > - Didn’t notice any issues with the new WebUI
> >
> > Cheers,
> > Gordon
> >
> > On Tue, Aug 20, 2019 at 3:53 AM Zili Chen 
> >>> wrote:
> >> +1 (non-binding)
> >>
> >> - build from source: OK(8u212)
> >> - check local setup tutorial works as expected
> >>
> >> Best,
> >> tison.
> >>
> >>
> >> Yu Li wrote on Tue, Aug 20, 2019 at 8:24 AM:
> >>
> >>> +1 (non-binding)
> >>>
> >>> - checked release notes: OK
> >>> - checked sums and signatures: OK
> >>> - repository appears to contain all expected artifacts
> >>> - source release
> >>>   - contains no binaries: OK

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-21 Thread Aljoscha Krettek
+1

I checked the last RC on a GCE cluster and was satisfied with the testing. The 
cherry-picked commits didn’t change anything related, so I’m forwarding my vote 
from there.

Aljoscha

> On 21. Aug 2019, at 13:34, Chesnay Schepler  wrote:
> 
> +1 (binding)
> 
> On 21/08/2019 08:09, Bowen Li wrote:
>> +1 non-binding
>> 
>> - built from source with default profile
>> - manually ran SQL and Table API tests for Flink's metadata integration
>> with Hive Metastore in local cluster
>> - manually ran SQL tests for batch capability with Blink planner and Hive
>> integration (source/sink/udf) in local cluster
>> - file formats include: csv, orc, parquet
>> 
>> 
>> On Tue, Aug 20, 2019 at 10:23 PM Gary Yao  wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> Reran Jepsen tests 10 times.
>>> 
>>> On Wed, Aug 21, 2019 at 5:35 AM vino yang  wrote:
>>> 
 +1 (non-binding)
 
 - checkout source code and build successfully
 - started a local cluster and ran some example jobs successfully
 - verified signatures and hashes
 - checked release notes and post
 
 Best,
 Vino
 
 Stephan Ewen wrote on Wed, Aug 21, 2019 at 4:20 AM:
 
> +1 (binding)
> 
>  - Downloaded the binary release tarball
>  - started a standalone cluster with four nodes
>  - ran some examples through the Web UI
>  - checked the logs
>  - created a project from the Java quickstarts maven archetype
>  - ran a multi-stage DataSet job in batch mode
>  - killed a TaskManager and verified correct restart behavior,
>>> including
> failover region backtracking
> 
> 
> I found a few issues, and a common theme here is confusing error
 reporting
> and logging.
> 
> (1) When testing batch failover and killing a TaskManager, the job
 reports
> as the failure cause "org.apache.flink.util.FlinkException: The
>>> assigned
> slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> I think that is a pretty bad error message, as a user I don't know
 what
> that means. Some internal bookkeeping thing?
> You need to know a lot about Flink to understand that this means
> "TaskManager failure".
> https://issues.apache.org/jira/browse/FLINK-13805
> I would not block the release on this, but think this should get
 pretty
> urgent attention.
> 
> (2) The Metric Fetcher floods the log with error messages when a
> TaskManager is lost.
>  There are many exceptions being logged by the Metrics Fetcher due
>>> to
> not reaching the TM any more.
>  This pollutes the log and drowns out the original exception and
>>> the
> meaningful logs from the scheduler/execution graph.
>  https://issues.apache.org/jira/browse/FLINK-13806
>  Again, I would not block the release on this, but think this
>>> should
> get pretty urgent attention.
> 
> (3) If you put "web.submit.enable: false" into the configuration, the
>>> web
> UI will still display the "SubmitJob" page, but errors will
> continuously pop up, stating "Unable to load requested file /jars."
> https://issues.apache.org/jira/browse/FLINK-13799
> 
> (4) REST endpoint logs ERROR level messages when selecting the
> "Checkpoints" tab for batch jobs. That does not seem correct.
>  https://issues.apache.org/jira/browse/FLINK-13795
> 
> Best,
> Stephan
> 
> 
> 
> 
> On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <tzuli...@apache.org>
> wrote:
> 
>> +1
>> 
>> Legal checks:
>> - verified signatures and hashes
>> - New bundled Javascript dependencies for flink-runtime-web are correctly
>> reflected under licenses-binary and NOTICE file.
>> - locally built from source (Scala 2.12, without Hadoop)
>> - No missing artifacts in staging repo
>> - No binaries in source release
>> 
>> Functional checks:
>> - Quickstart working (both in IDE + job submission)
>> - Simple State Processor API program that performs offline key schema
>> migration (RocksDB backend). Generated savepoint is valid to restore from.
>> - All E2E tests pass locally
>> - Didn’t notice any issues with the new WebUI
>> 
>> Cheers,
>> Gordon
>> 
>> On Tue, Aug 20, 2019 at 3:53 AM Zili Chen wrote:
>>> +1 (non-binding)
>>> 
>>> - build from source: OK(8u212)
>>> - check local setup tutorial works as expected
>>> 
>>> Best,
>>> tison.
>>> 
>>> 
>>> Yu Li  wrote on Tue, Aug 20, 2019, 8:24 AM:
>>> 
 +1 (non-binding)
 
 - checked release notes: OK
 - checked sums and signatures: OK
 - repository appears to contain all expected artifacts
 - source release
  - contains no binaries: OK
  - contains no 1.9-SNAPSHOT references: OK
  - build from source: OK (8u102)
 - binary 

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-21 Thread Chesnay Schepler

+1 (binding)

On 21/08/2019 08:09, Bowen Li wrote:

+1 non-binding

- built from source with default profile
- manually ran SQL and Table API tests for Flink's metadata integration
with Hive Metastore in local cluster
- manually ran SQL tests for batch capability with Blink planner and Hive
integration (source/sink/udf) in local cluster
 - file formats include: csv, orc, parquet


On Tue, Aug 20, 2019 at 10:23 PM Gary Yao  wrote:


+1 (non-binding)

Reran Jepsen tests 10 times.

On Wed, Aug 21, 2019 at 5:35 AM vino yang  wrote:


+1 (non-binding)

- checkout source code and build successfully
- started a local cluster and ran some example jobs successfully
- verified signatures and hashes
- checked release notes and post

Best,
Vino

Stephan Ewen  wrote on Wed, Aug 21, 2019, 4:20 AM:


+1 (binding)

  - Downloaded the binary release tarball
  - started a standalone cluster with four nodes
  - ran some examples through the Web UI
  - checked the logs
  - created a project from the Java quickstarts maven archetype
  - ran a multi-stage DataSet job in batch mode
  - killed a TaskManager and verified correct restart behavior, including
failover region backtracking


I found a few issues, and a common theme here is confusing error reporting
and logging.

(1) When testing batch failover and killing a TaskManager, the job reports
as the failure cause "org.apache.flink.util.FlinkException: The assigned
slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
 I think that is a pretty bad error message, as a user I don't know what
that means. Some internal bookkeeping thing?
 You need to know a lot about Flink to understand that this means
"TaskManager failure".
 https://issues.apache.org/jira/browse/FLINK-13805
 I would not block the release on this, but think this should get pretty
urgent attention.

(2) The Metric Fetcher floods the log with error messages when a
TaskManager is lost.
  There are many exceptions being logged by the Metrics Fetcher due to
not reaching the TM any more.
  This pollutes the log and drowns out the original exception and the
meaningful logs from the scheduler/execution graph.
  https://issues.apache.org/jira/browse/FLINK-13806
  Again, I would not block the release on this, but think this should
get pretty urgent attention.

(3) If you put "web.submit.enable: false" into the configuration, the web
UI will still display the "SubmitJob" page, but errors will
 continuously pop up, stating "Unable to load requested file /jars."
 https://issues.apache.org/jira/browse/FLINK-13799

(4) REST endpoint logs ERROR level messages when selecting the
"Checkpoints" tab for batch jobs. That does not seem correct.
  https://issues.apache.org/jira/browse/FLINK-13795

Best,
Stephan




On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <tzuli...@apache.org>
wrote:


+1

Legal checks:
- verified signatures and hashes
- New bundled Javascript dependencies for flink-runtime-web are correctly
reflected under licenses-binary and NOTICE file.
- locally built from source (Scala 2.12, without Hadoop)
- No missing artifacts in staging repo
- No binaries in source release

Functional checks:
- Quickstart working (both in IDE + job submission)
- Simple State Processor API program that performs offline key schema
migration (RocksDB backend). Generated savepoint is valid to restore from.

- All E2E tests pass locally
- Didn’t notice any issues with the new WebUI

Cheers,
Gordon

On Tue, Aug 20, 2019 at 3:53 AM Zili Chen wrote:

+1 (non-binding)

- build from source: OK(8u212)
- check local setup tutorial works as expected

Best,
tison.


Yu Li  wrote on Tue, Aug 20, 2019, 8:24 AM:


+1 (non-binding)

- checked release notes: OK
- checked sums and signatures: OK
- repository appears to contain all expected artifacts
- source release
  - contains no binaries: OK
  - contains no 1.9-SNAPSHOT references: OK
  - build from source: OK (8u102)
- binary release
  - no examples appear to be missing
  - started a cluster; WebUI reachable, example ran successfully

- checked README.md file and found nothing unexpected

Best Regards,
Yu


On Tue, 20 Aug 2019 at 01:16, Tzu-Li (Gordon) Tai <tzuli...@apache.org>
wrote:


Hi all,

Release candidate #3 for Apache Flink 1.9.0 is now ready for your review.

Please review and vote on release candidate #3 for version 1.9.0, as
follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be
deployed to dist.apache.org [2], which are signed with the key with
fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag “release-1.9.0-rc3” [5].
* pull requests for the release note documentation [6] and


Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-21 Thread Bowen Li
+1 non-binding

- built from source with default profile
- manually ran SQL and Table API tests for Flink's metadata integration
with Hive Metastore in local cluster
- manually ran SQL tests for batch capability with Blink planner and Hive
integration (source/sink/udf) in local cluster
- file formats include: csv, orc, parquet


On Tue, Aug 20, 2019 at 10:23 PM Gary Yao  wrote:

> +1 (non-binding)
>
> Reran Jepsen tests 10 times.
>
> On Wed, Aug 21, 2019 at 5:35 AM vino yang  wrote:
>
> > +1 (non-binding)
> >
> > - checkout source code and build successfully
> > - started a local cluster and ran some example jobs successfully
> > - verified signatures and hashes
> > - checked release notes and post
> >
> > Best,
> > Vino
> >
> > Stephan Ewen  wrote on Wed, Aug 21, 2019, 4:20 AM:
> >
> > > +1 (binding)
> > >
> > >  - Downloaded the binary release tarball
> > >  - started a standalone cluster with four nodes
> > >  - ran some examples through the Web UI
> > >  - checked the logs
> > >  - created a project from the Java quickstarts maven archetype
> > >  - ran a multi-stage DataSet job in batch mode
> > >  - killed a TaskManager and verified correct restart behavior,
> > > including failover region backtracking
> > >
> > >
> > > I found a few issues, and a common theme here is confusing error
> > > reporting and logging.
> > >
> > > (1) When testing batch failover and killing a TaskManager, the job
> > > reports as the failure cause "org.apache.flink.util.FlinkException: The
> > > assigned slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> > > I think that is a pretty bad error message, as a user I don't know
> > > what that means. Some internal bookkeeping thing?
> > > You need to know a lot about Flink to understand that this means
> > > "TaskManager failure".
> > > https://issues.apache.org/jira/browse/FLINK-13805
> > > I would not block the release on this, but think this should get
> > > pretty urgent attention.
> > >
> > > (2) The Metric Fetcher floods the log with error messages when a
> > > TaskManager is lost.
> > >  There are many exceptions being logged by the Metrics Fetcher due
> > > to not reaching the TM any more.
> > >  This pollutes the log and drowns out the original exception and
> > > the meaningful logs from the scheduler/execution graph.
> > >  https://issues.apache.org/jira/browse/FLINK-13806
> > >  Again, I would not block the release on this, but think this
> > > should get pretty urgent attention.
> > >
> > > (3) If you put "web.submit.enable: false" into the configuration, the
> > > web UI will still display the "SubmitJob" page, but errors will
> > > continuously pop up, stating "Unable to load requested file /jars."
> > > https://issues.apache.org/jira/browse/FLINK-13799
> > >
> > > (4) REST endpoint logs ERROR level messages when selecting the
> > > "Checkpoints" tab for batch jobs. That does not seem correct.
> > >  https://issues.apache.org/jira/browse/FLINK-13795
> > >
> > > Best,
> > > Stephan
> > >
> > >
> > >
> > >
> > > On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <tzuli...@apache.org>
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Legal checks:
> > > > - verified signatures and hashes
> > > > - New bundled Javascript dependencies for flink-runtime-web are
> > > > correctly reflected under licenses-binary and NOTICE file.
> > > > - locally built from source (Scala 2.12, without Hadoop)
> > > > - No missing artifacts in staging repo
> > > > - No binaries in source release
> > > >
> > > > Functional checks:
> > > > - Quickstart working (both in IDE + job submission)
> > > > - Simple State Processor API program that performs offline key schema
> > > > migration (RocksDB backend). Generated savepoint is valid to restore
> > > > from.
> > > > - All E2E tests pass locally
> > > > - Didn’t notice any issues with the new WebUI
> > > >
> > > > Cheers,
> > > > Gordon
> > > >
> > > > On Tue, Aug 20, 2019 at 3:53 AM Zili Chen wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > - build from source: OK(8u212)
> > > > > - check local setup tutorial works as expected
> > > > >
> > > > > Best,
> > > > > tison.
> > > > >
> > > > >
> > > > > Yu Li  wrote on Tue, Aug 20, 2019, 8:24 AM:
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > - checked release notes: OK
> > > > > > - checked sums and signatures: OK
> > > > > > - repository appears to contain all expected artifacts
> > > > > > - source release
> > > > > >  - contains no binaries: OK
> > > > > >  - contains no 1.9-SNAPSHOT references: OK
> > > > > >  - build from source: OK (8u102)
> > > > > > - binary release
> > > > > >  - no examples appear to be missing
> > > > > >  - started a cluster; WebUI reachable, example ran successfully
> > > > > > - checked README.md file and found nothing unexpected
> > > > > >
> > > > > > Best Regards,
> > > > > > Yu
> > > > > >
> > > > > >
> > > > > > On Tue, 20 Aug 2019 at 01:16, 

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-20 Thread Gary Yao
+1 (non-binding)

Reran Jepsen tests 10 times.

On Wed, Aug 21, 2019 at 5:35 AM vino yang  wrote:

> +1 (non-binding)
>
> - checkout source code and build successfully
> - started a local cluster and ran some example jobs successfully
> - verified signatures and hashes
> - checked release notes and post
>
> Best,
> Vino
>
> Stephan Ewen  wrote on Wed, Aug 21, 2019, 4:20 AM:
>
> > +1 (binding)
> >
> >  - Downloaded the binary release tarball
> >  - started a standalone cluster with four nodes
> >  - ran some examples through the Web UI
> >  - checked the logs
> >  - created a project from the Java quickstarts maven archetype
> >  - ran a multi-stage DataSet job in batch mode
> >  - killed a TaskManager and verified correct restart behavior, including
> > failover region backtracking
> >
> >
> > I found a few issues, and a common theme here is confusing error
> > reporting and logging.
> >
> > (1) When testing batch failover and killing a TaskManager, the job
> > reports as the failure cause "org.apache.flink.util.FlinkException: The
> > assigned
> > slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> > I think that is a pretty bad error message, as a user I don't know
> > what that means. Some internal bookkeeping thing?
> > You need to know a lot about Flink to understand that this means
> > "TaskManager failure".
> > https://issues.apache.org/jira/browse/FLINK-13805
> > I would not block the release on this, but think this should get
> > pretty urgent attention.
> >
> > (2) The Metric Fetcher floods the log with error messages when a
> > TaskManager is lost.
> >  There are many exceptions being logged by the Metrics Fetcher due to
> > not reaching the TM any more.
> >  This pollutes the log and drowns out the original exception and the
> > meaningful logs from the scheduler/execution graph.
> >  https://issues.apache.org/jira/browse/FLINK-13806
> >  Again, I would not block the release on this, but think this should
> > get pretty urgent attention.
> >
> > (3) If you put "web.submit.enable: false" into the configuration, the web
> > UI will still display the "SubmitJob" page, but errors will
> > continuously pop up, stating "Unable to load requested file /jars."
> > https://issues.apache.org/jira/browse/FLINK-13799
> >
> > (4) REST endpoint logs ERROR level messages when selecting the
> > "Checkpoints" tab for batch jobs. That does not seem correct.
> >  https://issues.apache.org/jira/browse/FLINK-13795
> >
> > Best,
> > Stephan
> >
> >
> >
> >
> > On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai <tzuli...@apache.org>
> > wrote:
> >
> > > +1
> > >
> > > Legal checks:
> > > - verified signatures and hashes
> > > - New bundled Javascript dependencies for flink-runtime-web are
> > > correctly reflected under licenses-binary and NOTICE file.
> > > - locally built from source (Scala 2.12, without Hadoop)
> > > - No missing artifacts in staging repo
> > > - No binaries in source release
> > >
> > > Functional checks:
> > > - Quickstart working (both in IDE + job submission)
> > > - Simple State Processor API program that performs offline key schema
> > > migration (RocksDB backend). Generated savepoint is valid to restore
> > > from.
> > > - All E2E tests pass locally
> > > - Didn’t notice any issues with the new WebUI
> > >
> > > Cheers,
> > > Gordon
> > >
> > > On Tue, Aug 20, 2019 at 3:53 AM Zili Chen wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > - build from source: OK(8u212)
> > > > - check local setup tutorial works as expected
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > Yu Li  wrote on Tue, Aug 20, 2019, 8:24 AM:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > - checked release notes: OK
> > > > > - checked sums and signatures: OK
> > > > > - repository appears to contain all expected artifacts
> > > > > - source release
> > > > >  - contains no binaries: OK
> > > > >  - contains no 1.9-SNAPSHOT references: OK
> > > > >  - build from source: OK (8u102)
> > > > > - binary release
> > > > >  - no examples appear to be missing
> > > > >  - started a cluster; WebUI reachable, example ran successfully
> > > > > - checked README.md file and found nothing unexpected
> > > > >
> > > > > Best Regards,
> > > > > Yu
> > > > >
> > > > >
> > > > > On Tue, 20 Aug 2019 at 01:16, Tzu-Li (Gordon) Tai <tzuli...@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Release candidate #3 for Apache Flink 1.9.0 is now ready for your
> > > > > > review.
> > > > > >
> > > > > > Please review and vote on release candidate #3 for version 1.9.0,
> > > > > > as follows:
> > > > > > [ ] +1, Approve the release
> > > > > > [ ] -1, Do not approve the release (please provide specific
> > > > > > comments)
> > > > > >
> > > > > > The complete staging area is available for your review, which
> > > > > > includes:
> > > > > > * JIRA release notes [1],
> > > > > > * the official Apache source release and binary convenience

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-20 Thread vino yang
+1 (non-binding)

- checkout source code and build successfully
- started a local cluster and ran some example jobs successfully
- verified signatures and hashes
- checked release notes and post

Best,
Vino

Stephan Ewen  wrote on Wed, Aug 21, 2019, 4:20 AM:

> +1 (binding)
>
>  - Downloaded the binary release tarball
>  - started a standalone cluster with four nodes
>  - ran some examples through the Web UI
>  - checked the logs
>  - created a project from the Java quickstarts maven archetype
>  - ran a multi-stage DataSet job in batch mode
>  - killed a TaskManager and verified correct restart behavior, including
> failover region backtracking
>
>
> I found a few issues, and a common theme here is confusing error reporting
> and logging.
>
> (1) When testing batch failover and killing a TaskManager, the job reports
> as the failure cause "org.apache.flink.util.FlinkException: The assigned
> slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> I think that is a pretty bad error message, as a user I don't know what
> that means. Some internal bookkeeping thing?
> You need to know a lot about Flink to understand that this means
> "TaskManager failure".
> https://issues.apache.org/jira/browse/FLINK-13805
> I would not block the release on this, but think this should get pretty
> urgent attention.
>
> (2) The Metric Fetcher floods the log with error messages when a
> TaskManager is lost.
>  There are many exceptions being logged by the Metrics Fetcher due to
> not reaching the TM any more.
>  This pollutes the log and drowns out the original exception and the
> meaningful logs from the scheduler/execution graph.
>  https://issues.apache.org/jira/browse/FLINK-13806
>  Again, I would not block the release on this, but think this should
> get pretty urgent attention.
>
> (3) If you put "web.submit.enable: false" into the configuration, the web
> UI will still display the "SubmitJob" page, but errors will
> continuously pop up, stating "Unable to load requested file /jars."
> https://issues.apache.org/jira/browse/FLINK-13799
>
> (4) REST endpoint logs ERROR level messages when selecting the
> "Checkpoints" tab for batch jobs. That does not seem correct.
>  https://issues.apache.org/jira/browse/FLINK-13795
>
> Best,
> Stephan
>
>
>
>
> On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai 
> wrote:
>
> > +1
> >
> > Legal checks:
> > - verified signatures and hashes
> > - New bundled Javascript dependencies for flink-runtime-web are correctly
> > reflected under licenses-binary and NOTICE file.
> > - locally built from source (Scala 2.12, without Hadoop)
> > - No missing artifacts in staging repo
> > - No binaries in source release
> >
> > Functional checks:
> > - Quickstart working (both in IDE + job submission)
> > - Simple State Processor API program that performs offline key schema
> > migration (RocksDB backend). Generated savepoint is valid to restore
> from.
> > - All E2E tests pass locally
> > - Didn’t notice any issues with the new WebUI
> >
> > Cheers,
> > Gordon
> >
> > On Tue, Aug 20, 2019 at 3:53 AM Zili Chen  wrote:
> >
> > > +1 (non-binding)
> > >
> > > - build from source: OK(8u212)
> > > - check local setup tutorial works as expected
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > Yu Li  wrote on Tue, Aug 20, 2019, 8:24 AM:
> > >
> > > > +1 (non-binding)
> > > >
> > > > - checked release notes: OK
> > > > - checked sums and signatures: OK
> > > > - repository appears to contain all expected artifacts
> > > > - source release
> > > >  - contains no binaries: OK
> > > >  - contains no 1.9-SNAPSHOT references: OK
> > > >  - build from source: OK (8u102)
> > > > - binary release
> > > >  - no examples appear to be missing
> > > >  - started a cluster; WebUI reachable, example ran successfully
> > > > - checked README.md file and found nothing unexpected
> > > >
> > > > Best Regards,
> > > > Yu
> > > >
> > > >
> > > > On Tue, 20 Aug 2019 at 01:16, Tzu-Li (Gordon) Tai <tzuli...@apache.org>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Release candidate #3 for Apache Flink 1.9.0 is now ready for your
> > > > > review.
> > > > >
> > > > > Please review and vote on release candidate #3 for version 1.9.0,
> > > > > as follows:
> > > > > [ ] +1, Approve the release
> > > > > [ ] -1, Do not approve the release (please provide specific
> > > > > comments)
> > > > >
> > > > > The complete staging area is available for your review, which
> > > > > includes:
> > > > > * JIRA release notes [1],
> > > > > * the official Apache source release and binary convenience
> > > > > releases to be
> > > > > deployed to dist.apache.org [2], which are signed with the key with
> > > > > fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
> > > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > > * source code tag “release-1.9.0-rc3” [5].
> > > > > * pull requests for the release note documentation [6] and
> > > > > announcement
> > > > > blog 

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-20 Thread Jark Wu
+1 (non-binding)

- build the source release with Scala 2.12 and Scala 2.11 successfully
- checked/verified signatures and hashes
- checked that all POM files point to the same version
- started a cluster, ran a SQL query that temporal-joins a kafka source with
a mysql jdbc table and writes the results back to kafka.
  Used DDL (with timestamp type) to create the source and sinks. Looks
good. No errors in the logs.
- started a cluster, ran a SQL query that reads from a kafka source, applies
a group aggregation, and writes into a mysql jdbc table.
  Used DDL (with timestamp type) to create the source and sink. Looks good
too. No errors in the logs.
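The POM-version check in the list above is easy to script. A minimal sketch (the Maven POM namespace URI is standard; the helper name and the fallback to the inherited `<parent><version>` are illustrative assumptions):

```python
import pathlib
import xml.etree.ElementTree as ET

# Namespace declared by Maven POM files.
NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def pom_versions(repo_root):
    """Collect the version declared by every pom.xml under repo_root.

    Modules that omit <version> inherit it from <parent>, so fall back
    to <parent><version> in that case.
    """
    versions = set()
    for pom in pathlib.Path(repo_root).rglob("pom.xml"):
        root = ET.parse(pom).getroot()
        v = root.find("m:version", NS)
        if v is None:
            v = root.find("m:parent/m:version", NS)
        if v is not None:
            versions.add(v.text.strip())
    return versions
```

Run against a release source tree, the check passes when the returned set contains exactly one version string.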

Cheers,
Jark

On Wed, 21 Aug 2019 at 04:20, Stephan Ewen  wrote:

> +1 (binding)
>
>  - Downloaded the binary release tarball
>  - started a standalone cluster with four nodes
>  - ran some examples through the Web UI
>  - checked the logs
>  - created a project from the Java quickstarts maven archetype
>  - ran a multi-stage DataSet job in batch mode
>  - killed a TaskManager and verified correct restart behavior, including
> failover region backtracking
>
>
> I found a few issues, and a common theme here is confusing error reporting
> and logging.
>
> (1) When testing batch failover and killing a TaskManager, the job reports
> as the failure cause "org.apache.flink.util.FlinkException: The assigned
> slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
> I think that is a pretty bad error message, as a user I don't know what
> that means. Some internal bookkeeping thing?
> You need to know a lot about Flink to understand that this means
> "TaskManager failure".
> https://issues.apache.org/jira/browse/FLINK-13805
> I would not block the release on this, but think this should get pretty
> urgent attention.
>
> (2) The Metric Fetcher floods the log with error messages when a
> TaskManager is lost.
>  There are many exceptions being logged by the Metrics Fetcher due to
> not reaching the TM any more.
>  This pollutes the log and drowns out the original exception and the
> meaningful logs from the scheduler/execution graph.
>  https://issues.apache.org/jira/browse/FLINK-13806
>  Again, I would not block the release on this, but think this should
> get pretty urgent attention.
>
> (3) If you put "web.submit.enable: false" into the configuration, the web
> UI will still display the "SubmitJob" page, but errors will
> continuously pop up, stating "Unable to load requested file /jars."
> https://issues.apache.org/jira/browse/FLINK-13799
>
> (4) REST endpoint logs ERROR level messages when selecting the
> "Checkpoints" tab for batch jobs. That does not seem correct.
>  https://issues.apache.org/jira/browse/FLINK-13795
>
> Best,
> Stephan
>
>
>
>
> On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai 
> wrote:
>
> > +1
> >
> > Legal checks:
> > - verified signatures and hashes
> > - New bundled Javascript dependencies for flink-runtime-web are correctly
> > reflected under licenses-binary and NOTICE file.
> > - locally built from source (Scala 2.12, without Hadoop)
> > - No missing artifacts in staging repo
> > - No binaries in source release
> >
> > Functional checks:
> > - Quickstart working (both in IDE + job submission)
> > - Simple State Processor API program that performs offline key schema
> > migration (RocksDB backend). Generated savepoint is valid to restore
> from.
> > - All E2E tests pass locally
> > - Didn’t notice any issues with the new WebUI
> >
> > Cheers,
> > Gordon
> >
> > On Tue, Aug 20, 2019 at 3:53 AM Zili Chen  wrote:
> >
> > > +1 (non-binding)
> > >
> > > - build from source: OK(8u212)
> > > - check local setup tutorial works as expected
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > Yu Li  wrote on Tue, Aug 20, 2019, 8:24 AM:
> > >
> > > > +1 (non-binding)
> > > >
> > > > - checked release notes: OK
> > > > - checked sums and signatures: OK
> > > > - repository appears to contain all expected artifacts
> > > > - source release
> > > >  - contains no binaries: OK
> > > >  - contains no 1.9-SNAPSHOT references: OK
> > > >  - build from source: OK (8u102)
> > > > - binary release
> > > >  - no examples appear to be missing
> > > >  - started a cluster; WebUI reachable, example ran successfully
> > > > - checked README.md file and found nothing unexpected
> > > >
> > > > Best Regards,
> > > > Yu
> > > >
> > > >
> > > > On Tue, 20 Aug 2019 at 01:16, Tzu-Li (Gordon) Tai <tzuli...@apache.org>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Release candidate #3 for Apache Flink 1.9.0 is now ready for your
> > > > > review.
> > > > >
> > > > > Please review and vote on release candidate #3 for version 1.9.0,
> > > > > as follows:
> > > > > [ ] +1, Approve the release
> > > > > [ ] -1, Do not approve the release (please provide specific
> > > > > comments)
> > > > >
> > > > > The complete staging area is available for your review, which
> > > > > includes:
> > > > > * JIRA release notes [1],
> > > > > * the 

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-20 Thread Stephan Ewen
+1 (binding)

 - Downloaded the binary release tarball
 - started a standalone cluster with four nodes
 - ran some examples through the Web UI
 - checked the logs
 - created a project from the Java quickstarts maven archetype
 - ran a multi-stage DataSet job in batch mode
 - killed a TaskManager and verified correct restart behavior, including
failover region backtracking


I found a few issues, and a common theme here is confusing error reporting
and logging.

(1) When testing batch failover and killing a TaskManager, the job reports
as the failure cause "org.apache.flink.util.FlinkException: The assigned
slot 6d0e469d55a2630871f43ad0f89c786c_0 was removed."
I think that is a pretty bad error message, as a user I don't know what
that means. Some internal bookkeeping thing?
You need to know a lot about Flink to understand that this means
"TaskManager failure".
https://issues.apache.org/jira/browse/FLINK-13805
I would not block the release on this, but think this should get pretty
urgent attention.

(2) The Metric Fetcher floods the log with error messages when a
TaskManager is lost.
 There are many exceptions being logged by the Metrics Fetcher due to
not reaching the TM any more.
 This pollutes the log and drowns out the original exception and the
meaningful logs from the scheduler/execution graph.
 https://issues.apache.org/jira/browse/FLINK-13806
 Again, I would not block the release on this, but think this should
get pretty urgent attention.

(3) If you put "web.submit.enable: false" into the configuration, the web
UI will still display the "SubmitJob" page, but errors will
continuously pop up, stating "Unable to load requested file /jars."
https://issues.apache.org/jira/browse/FLINK-13799
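For context, the switch quoted in (3) is an ordinary entry in Flink's `flink-conf.yaml`; disabling web submission looks like the fragment below (a config sketch only — the bug is that the UI still shows the page despite this setting):

```yaml
# flink-conf.yaml — disable job submission through the web UI / REST endpoint
web.submit.enable: false
```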

(4) REST endpoint logs ERROR level messages when selecting the
"Checkpoints" tab for batch jobs. That does not seem correct.
 https://issues.apache.org/jira/browse/FLINK-13795

Best,
Stephan




On Tue, Aug 20, 2019 at 11:32 AM Tzu-Li (Gordon) Tai 
wrote:

> +1
>
> Legal checks:
> - verified signatures and hashes
> - New bundled Javascript dependencies for flink-runtime-web are correctly
> reflected under licenses-binary and NOTICE file.
> - locally built from source (Scala 2.12, without Hadoop)
> - No missing artifacts in staging repo
> - No binaries in source release
>
> Functional checks:
> - Quickstart working (both in IDE + job submission)
> - Simple State Processor API program that performs offline key schema
> migration (RocksDB backend). Generated savepoint is valid to restore from.
> - All E2E tests pass locally
> - Didn’t notice any issues with the new WebUI
>
> Cheers,
> Gordon
>
> On Tue, Aug 20, 2019 at 3:53 AM Zili Chen  wrote:
>
> > +1 (non-binding)
> >
> > - build from source: OK(8u212)
> > - check local setup tutorial works as expected
> >
> > Best,
> > tison.
> >
> >
> > Yu Li  wrote on Tue, Aug 20, 2019, 8:24 AM:
> >
> > > +1 (non-binding)
> > >
> > > - checked release notes: OK
> > > - checked sums and signatures: OK
> > > - repository appears to contain all expected artifacts
> > > - source release
> > >  - contains no binaries: OK
> > >  - contains no 1.9-SNAPSHOT references: OK
> > >  - build from source: OK (8u102)
> > > - binary release
> > >  - no examples appear to be missing
> > >  - started a cluster; WebUI reachable, example ran successfully
> > > - checked README.md file and found nothing unexpected
> > >
> > > Best Regards,
> > > Yu
> > >
> > >
> > > On Tue, 20 Aug 2019 at 01:16, Tzu-Li (Gordon) Tai
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > Release candidate #3 for Apache Flink 1.9.0 is now ready for your
> > review.
> > > >
> > > > Please review and vote on release candidate #3 for version 1.9.0, as
> > > > follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release and binary convenience releases
> > > > to be
> > > > deployed to dist.apache.org [2], which are signed with the key with
> > > > fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag “release-1.9.0-rc3” [5].
> > > > * pull requests for the release note documentation [6] and
> > > > announcement
> > > > blog post [7].
> > > >
> > > > As proposed in the RC2 vote thread [8], for RC3 we are only
> > > > cherry-picking minimal specific changes on top of RC2 to be able to
> > > > reasonably carry over previous testing efforts and effectively require
> > > > a shorter voting time.
> > > >
> > > > The only extra commits in this RC, compared to RC2, are the
> > > > following:
> > > > - c2d9aeac [FLINK-13231] [pubsub] Replace Max outstanding
> > > > acknowledgement ids limit with a FlinkConnectorRateLimiter
> > > > - d8941711 

Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-20 Thread Tzu-Li (Gordon) Tai
+1

Legal checks:
- verified signatures and hashes
- New bundled Javascript dependencies for flink-runtime-web are correctly
reflected under licenses-binary and NOTICE file.
- locally built from source (Scala 2.12, without Hadoop)
- No missing artifacts in staging repo
- No binaries in source release

Functional checks:
- Quickstart works (both in the IDE and via job submission)
- A simple State Processor API program performing offline key schema
migration (RocksDB backend) runs; the generated savepoint is valid to
restore from
- All E2E tests pass locally
- No issues noticed with the new WebUI

Cheers,
Gordon



Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-19 Thread Zili Chen
+1 (non-binding)

- build from source: OK (8u212)
- checked that the local setup tutorial works as expected

Best,
tison.




Re: [VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-19 Thread Yu Li
+1 (non-binding)

- checked release notes: OK
- checked sums and signatures: OK
- repository appears to contain all expected artifacts
- source release
 - contains no binaries: OK
 - contains no 1.9-SNAPSHOT references: OK
 - build from source: OK (8u102)
- binary release
 - no examples appear to be missing
 - started a cluster; WebUI reachable, example ran successfully
- checked README.md file and found nothing unexpected
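
For reference, the source-release checks above can be sketched roughly like
this (the directory layout is hypothetical; in practice you would run it
against the unpacked source tarball):

```shell
#!/bin/sh
# Rough sketch of the "no binaries / no SNAPSHOT references" checks.
# The directory layout below is a stand-in for the unpacked source release.
set -e
srcdir=$(mktemp -d)
mkdir -p "$srcdir/flink-core/src"
printf 'version: 1.9.0\n' > "$srcdir/flink-core/src/version.txt"

# A release must not reference SNAPSHOT versions anywhere:
if grep -r "1.9-SNAPSHOT" "$srcdir" > /dev/null; then
    echo "FAIL: found 1.9-SNAPSHOT references"
    exit 1
fi

# The source release must not ship compiled binaries:
binaries=$(find "$srcdir" -name '*.jar' -o -name '*.class')
if [ -n "$binaries" ]; then
    echo "FAIL: binaries found: $binaries"
    exit 1
fi
echo "source tree clean"
```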

Best Regards,
Yu




[VOTE] Apache Flink 1.9.0, release candidate #3

2019-08-19 Thread Tzu-Li (Gordon) Tai
Hi all,

Release candidate #3 for Apache Flink 1.9.0 is now ready for your review.

Please review and vote on release candidate #3 for version 1.9.0, as
follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be
deployed to dist.apache.org [2], which are signed with the key with
fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag “release-1.9.0-rc3” [5],
* pull requests for the release note documentation [6] and announcement
blog post [7].

As proposed in the RC2 vote thread [8], for RC3 we are only cherry-picking
minimal specific changes on top of RC2 to be able to reasonably carry over
previous testing efforts and effectively require a shorter voting time.

The only extra commits in this RC, compared to RC2, are the following:
- c2d9aeac [FLINK-13231] [pubsub] Replace Max outstanding acknowledgement
ids limit with a FlinkConnectorRateLimiter
- d8941711 [FLINK-13699][table-api] Fix TableFactory doesn’t work with DDL
when containing TIMESTAMP/DATE/TIME types
- 04e95278 [FLINK-13752] Only references necessary variables when
bookkeeping result partitions on TM

Due to the minimal set of changes, the vote for RC3 will be *open for only
48 hours*.
Please cast your votes before *Aug. 21 (Wed.), 2019, 17:00 CET*.

The release candidate is adopted by majority approval, with at least 3
affirmative PMC votes.

Thanks,
Gordon

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344601
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.0-rc3/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1236
[5]
https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.9.0-rc3
[6] https://github.com/apache/flink/pull/9438
[7] https://github.com/apache/flink-web/pull/244
[8]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Apache-Flink-Release-1-9-0-release-candidate-2-tp31542p31933.html