Re: [RESULT] [VOTE] Release 1.5.6, release candidate #1

2018-12-22 Thread Chesnay Schepler

I will move the files and handle JIRA.

On 22.12.2018 02:42, Thomas Weise wrote:

Can a PMC member please complete the following:

* svn move -m "Release Flink 1.5.6"
https://dist.apache.org/repos/dist/dev/flink/flink-1.5.6-rc1
https://dist.apache.org/repos/dist/release/flink/flink-1.5.6
* mark the 1.5.6 version in JIRA as released:
https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions

Thanks!


On Fri, Dec 21, 2018 at 5:08 PM Thomas Weise  wrote:


I'm happy to announce that we have unanimously approved this release.

There are 5 approving votes, 4 of which are binding:
* Chesnay Schepler (binding)
* Aljoscha Krettek (binding)
* Timo Walther (binding)
* Thomas Weise
* Till Rohrmann (binding)

There are no disapproving votes.

Thanks everyone!

On Mon, Dec 17, 2018 at 9:27 PM Thomas Weise  wrote:


Hi everyone,
Please review and vote on the release candidate #1 for the version
1.5.6, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to
be deployed to dist.apache.org [2], which are signed with the key with
fingerprint D920A98C [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.6-rc1" [5].

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.

Thanks,
Thomas

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344315
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.5.6-rc1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]
https://repository.apache.org/content/repositories/orgapacheflink-1199/
[5]
https://gitbox.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.6-rc1






Re: [RESULT] [VOTE] Release 1.5.6, release candidate #1

2018-12-22 Thread Chesnay Schepler
Done. I will merge the flink-web PR tomorrow, once 24h have passed, so 
that the mirrors/Maven Central can catch up.



[jira] [Created] (FLINK-11209) Provide more complete guidance for the log usage documentation

2018-12-22 Thread vinoyang (JIRA)
vinoyang created FLINK-11209:


 Summary: Provide more complete guidance for the log usage 
documentation
 Key: FLINK-11209
 URL: https://issues.apache.org/jira/browse/FLINK-11209
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: vinoyang
Assignee: leesf


The current documentation does not provide detailed guidance for users who 
want to use logback as the underlying logging framework, since Flink ships 
only the log4j dependency jars. The documentation says that if you want to 
switch to logback you only need to exclude the log4j dependency, but it does 
not remind the user to add the logback dependencies. I would like to add the 
following information:
 * remove the log4j and slf4j-log4j dependencies from the lib folder;
 * where to download the logback-* jars;
 * how to introduce the logback-core/logback-classic/logback-access dependencies.
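A rough sketch of what such a guide might show for a standalone installation
(the `FLINK_HOME` default, jar file names, and versions below are illustrative
assumptions, not taken from the ticket):

```shell
#!/bin/sh
# Sketch: replace Flink's bundled log4j binding with logback in lib/.
# FLINK_HOME, jar names, and versions here are illustrative assumptions.
FLINK_HOME=${FLINK_HOME:-./flink-demo}
mkdir -p "$FLINK_HOME/lib"

# 1) Remove the log4j and slf4j-log4j bridge jars shipped in lib/
rm -f "$FLINK_HOME"/lib/log4j-*.jar "$FLINK_HOME"/lib/slf4j-log4j12-*.jar

# 2) Copy in logback-core and logback-classic (downloaded from Maven Central);
#    logback-access is only needed for servlet-container access logs.
for jar in logback-core-1.2.3.jar logback-classic-1.2.3.jar; do
  if [ -f "$jar" ]; then
    cp "$jar" "$FLINK_HOME/lib/"
  else
    echo "missing: download $jar from Maven Central first"
  fi
done
```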



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Flink 1.7 doesn't work with Kafka Table Source Descriptors

2018-12-22 Thread Hequn Cheng
Hi dhanuka,

I failed to reproduce your error with release-1.7.0. It
seems Kafka.toConnectorProperties() should be called instead
of ConnectorDescriptor.toConnectorProperties(); the latter is an
abstract method, which leads to the AbstractMethodError.

From the picture you uploaded, it is strange that 1.6.1 jars are mixed
with 1.7.0 jars; this can cause class conflicts. Furthermore, set the
Flink dependency scope to provided so that Flink's classes are not
packaged into the user jar; otherwise that, too, can cause class
conflicts.
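The provided-scope setup mentioned above can be sketched in the user's
pom.xml roughly like this (artifact IDs, the Scala suffix, and the version
are assumptions; match them to your build):

```xml
<!-- Mark core Flink modules as provided so the cluster's own classes
     are used at runtime instead of copies bundled into the user jar. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>1.7.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table_2.11</artifactId>
  <version>1.7.0</version>
  <scope>provided</scope>
</dependency>
```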

Best,
Hequn


On Fri, Dec 21, 2018 at 6:24 PM dhanuka ranasinghe <
dhanuka.priyan...@gmail.com> wrote:

> Add Dev Group
>
> On Fri, Dec 21, 2018 at 6:21 PM dhanuka ranasinghe <
> dhanuka.priyan...@gmail.com> wrote:
>
>> Hi All,
>>
>> I have tried to read data from Kafka in Flink using the Table API. It's
>> working fine with Flink 1.4, but after upgrading to 1.7 it gives me the
>> error below. I have attached the libraries added to Flink.
>>
>> Could you please help me with this?
>>
>> bin/flink run stream-analytics-0.0.1-SNAPSHOT.jar --read-topic testin
>> --write-topic testout --bootstrap.servers localhost --group.id analytics
>> Starting execution of program
>> java.lang.AbstractMethodError:
>> org.apache.flink.table.descriptors.ConnectorDescriptor.toConnectorProperties()Ljava/util/Map;
>> at
>> org.apache.flink.table.descriptors.ConnectorDescriptor.toProperties(ConnectorDescriptor.java:58)
>> at
>> org.apache.flink.table.descriptors.ConnectTableDescriptor.toProperties(ConnectTableDescriptor.scala:107)
>> at
>> org.apache.flink.table.descriptors.StreamTableDescriptor.toProperties(StreamTableDescriptor.scala:95)
>> at
>> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:39)
>> at
>> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
>> at
>> org.monitoring.stream.analytics.FlinkTableSourceLatest.main(FlinkTableSourceLatest.java:82)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
>> at
>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
>> at
>> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
>> at
>> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
>> at
>> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
>> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>> at
>> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
>> at
>> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:422)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>> at
>> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
>>
>> Cheers,
>> Dhanuka
>>
>> --
>> Nothing Impossible,Creativity is more important than knowledge.
>>
>
>
> --
> Nothing Impossible,Creativity is more important than knowledge.
>


Re: [VOTE] Release 1.6.3, release candidate #1

2018-12-22 Thread Tzu-Li (Gordon) Tai
+1

- checked signatures
- locally executed build, without Hadoop, Scala 2.11
- locally executed e2e tests, looped 10 times with all attempts passing
- Staged artifacts are complete for releasing
- Changes from 1.6.2 to 1.6.3 all seem sane, IMO

Cheers,
Gordon

On 21 December 2018 at 12:49:23 AM, Till Rohrmann (trohrm...@apache.org) wrote:

+1  

- checked signatures and checksums  
- checked that no dependency changes have occurred between 1.6.2 and 1.6.3  
- built Flink from the source release with Hadoop 2.8.5  
- executed all tests via `mvn verify -Dhadoop.version=2.8.5`  
- started standalone cluster and tried out WindowJoin and WordCount (batch)  
example  
- executed flink-end-to-end-tests/run-nightly-tests.sh  

Cheers,  
Till  

On Wed, Dec 19, 2018 at 9:07 PM Timo Walther  wrote:  

> +1  
>  
> - manually checked the commit diff and could not spot any issues  
> - run mvn clean verify locally with success  
> - run a couple of e2e tests locally with success  
>  
> Thanks,  
> Timo  
>  
> Am 19.12.18 um 18:28 schrieb Aljoscha Krettek:  
> > +1  
> >  
> > - signatures/hashes are ok  
> > - verified that the log contains no suspicious output when running a  
> local cluster  
> >  
> >> On 18. Dec 2018, at 14:31, Chesnay Schepler  wrote:  
> >>  
> >> +1  
> >>  
> >> - signatures ok  
> >> - src contains no binaries  
> >> - binary not missing any jars  
> >> - tag exists  
> >> - release notes classification/names seem appropriate  
> >> - maven artifacts not missing any jars  
> >>  
> >> On 18.12.2018 11:15, Tzu-Li (Gordon) Tai wrote:  
> >>> Hi everyone,  
> >>>  
> >>> Please review and vote on the release candidate #1 for the version  
> 1.6.3, as follows:  
> >>> [ ] +1, Approve the release  
> >>> [ ] -1, Do not approve the release (please provide specific comments)  
> >>>  
> >>>  
> >>> The complete staging area is available for your review, which includes:  
> >>> * JIRA release notes [1],  
> >>> * the official Apache source release and binary convenience releases  
> to be deployed to dist.apache.org [2], which are signed with the key with  
> fingerprint 1C1E2394D3194E1944613488F320986D35C33D6A [3],  
> >>> * all artifacts to be deployed to the Maven Central Repository [4],  
> >>> * source code tag “release-1.6.3-rc1” [5],  
> >>> * website pull request listing the new release and adding announcement  
> blog post [6].  
> >>>  
> >>> The vote will be open for at least 72 hours. It is adopted by majority  
> approval, with at least 3 PMC affirmative votes.  
> >>>  
> >>> Thanks,  
> >>> Gordon  
> >>>  
> >>> [1]  
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344314
>   
> >>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.6.3-rc1/  
> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS  
> >>> [4]  
> https://repository.apache.org/content/repositories/orgapacheflink-1202  
> >>> [5]  
> https://gitbox.apache.org/repos/asf?p=flink.git;a=commit;h=54e6cde28493baf35315946fd023ecbe692c95d8
>   
> >>> [6] https://github.com/apache/flink-web/pull/141  
> >>>  
> >>>  
>  
>  


Re: Flink 1.7 doesn't work with Kafka Table Source Descriptors

2018-12-22 Thread dhanuka ranasinghe
Hi Cheng,

Thanks for your reply will try out and update you on this.

Cheers,
Dhanuka




[RESULT] [VOTE] Release 1.6.3, release candidate #1

2018-12-22 Thread Tzu-Li (Gordon) Tai
I'm happy to announce that we have unanimously approved this release. 

There are 5 approving votes, all of which are binding: 
* Chesnay Schepler (binding) 
* Aljoscha Krettek (binding) 
* Timo Walther (binding) 
* Till Rohrmann (binding)
* Tzu-Li Tai (binding) 

There are no disapproving votes. 

Thanks everyone!

I'll now continue to finalize and release version 1.6.3.
Once the release is out, I'll announce this in a separate thread.





[jira] [Created] (FLINK-11210) Enable auto state cleanup strategy for RANGE OVER

2018-12-22 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-11210:
---

 Summary: Enable auto state cleanup strategy for RANGE OVER
 Key: FLINK-11210
 URL: https://issues.apache.org/jira/browse/FLINK-11210
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Hequn Cheng
Assignee: Hequn Cheng


As discussed in FLINK-11188, the OVER RANGE window should automatically clean 
up its state instead of relying on the state retention cleanup strategy. 







[jira] [Created] (FLINK-11211) ReinterpretDataStreamAsKeyedStreamITCase failed on Travis

2018-12-22 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-11211:


 Summary: ReinterpretDataStreamAsKeyedStreamITCase failed on Travis
 Key: FLINK-11211
 URL: https://issues.apache.org/jira/browse/FLINK-11211
 Project: Flink
  Issue Type: Bug
  Components: DataStream API, Tests
Affects Versions: 1.8.0
Reporter: Chesnay Schepler


https://travis-ci.org/apache/flink/jobs/471312261
{code}
13:33:58.968 [INFO] Running 
org.apache.flink.streaming.api.datastream.ReinterpretDataStreamAsKeyedStreamITCase
13:34:08.615 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 9.645 s <<< FAILURE! - in 
org.apache.flink.streaming.api.datastream.ReinterpretDataStreamAsKeyedStreamITCase
13:34:08.615 [ERROR] 
testReinterpretAsKeyedStream(org.apache.flink.streaming.api.datastream.ReinterpretDataStreamAsKeyedStreamITCase)
  Time elapsed: 9.434 s  <<< ERROR!
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.streaming.api.datastream.ReinterpretDataStreamAsKeyedStreamITCase.testReinterpretAsKeyedStream(ReinterpretDataStreamAsKeyedStreamITCase.java:107)
Caused by: java.lang.AssertionError: expected:<300> but was:<301>
{code}





[ANNOUNCE] Apache Flink 1.7.1 released

2018-12-22 Thread Chesnay Schepler
The Apache Flink community is very happy to announce the release of 
Apache Flink 1.7.1, which is the first bugfix release for the Apache 
Flink 1.7 series.


Apache Flink® is an open-source stream processing framework for 
distributed, high-performing, always-available, and accurate data 
streaming applications.


The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the 
improvements for this bugfix release:

https://flink.apache.org/news/2018/12/21/release-1.7.1.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344412

We would like to thank all contributors of the Apache Flink community 
who made this release possible!


Regards,
Chesnay