buildbot failure on flink-docs-release-1.3

2018-08-06 Thread buildbot
The Buildbot has detected a new failure on builder flink-docs-release-1.3 while 
building . Full details are available at:
https://ci.apache.org/builders/flink-docs-release-1.3/builds/443

Buildbot URL: https://ci.apache.org/

Buildslave for this Build: bb_slave2_ubuntu

Build Reason: The Nightly scheduler named 'flink-nightly-docs-release-1.3' 
triggered this build
Build Source Stamp: [branch release-1.3] HEAD
Blamelist: 

BUILD FAILED: failed Build Java & Scala docs

Sincerely,
 -The Buildbot





buildbot failure on flink-docs-release-1.4

2018-08-06 Thread buildbot
The Buildbot has detected a new failure on builder flink-docs-release-1.4 while 
building . Full details are available at:
https://ci.apache.org/builders/flink-docs-release-1.4/builds/276

Buildbot URL: https://ci.apache.org/

Buildslave for this Build: bb_slave2_ubuntu

Build Reason: The Nightly scheduler named 'flink-nightly-docs-release-1.4' 
triggered this build
Build Source Stamp: [branch release-1.4] HEAD
Blamelist: 

BUILD FAILED: failed Build Java & Scala docs

Sincerely,
 -The Buildbot





svn commit: r28582 - /dev/flink/flink-1.6.0/

2018-08-06 Thread trohrmann
Author: trohrmann
Date: Mon Aug  6 21:05:10 2018
New Revision: 28582

Log:
Add 1.6.0-rc3 release files

Added:
dev/flink/flink-1.6.0/
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz   (with props)
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz.asc
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz.sha512
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop26-scala_2.11.tgz   (with props)
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop26-scala_2.11.tgz.asc
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop26-scala_2.11.tgz.sha512
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop27-scala_2.11.tgz   (with props)
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop27-scala_2.11.tgz.asc
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop27-scala_2.11.tgz.sha512
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop28-scala_2.11.tgz   (with props)
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop28-scala_2.11.tgz.asc
dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop28-scala_2.11.tgz.sha512
dev/flink/flink-1.6.0/flink-1.6.0-bin-scala_2.11.tgz   (with props)
dev/flink/flink-1.6.0/flink-1.6.0-bin-scala_2.11.tgz.asc
dev/flink/flink-1.6.0/flink-1.6.0-bin-scala_2.11.tgz.sha512
dev/flink/flink-1.6.0/flink-1.6.0-src.tgz   (with props)
dev/flink/flink-1.6.0/flink-1.6.0-src.tgz.asc
dev/flink/flink-1.6.0/flink-1.6.0-src.tgz.sha512

Added: dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz
==
Binary file - no diff available.

Propchange: dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz
--
svn:mime-type = application/octet-stream

Added: dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz.asc
==
--- dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz.asc (added)
+++ dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz.asc Mon Aug  
6 21:05:10 2018
@@ -0,0 +1,16 @@
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCAAdFiEENFF/bqllF4fQbsqJHzAlaals/9UFAltorhAACgkQHzAlaals
+/9UdBBAAjd4XrEmJzJ/Plt7zpX63IGKXo1z/vMhyLORnsOQ66+I/gvRM+llnIuQ6
+GbyXYKITW1cMPT8/9PV5k9WnD+aRUGkbMxmN3EDhrG1IoX2rUtc4MYsU1Dp8QveV
+7KNkFs9C9VUy8XDvuJ1sNGyEdks7KLaMKupTCM/4P21URViqqgWx5//O+ihzcEa4
+bnLfwD4r+Ip32hB3EY02SLqb6z0xzNnQF0u0YFL0aVUhSH13jt7OIIeNhgXAh/HB
+KC4viZvgvYOcA1RSESAmdaaVV9sFDq02kw1kahTTHeS/3Iwoy6ItrgZd5nXZTS6D
+e8ehho8LfdASvVRS5AOEE25iwmX5JSpyg7I/B51srIZoAL3lE1TP37Fbm3krnlsb
+VsY39oi9U/uHDDK5AzO1OY8qBPtAtYyWySGpw+v2kZp3ezxfsth+bTw4Smjzra7y
+Khdr6f7H1iiP7+Twsr/E7v5F9CamND1zV8met1wlGDwlU7enT2G7sY2lHob7qSEt
+zJ1064eROfMkE23A3mL7BQChwy4sDRtjHWAfEaSwq0eT13LJbahDFyOYXKEanXYI
+l03Fy7f61fHHyFhouM0850jrMZz2jrAUM6CpVB47S38V4n9RoyX/hR6fmzx+skrY
+mQOn4U+JTPll06ygDErsIaDQDcCYAI9HxRDvftV70qWtdV6DhO0=
+=S/Bg
+-----END PGP SIGNATURE-----

Added: dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz.sha512
==
--- dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz.sha512 (added)
+++ dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop24-scala_2.11.tgz.sha512 Mon 
Aug  6 21:05:10 2018
@@ -0,0 +1 @@
+ac4645e3cba0bafffad0537848d987429f687259e7726b9a567da69cce1e17e9962fae0f095459cb19626757441dfaaa09b5fa5139986284984de7f236a68715
  flink-1.6.0-bin-hadoop24-scala_2.11.tgz

Added: dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop26-scala_2.11.tgz
==
Binary file - no diff available.

Propchange: dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop26-scala_2.11.tgz
--
svn:mime-type = application/octet-stream

Added: dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop26-scala_2.11.tgz.asc
==
--- dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop26-scala_2.11.tgz.asc (added)
+++ dev/flink/flink-1.6.0/flink-1.6.0-bin-hadoop26-scala_2.11.tgz.asc Mon Aug  
6 21:05:10 2018
@@ -0,0 +1,16 @@
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCAAdFiEENFF/bqllF4fQbsqJHzAlaals/9UFAltorhEACgkQHzAlaals
+/9UyOg/6AgtV4UuzJSYEi4R6HEhPfGyswoyZhzkbHd+B+KmIQ12j9JJZvER460vj
+bmCQ6Ithqhb50xQxwk70AIYyd6HJYErWxcQJnLQcPtJZpCEk2l+wI3rg+XqcFx1e
+T5vsngFSJzzJKFt2K/ozfOCgIAdFv7E3t+9VLyqrNhAo4Zcp44BbjrxvVvB/7JHH
+vrLaYWnGrEHAS2MUQqIfy55rcpWfGTDuVRFYb+RWLEQlefVta4RDR/SVSv3IngVT
+dPC71gICFWLhXZqB9YXhnyt+ylhiQHkibR5j90CfjKmMg1V4QtqY7v3HMDv8NS92
+MlkrIC5tAaPeY7Mz54OKIX5KfBDFfx/EDe8eCcNIfghagU1VKhfoSxSEjfl3rgtX
+O+t2igsFkhH+JUWaLZC0pc2bkWDZd4J8l1EjzlBoOXEj6FYt0/bDRU/M8PdMhZrP
+BycZI+Tw0rP+H5orOBRj2BMt5gS1/Y0wbgHLu8yqiC/9UUSx0sMrPz5BxfCEhFo6
+W7H5Jzs5bU9JTwtBlof1W/gx3Uooo72fL9OQNTGa64CuInV33gu2x0GLv43rskXs
+fS8ScMCqD5sLEu95u3zrDOaQ6ToyD0daEsBiIZYRnrQ0jFo3ci/GMouIdN5IlTmP
+cmIDCkXeRvEYIgA75KSLggI

svn commit: r28580 - /dev/flink/flink-1.6.0/

2018-08-06 Thread trohrmann
Author: trohrmann
Date: Mon Aug  6 20:19:14 2018
New Revision: 28580

Log:
Remove 1.6.0-rc2 release files

Removed:
dev/flink/flink-1.6.0/



[flink-web] branch asf-site updated: Rebuild website

2018-08-06 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 3fcfa9c  Rebuild website
3fcfa9c is described below

commit 3fcfa9cd42d49fc1e0a6c625151ed9ae2595ddfb
Author: Fabian Hueske 
AuthorDate: Mon Aug 6 19:07:08 2018 +0200

Rebuild website
---
 content/img/poweredby/comcast-logo.png | Bin 0 -> 33410 bytes
 content/index.html |   6 ++
 content/poweredby.html |   4 
 3 files changed, 10 insertions(+)

diff --git a/content/img/poweredby/comcast-logo.png 
b/content/img/poweredby/comcast-logo.png
new file mode 100644
index 000..c7f15c5c
Binary files /dev/null and b/content/img/poweredby/comcast-logo.png differ
diff --git a/content/index.html b/content/index.html
index 73fdad8..bf60d27 100644
--- a/content/index.html
+++ b/content/index.html
@@ -309,6 +309,12 @@
 
 
   
+
+  
+
+
+
+  
 
   
 
diff --git a/content/poweredby.html b/content/poweredby.html
index 271b8e8..c3f9e38 100644
--- a/content/poweredby.html
+++ b/content/poweredby.html
@@ -170,6 +170,10 @@
   Capital One, a Fortune 500 financial services company, uses Flink for 
real-time activity monitoring and alerting. https://www.slideshare.net/FlinkForward/flink-forward-san-francisco-2018-andrew-gao-jeff-sharpe-finding-bad-acorns";
 target="_blank"> Learn about Capital One's fraud detection 
use case


+  
+  Comcast, a global media and technology company, uses Flink for 
operationalizing machine learning models and near-real-time event stream 
processing. https://www.slideshare.net/FlinkForward/flink-forward-san-francisco-2018-dave-torok-sameer-wadkar-embedding-flink-throughout-an-operationalized-streaming-ml-lifecycle";
 target="_blank"> Learn about Flink at Comcast
+   
+   
   
   Drivetribe, a digital community founded by the former hosts of “Top 
Gear”, uses Flink for metrics and content recommendations. https://data-artisans.com/drivetribe-cqrs-apache-flink/"; 
target="_blank"> Read about Flink in the Drivetribe 
stack




[flink] branch release-1.6 updated: [FLINK-10071] [docs] Document usage of INSERT INTO in SQL Client

2018-08-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.6 by this push:
 new 2915811  [FLINK-10071] [docs] Document usage of INSERT INTO in SQL 
Client
2915811 is described below

commit 29158113432251f9bc6b004dcafd2338d8cf7ee6
Author: Timo Walther 
AuthorDate: Mon Aug 6 15:26:51 2018 +0200

[FLINK-10071] [docs] Document usage of INSERT INTO in SQL Client

This closes #6505.
---
 docs/dev/table/sqlClient.md | 68 +
 1 file changed, 63 insertions(+), 5 deletions(-)

diff --git a/docs/dev/table/sqlClient.md b/docs/dev/table/sqlClient.md
index b735705..d35aa59 100644
--- a/docs/dev/table/sqlClient.md
+++ b/docs/dev/table/sqlClient.md
@@ -106,7 +106,9 @@ Alice, 1
 Greg, 1
 {% endhighlight %}
 
-The [configuration section](sqlClient.html#configuration) explains how to read 
from table sources and configure other table program properties.
+Both result modes can be useful during the prototyping of SQL queries.
+
+After a query is defined, it can be submitted to the cluster as a 
long-running, detached Flink job. For this, a target system that stores the 
results needs to be specified using the [INSERT INTO 
statement](sqlClient.html#detached-sql-queries). The [configuration 
section](sqlClient.html#configuration) explains how to declare table sources 
for reading data, how to declare table sinks for writing data, and how to 
configure other table program properties.
 
 {% top %}
 
@@ -161,7 +163,7 @@ Every environment file is a regular [YAML 
file](http://yaml.org/). An example of
 # Define table sources here.
 
 tables:
-  - name: MyTableName
+  - name: MyTableSource
     type: source
     connector:
       type: filesystem
@@ -286,8 +288,8 @@ Both `connector` and `format` allow to define a property 
version (which is curre
 
 {% top %}
 
-User-defined Functions
-----------------------
+### User-defined Functions
+
 The SQL Client allows users to create custom, user-defined functions to be 
used in SQL queries. Currently, these functions are restricted to be defined 
programmatically in Java/Scala classes.
 
 In order to provide a user-defined function, you need to first implement and 
compile a function class that extends `ScalarFunction`, `AggregateFunction` or 
`TableFunction` (see [User-defined Functions]({{ site.baseurl 
}}/dev/table/udfs.html)). One or more functions can then be packaged into a 
dependency JAR for the SQL Client.
@@ -313,7 +315,7 @@ functions:
 
 Make sure that the order and types of the specified parameters strictly match 
one of the constructors of your function class.
 
-### Constructor Parameters
+#### Constructor Parameters
 
 Depending on the user-defined function, it might be necessary to parameterize 
the implementation before using it in SQL statements.
 
@@ -369,6 +371,62 @@ This process can be recursively performed until all the 
constructor parameters a
 
 {% top %}
 
+Detached SQL Queries
+--------------------
+
+In order to define end-to-end SQL pipelines, SQL's `INSERT INTO` statement can 
be used for submitting long-running, detached queries to a Flink cluster. These 
queries produce their results into an external system instead of the SQL 
Client. This allows for dealing with higher parallelism and larger amounts of 
data. The CLI itself does not have any control over a detached query after 
submission.
+
+{% highlight sql %}
+INSERT INTO MyTableSink SELECT * FROM MyTableSource
+{% endhighlight %}
+
+The table sink `MyTableSink` has to be declared in the environment file. See 
the [connection page](connect.html) for more information about supported 
external systems and their configuration. An example for an Apache Kafka table 
sink is shown below.
+
+{% highlight yaml %}
+tables:
+  - name: MyTableSink
+    type: sink
+    update-mode: append
+    connector:
+      property-version: 1
+      type: kafka
+      version: 0.11
+      topic: OutputTopic
+      properties:
+        - key: zookeeper.connect
+          value: localhost:2181
+        - key: bootstrap.servers
+          value: localhost:9092
+        - key: group.id
+          value: testGroup
+    format:
+      property-version: 1
+      type: json
+      derive-schema: true
+    schema:
+      - name: rideId
+        type: LONG
+      - name: lon
+        type: FLOAT
+      - name: lat
+        type: FLOAT
+      - name: rideTime
+        type: TIMESTAMP
+{% endhighlight %}
+
+The SQL Client makes sure that a statement is successfully submitted to the 
cluster. Once the query is submitted, the CLI will show information about the 
Flink job.
+
+{% highlight text %}
+[INFO] Table update statement has been successfully submitted to the cluster:
+Cluster ID: StandaloneClusterId
+Job ID: 6f922fe5cba87406ff23ae4a7bb79044
+Web interface: http://localhost:8081
+{% endhighlight %}
+
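The documentation change above describes SQL Client user-defined functions as Java/Scala classes extending `ScalarFunction` (or `AggregateFunction`/`TableFunction`), optionally parameterized via constructor arguments declared in the environment file. As a rough, hypothetical sketch of such a class (the `ScalarFunction` base below is a local stub standing in for `org.apache.flink.table.functions.ScalarFunction` so the snippet compiles without Flink on the classpath; the `TimesFactor` name and its parameter are invented):

```java
// Stub standing in for org.apache.flink.table.functions.ScalarFunction,
// so this sketch compiles without Flink on the classpath.
abstract class ScalarFunction {}

// Hypothetical UDF with a constructor parameter, as in the
// "Constructor Parameters" part of the documented section.
class TimesFactor extends ScalarFunction {
    private final int factor;

    TimesFactor(int factor) {
        this.factor = factor;
    }

    // The SQL Client resolves a public eval(...) method by reflection.
    public long eval(long value) {
        return value * factor;
    }
}

public class UdfSketch {
    public static void main(String[] args) {
        // Parameters listed under the environment file's `functions:` entry
        // must match one of the class's constructors.
        TimesFactor times3 = new TimesFactor(3);
        System.out.println(times3.eval(14)); // prints 42
    }
}
```

Such a class would then be packaged into a dependency JAR and registered under `functions:` in the environment file with its fully qualified name, as the commit's documentation explains.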

[flink] branch master updated: [FLINK-10071] [docs] Document usage of INSERT INTO in SQL Client

2018-08-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 5b7a64f  [FLINK-10071] [docs] Document usage of INSERT INTO in SQL 
Client
5b7a64f is described below

commit 5b7a64fcef3ad09df1ccc0f91386766ee553bf45
Author: Timo Walther 
AuthorDate: Mon Aug 6 15:26:51 2018 +0200

[FLINK-10071] [docs] Document usage of INSERT INTO in SQL Client

This closes #6505.
---
 docs/dev/table/sqlClient.md | 68 +
 1 file changed, 63 insertions(+), 5 deletions(-)

diff --git a/docs/dev/table/sqlClient.md b/docs/dev/table/sqlClient.md
index b735705..d35aa59 100644
--- a/docs/dev/table/sqlClient.md
+++ b/docs/dev/table/sqlClient.md
@@ -106,7 +106,9 @@ Alice, 1
 Greg, 1
 {% endhighlight %}
 
-The [configuration section](sqlClient.html#configuration) explains how to read 
from table sources and configure other table program properties.
+Both result modes can be useful during the prototyping of SQL queries.
+
+After a query is defined, it can be submitted to the cluster as a 
long-running, detached Flink job. For this, a target system that stores the 
results needs to be specified using the [INSERT INTO 
statement](sqlClient.html#detached-sql-queries). The [configuration 
section](sqlClient.html#configuration) explains how to declare table sources 
for reading data, how to declare table sinks for writing data, and how to 
configure other table program properties.
 
 {% top %}
 
@@ -161,7 +163,7 @@ Every environment file is a regular [YAML 
file](http://yaml.org/). An example of
 # Define table sources here.
 
 tables:
-  - name: MyTableName
+  - name: MyTableSource
     type: source
     connector:
       type: filesystem
@@ -286,8 +288,8 @@ Both `connector` and `format` allow to define a property 
version (which is curre
 
 {% top %}
 
-User-defined Functions
-----------------------
+### User-defined Functions
+
 The SQL Client allows users to create custom, user-defined functions to be 
used in SQL queries. Currently, these functions are restricted to be defined 
programmatically in Java/Scala classes.
 
 In order to provide a user-defined function, you need to first implement and 
compile a function class that extends `ScalarFunction`, `AggregateFunction` or 
`TableFunction` (see [User-defined Functions]({{ site.baseurl 
}}/dev/table/udfs.html)). One or more functions can then be packaged into a 
dependency JAR for the SQL Client.
@@ -313,7 +315,7 @@ functions:
 
 Make sure that the order and types of the specified parameters strictly match 
one of the constructors of your function class.
 
-### Constructor Parameters
+#### Constructor Parameters
 
 Depending on the user-defined function, it might be necessary to parameterize 
the implementation before using it in SQL statements.
 
@@ -369,6 +371,62 @@ This process can be recursively performed until all the 
constructor parameters a
 
 {% top %}
 
+Detached SQL Queries
+--------------------
+
+In order to define end-to-end SQL pipelines, SQL's `INSERT INTO` statement can 
be used for submitting long-running, detached queries to a Flink cluster. These 
queries produce their results into an external system instead of the SQL 
Client. This allows for dealing with higher parallelism and larger amounts of 
data. The CLI itself does not have any control over a detached query after 
submission.
+
+{% highlight sql %}
+INSERT INTO MyTableSink SELECT * FROM MyTableSource
+{% endhighlight %}
+
+The table sink `MyTableSink` has to be declared in the environment file. See 
the [connection page](connect.html) for more information about supported 
external systems and their configuration. An example for an Apache Kafka table 
sink is shown below.
+
+{% highlight yaml %}
+tables:
+  - name: MyTableSink
+    type: sink
+    update-mode: append
+    connector:
+      property-version: 1
+      type: kafka
+      version: 0.11
+      topic: OutputTopic
+      properties:
+        - key: zookeeper.connect
+          value: localhost:2181
+        - key: bootstrap.servers
+          value: localhost:9092
+        - key: group.id
+          value: testGroup
+    format:
+      property-version: 1
+      type: json
+      derive-schema: true
+    schema:
+      - name: rideId
+        type: LONG
+      - name: lon
+        type: FLOAT
+      - name: lat
+        type: FLOAT
+      - name: rideTime
+        type: TIMESTAMP
+{% endhighlight %}
+
+The SQL Client makes sure that a statement is successfully submitted to the 
cluster. Once the query is submitted, the CLI will show information about the 
Flink job.
+
+{% highlight text %}
+[INFO] Table update statement has been successfully submitted to the cluster:
+Cluster ID: StandaloneClusterId
+Job ID: 6f922fe5cba87406ff23ae4a7bb79044
+Web interface: http://localhost:8081
+{% endhighlight %}
+
+Attention

[flink-web] branch asf-site updated: Add Comcast to Powered By (#116)

2018-08-06 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ec66e5d  Add Comcast to Powered By (#116)
ec66e5d is described below

commit ec66e5d0d2ac509f5379f6dddb7f1d1a7255f6d0
Author: Fabian Hueske 
AuthorDate: Mon Aug 6 17:22:46 2018 +0200

Add Comcast to Powered By (#116)
---
 img/poweredby/comcast-logo.png | Bin 0 -> 33410 bytes
 index.md   |   6 ++
 poweredby.md   |   4 
 3 files changed, 10 insertions(+)

diff --git a/img/poweredby/comcast-logo.png b/img/poweredby/comcast-logo.png
new file mode 100644
index 000..c7f15c5c
Binary files /dev/null and b/img/poweredby/comcast-logo.png differ
diff --git a/index.md b/index.md
index 03a7d04..da3d7bb 100755
--- a/index.md
+++ b/index.md
@@ -176,6 +176,12 @@ layout: base
 
 
   
+
+  
+
+
+
+  
 
   
 
diff --git a/poweredby.md b/poweredby.md
index 52db40e..53fae66 100755
--- a/poweredby.md
+++ b/poweredby.md
@@ -33,6 +33,10 @@ If you would you like to be included on this page, please 
reach out to the [Flin
   Capital One, a Fortune 500 financial services company, uses Flink for 
real-time activity monitoring and alerting. https://www.slideshare.net/FlinkForward/flink-forward-san-francisco-2018-andrew-gao-jeff-sharpe-finding-bad-acorns";
 target='_blank'> Learn about Capital One's fraud detection 
use case


+  
+  Comcast, a global media and technology company, uses Flink for 
operationalizing machine learning models and near-real-time event stream 
processing. https://www.slideshare.net/FlinkForward/flink-forward-san-francisco-2018-dave-torok-sameer-wadkar-embedding-flink-throughout-an-operationalized-streaming-ml-lifecycle";
 target='_blank'> Learn about Flink at Comcast
+   
+   
   
   Drivetribe, a digital community founded by the former hosts of “Top 
Gear”, uses Flink for metrics and content recommendations. https://data-artisans.com/drivetribe-cqrs-apache-flink/"; 
target='_blank'> Read about Flink in the Drivetribe 
stack




[flink] annotated tag release-1.6.0-rc3 created (now de15a5d)

2018-08-06 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to annotated tag release-1.6.0-rc3
in repository https://gitbox.apache.org/repos/asf/flink.git.


  at de15a5d  (tag)
 tagging 4a28a63fc538f0b16c3d3cdf570d84a15488602f (commit)
 replaces pre-apache-rename
  by Till Rohrmann
  on Mon Aug 6 17:07:16 2018 +0200

- Log -
Commit for release 1.6.0
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEENFF/bqllF4fQbsqJHzAlaals/9UFAltoZCQACgkQHzAlaals
/9U6cw//YhcXxppPq9rU7s5AYMd0cUOUUR/b+D652FzxvGSFbVknKzfKtBF4jftl
+NTP1zncEBoXegoppgBcN8Cee/qxn7zVSN8upBQdqPHmXZS2+hJ7e0npNWGz/nSy
d40U1WrVYYBGiujBeZlBP4h+yNkZmqoBzP5B9t+1em7SWZ3wmf5rtTzn2SL03dId
6ZK/wVxcR/BVZBvuHD38JbKWISEM5ZgPwYcF46pnAq9GA8p8IdUruYvP6Sx25naO
9JJqGSYQxOvoQQO8QFJT2nxNI0PYALo3HtmKmzOI1VriQcAI5wg4RHKNF6oN4j0X
KYnDN1yl3UF7ByEVQxuCWeudYy/NjDbEL0VQBf4BD3AYT6MhQKme881lmOW39ybX
GrAlwF9v3ZJAK73nOrNWocUPCLj/5ZlPGihs+JXfxdCRZm/FvvzpoVwSQ/u0cla3
SMGHePmXSy67Jy0ydCgTxXFDYkKKUofAwUyfBWTXTxBhF9tIylot/3eitjgsjs5X
YKkvop8E6YbwILy8snhaTtrEr56hBU6dN5HnrtcXl0TJ/6yVwTZ0AIMgmiLz2IEg
DJtZs1WHmOIpr+DmjhiHDiD/Tb2kI/I3zu2uf19U8Cu8GG0JZd4j7gm2UhVTuIef
HDb0/O6mAyxbACiA44aClETlR7r6x58Wccjng3FuexRYN+yPmxM=
=r/WM
-----END PGP SIGNATURE-----
---

This annotated tag includes the following new commits:

 new 4a28a63  Commit for release 1.6.0

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[flink] 01/01: Commit for release 1.6.0

2018-08-06 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to annotated tag release-1.6.0-rc3
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 4a28a63fc538f0b16c3d3cdf570d84a15488602f
Author: Till Rohrmann 
AuthorDate: Mon Aug 6 17:07:06 2018 +0200

Commit for release 1.6.0
---
 docs/_config.yml  | 8 
 flink-annotations/pom.xml | 2 +-
 flink-clients/pom.xml | 2 +-
 flink-connectors/flink-connector-cassandra/pom.xml| 2 +-
 flink-connectors/flink-connector-elasticsearch-base/pom.xml   | 2 +-
 flink-connectors/flink-connector-elasticsearch/pom.xml| 2 +-
 flink-connectors/flink-connector-elasticsearch2/pom.xml   | 2 +-
 flink-connectors/flink-connector-elasticsearch5/pom.xml   | 2 +-
 flink-connectors/flink-connector-elasticsearch6/pom.xml   | 2 +-
 flink-connectors/flink-connector-filesystem/pom.xml   | 2 +-
 flink-connectors/flink-connector-kafka-0.10/pom.xml   | 2 +-
 flink-connectors/flink-connector-kafka-0.11/pom.xml   | 2 +-
 flink-connectors/flink-connector-kafka-0.8/pom.xml| 2 +-
 flink-connectors/flink-connector-kafka-0.9/pom.xml| 2 +-
 flink-connectors/flink-connector-kafka-base/pom.xml   | 2 +-
 flink-connectors/flink-connector-kinesis/pom.xml  | 2 +-
 flink-connectors/flink-connector-nifi/pom.xml | 2 +-
 flink-connectors/flink-connector-rabbitmq/pom.xml | 2 +-
 flink-connectors/flink-connector-twitter/pom.xml  | 2 +-
 flink-connectors/flink-hadoop-compatibility/pom.xml   | 2 +-
 flink-connectors/flink-hbase/pom.xml  | 2 +-
 flink-connectors/flink-hcatalog/pom.xml   | 2 +-
 flink-connectors/flink-jdbc/pom.xml   | 2 +-
 flink-connectors/flink-orc/pom.xml| 2 +-
 flink-connectors/pom.xml  | 2 +-
 flink-container/pom.xml   | 2 +-
 flink-contrib/flink-connector-wikiedits/pom.xml   | 2 +-
 flink-contrib/flink-storm-examples/pom.xml| 2 +-
 flink-contrib/flink-storm/pom.xml | 2 +-
 flink-contrib/pom.xml | 2 +-
 flink-core/pom.xml| 2 +-
 flink-dist/pom.xml| 2 +-
 flink-docs/pom.xml| 2 +-
 flink-end-to-end-tests/flink-bucketing-sink-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-confluent-schema-registry/pom.xml| 2 +-
 flink-end-to-end-tests/flink-dataset-allround-test/pom.xml| 2 +-
 flink-end-to-end-tests/flink-datastream-allround-test/pom.xml | 2 +-
 .../flink-distributed-cache-via-blob-test/pom.xml | 2 +-
 flink-end-to-end-tests/flink-elasticsearch1-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-elasticsearch2-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-elasticsearch5-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-elasticsearch6-test/pom.xml  | 2 +-
 .../flink-high-parallelism-iterations-test/pom.xml| 2 +-
 .../flink-local-recovery-and-allocation-test/pom.xml  | 2 +-
 .../flink-parent-child-classloading-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-queryable-state-test/pom.xml | 2 +-
 flink-end-to-end-tests/flink-quickstart-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-sql-client-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-stream-sql-test/pom.xml  | 2 +-
 flink-end-to-end-tests/flink-stream-state-ttl-test/pom.xml| 2 +-
 .../flink-stream-stateful-job-upgrade-test/pom.xml| 2 +-
 flink-end-to-end-tests/flink-streaming-file-sink-test/pom.xml | 2 +-
 flink-end-to-end-tests/pom.xml| 2 +-
 flink-examples/flink-examples-batch/pom.xml   | 2 +-
 flink-examples/flink-examples-streaming/pom.xml   | 2 +-
 flink-examples/flink-examples-table/pom.xml   | 2 +-
 flink-examples/pom.xml| 2 +-
 flink-filesystems/flink-hadoop-fs/pom.xml | 2 +-
 flink-filesystems/flink-mapr-fs/pom.xml   | 2 +-
 flink-filesystems/flink-s3-fs-hadoop/pom.xml  | 2 +-
 flink-filesystems/flink-s3-fs-presto/pom.xml  | 2 +-
 flink-filesystems/flink-swift-fs-hadoop/pom.xml   | 2 +-
 flink-filesystems/pom.xml 

[flink] branch release-1.5 updated: [FLINK-10070][build] Downgrade git-commit-id-plugin

2018-08-06 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.5
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.5 by this push:
 new 2b1a062  [FLINK-10070][build] Downgrade git-commit-id-plugin
2b1a062 is described below

commit 2b1a062a754bd5fa7bb6386a086e1f2d3ec2ae81
Author: zentol 
AuthorDate: Mon Aug 6 13:47:44 2018 +0200

[FLINK-10070][build] Downgrade git-commit-id-plugin
---
 flink-dist/pom.xml|  1 -
 flink-runtime/pom.xml |  4 
 pom.xml   | 14 ++
 3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/flink-dist/pom.xml b/flink-dist/pom.xml
index 1118d43..a562b4e 100644
--- a/flink-dist/pom.xml
+++ b/flink-dist/pom.xml
@@ -589,7 +589,6 @@ under the License.

pl.project13.maven
git-commit-id-plugin
-   2.1.5



diff --git a/flink-runtime/pom.xml b/flink-runtime/pom.xml
index 02135f2..cb9e995 100644
--- a/flink-runtime/pom.xml
+++ b/flink-runtime/pom.xml
@@ -465,7 +465,6 @@ under the License.
Used to show the git ref when starting 
the jobManager. -->
pl.project13.maven
git-commit-id-plugin
-   2.1.14



@@ -479,9 +478,6 @@ under the License.
false

false

src/main/resources/.version.properties
-   
-   
git.commit.*
-   


true
diff --git a/pom.xml b/pom.xml
index 6521a2b..01f4d82 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1450,6 +1450,20 @@ under the License.
3.0.0

 
+   
+   pl.project13.maven
+   
git-commit-id-plugin
+
+   2.1.10
+   
+   
+   
git.build.*
+   
git.branch.*
+   
git.remote.*
+   
+   
+   
+


org.eclipse.m2e



[flink] branch release-1.6 updated: [FLINK-10070][build] Downgrade git-commit-id-plugin

2018-08-06 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.6 by this push:
 new 2531f6e  [FLINK-10070][build] Downgrade git-commit-id-plugin
2531f6e is described below

commit 2531f6e555c925de66fca20df0b477f178830e1c
Author: zentol 
AuthorDate: Mon Aug 6 13:47:44 2018 +0200

[FLINK-10070][build] Downgrade git-commit-id-plugin
---
 flink-dist/pom.xml|  1 -
 flink-runtime/pom.xml |  4 
 pom.xml   | 14 ++
 3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/flink-dist/pom.xml b/flink-dist/pom.xml
index 8bfbace..aa8d954 100644
--- a/flink-dist/pom.xml
+++ b/flink-dist/pom.xml
@@ -593,7 +593,6 @@ under the License.

pl.project13.maven
git-commit-id-plugin
-   2.1.5



diff --git a/flink-runtime/pom.xml b/flink-runtime/pom.xml
index 7c7b897..608c02b 100644
--- a/flink-runtime/pom.xml
+++ b/flink-runtime/pom.xml
@@ -464,7 +464,6 @@ under the License.
Used to show the git ref when starting 
the jobManager. -->
pl.project13.maven
git-commit-id-plugin
-   2.1.14



@@ -478,9 +477,6 @@ under the License.
false

false

src/main/resources/.version.properties
-   
-   
git.commit.*
-   


true
diff --git a/pom.xml b/pom.xml
index 4b251eb..afff930 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1426,6 +1426,20 @@ under the License.
3.0.0

 
+   
+   pl.project13.maven
+   
git-commit-id-plugin
+
+   2.1.10
+   
+   
+   
git.build.*
+   
git.branch.*
+   
git.remote.*
+   
+   
+   
+


org.eclipse.m2e



[flink] branch master updated: [FLINK-10070][build] Downgrade git-commit-id-plugin

2018-08-06 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new f34f24b  [FLINK-10070][build] Downgrade git-commit-id-plugin
f34f24b is described below

commit f34f24ba0f0b7faf4ffbad4dd4727253328534cf
Author: zentol 
AuthorDate: Mon Aug 6 13:47:44 2018 +0200

[FLINK-10070][build] Downgrade git-commit-id-plugin
---
 flink-dist/pom.xml|  1 -
 flink-runtime/pom.xml |  4 
 pom.xml   | 14 ++
 3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/flink-dist/pom.xml b/flink-dist/pom.xml
index 96c9346..d857a30 100644
--- a/flink-dist/pom.xml
+++ b/flink-dist/pom.xml
@@ -593,7 +593,6 @@ under the License.

pl.project13.maven
git-commit-id-plugin
-   2.1.5



diff --git a/flink-runtime/pom.xml b/flink-runtime/pom.xml
index bc4a3cb..3412fd4 100644
--- a/flink-runtime/pom.xml
+++ b/flink-runtime/pom.xml
@@ -471,7 +471,6 @@ under the License.
Used to show the git ref when starting 
the jobManager. -->
pl.project13.maven
git-commit-id-plugin
-   2.1.14



@@ -485,9 +484,6 @@ under the License.
false

false

src/main/resources/.version.properties
-   
-   
git.commit.*
-   


true
diff --git a/pom.xml b/pom.xml
index 35b57da..293dd3f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1432,6 +1432,20 @@ under the License.
3.0.0

 
+   
+   pl.project13.maven
+   
git-commit-id-plugin
+
+   2.1.10
+   
+   
+   
git.build.*
+   
git.branch.*
+   
git.remote.*
+   
+   
+   
+


org.eclipse.m2e



[flink] branch master updated: [FLINK-9504][logging] Change the log-level of checkpoint duration to debug

2018-08-06 Thread srichter
This is an automated email from the ASF dual-hosted git repository.

srichter pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b2a9131  [FLINK-9504][logging] Change the log-level of checkpoint 
duration to debug
b2a9131 is described below

commit b2a91310e1889116926026480c5e9a5e44c96e54
Author: minwenjun 
AuthorDate: Sat Jun 2 21:05:04 2018 +0800

[FLINK-9504][logging] Change the log-level of checkpoint duration to debug

This closes #6111.
---
 .../org/apache/flink/runtime/state/DefaultOperatorStateBackend.java | 4 ++--
 .../org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java  | 4 ++--
 .../flink/contrib/streaming/state/RocksDBKeyedStateBackend.java | 6 +++---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/DefaultOperatorStateBackend.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/DefaultOperatorStateBackend.java
index dfff50d..d9fc41e 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/DefaultOperatorStateBackend.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/DefaultOperatorStateBackend.java
@@ -459,7 +459,7 @@ public class DefaultOperatorStateBackend implements 
OperatorStateBackend {
}
 
if (asynchronousSnapshots) {
-   
LOG.info("DefaultOperatorStateBackend snapshot ({}, asynchronous part) in 
thread {} took {} ms.",
+   
LOG.debug("DefaultOperatorStateBackend snapshot ({}, asynchronous part) in 
thread {} took {} ms.",
streamFactory, 
Thread.currentThread(), (System.currentTimeMillis() - asyncStartTime));
}
 
@@ -474,7 +474,7 @@ public class DefaultOperatorStateBackend implements 
OperatorStateBackend {
task.run();
}
 
-   LOG.info("DefaultOperatorStateBackend snapshot ({}, synchronous 
part) in thread {} took {} ms.",
+   LOG.debug("DefaultOperatorStateBackend snapshot ({}, 
synchronous part) in thread {} took {} ms.",
streamFactory, Thread.currentThread(), 
(System.currentTimeMillis() - syncStartTime));
 
return task;
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java
index 6d2bfef..bc1e0f5 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java
@@ -643,7 +643,7 @@ public class HeapKeyedStateBackend extends 
AbstractKeyedStateBackend {
 
@Override
public void logOperationCompleted(CheckpointStreamFactory 
streamFactory, long startTime) {
-   LOG.info("Heap backend snapshot ({}, asynchronous part) 
in thread {} took {} ms.",
+   LOG.debug("Heap backend snapshot ({}, asynchronous 
part) in thread {} took {} ms.",
streamFactory, Thread.currentThread(), 
(System.currentTimeMillis() - startTime));
}
 
@@ -838,7 +838,7 @@ public class HeapKeyedStateBackend extends 
AbstractKeyedStateBackend {
 
finalizeSnapshotBeforeReturnHook(task);
 
-   LOG.info("Heap backend snapshot (" + 
primaryStreamFactory + ", synchronous part) in thread " +
+   LOG.debug("Heap backend snapshot (" + 
primaryStreamFactory + ", synchronous part) in thread " +
Thread.currentThread() + " took " + 
(System.currentTimeMillis() - syncStartTime) + " ms.");
 
return task;
diff --git 
a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java
 
b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java
index f7af354..bd0ddf2 100644
--- 
a/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java
+++ 
b/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java
@@ -1918,14 +1918,14 @@ public class RocksDBKeyedStateBackend extends 
AbstractKeyedStateBackend {
 

snapshotOperation.writeDBSnapshot();
 
-   LOG.info("Asynchronous RocksDB 
snapshot ({}, asynchronous part) in t
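The change above is small but the pattern is worth spelling out: snapshot durations are measured around the work and reported at DEBUG rather than INFO, so per-checkpoint timing lines do not flood production logs running at the default level. A self-contained sketch of that pattern (the level-threshold logger below is a stand-in, not Flink's SLF4J setup):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the logging pattern in the commit above: routine
// per-snapshot timing messages go to DEBUG, so setups running at INFO
// are not flooded. All names here are illustrative, not Flink's classes.
public class SnapshotTimingLog {
    enum Level { DEBUG, INFO }

    static Level threshold = Level.INFO;            // typical production setting
    static final List<String> emitted = new ArrayList<>();

    static void log(Level level, String message) {
        // Only emit messages at or above the configured threshold.
        if (level.ordinal() >= threshold.ordinal()) {
            emitted.add(level + ": " + message);
        }
    }

    static void snapshot(Runnable work) {
        long start = System.currentTimeMillis();
        work.run();
        long elapsed = System.currentTimeMillis() - start;
        // DEBUG, not INFO: one line per checkpoint is noise at scale.
        log(Level.DEBUG, "snapshot (synchronous part) took " + elapsed + " ms.");
    }

    public static void main(String[] args) {
        snapshot(() -> { });
        System.out.println("messages emitted at INFO threshold: " + emitted.size());
    }
}
```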

[flink] branch master updated (cd880fb -> a72d91b)

2018-08-06 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from cd880fb  [FLINK-7812][metrics] Add system resources metrics
 new 78254b3  [hotfix][docs] Specify operators behaviour on processing 
watermarks
 new a72d91b  [hotfix][docs] Document event time behaviour for idle sources

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/dev/event_time.md | 28 
 1 file changed, 28 insertions(+)



[flink] 01/02: [hotfix][docs] Specify operators behaviour on processing watermarks

2018-08-06 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 78254b3eee73e8d002dfd7e9afcc92e81e79d83e
Author: Piotr Nowojski 
AuthorDate: Fri May 25 09:55:04 2018 +0200

[hotfix][docs] Specify operators behaviour on processing watermarks
---
 docs/dev/event_time.md | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/docs/dev/event_time.md b/docs/dev/event_time.md
index 886bf22..e3090ad 100644
--- a/docs/dev/event_time.md
+++ b/docs/dev/event_time.md
@@ -180,6 +180,8 @@ Once a watermark reaches an operator, the operator can 
advance its internal *eve
 
 
 
+Note that event time is inherited by a freshly created stream element (or 
elements) from either the event that produced them or
+from the watermark that triggered the creation of those elements.
 
 ## Watermarks in Parallel Streams
 
@@ -219,4 +221,17 @@ with late elements in event time windows.
 Please refer to the [Debugging Windows & Event Time]({{ site.baseurl 
}}/monitoring/debugging_event_time.html) section for debugging
 watermarks at runtime.
 
+## How operators process watermarks
+
+As a general rule, operators are required to completely process a given 
watermark before forwarding it downstream. For example,
+`WindowOperator` will first evaluate which windows should be fired, and only 
after producing all of the output triggered by
+the watermark will the watermark itself be sent downstream. In other words, 
all elements produced due to the occurrence of a watermark
+will be emitted before the watermark.
+
+The same rule applies to `TwoInputStreamOperator`. However, in this case the 
current watermark of the operator is defined as
+the minimum of the watermarks of its two inputs.
+
+The details of this behavior are defined by the implementations of the 
`OneInputStreamOperator#processWatermark`,
+`TwoInputStreamOperator#processWatermark1` and 
`TwoInputStreamOperator#processWatermark2` methods.
+
 {% top %}
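The rule added to the docs above — emit all watermark-triggered output before forwarding the watermark itself — can be sketched with a toy tumbling-window operator. This is illustrative only, not Flink's actual `WindowOperator`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the ordering rule: an operator first emits all output triggered
// by a watermark, and only afterwards forwards the watermark downstream.
// Models a tumbling event-time window of size 10; not Flink's WindowOperator.
public class WatermarkOrdering {
    static final long WINDOW_SIZE = 10;

    final TreeMap<Long, Long> countsPerWindow = new TreeMap<>(); // window start -> count
    final List<String> output = new ArrayList<>();

    void processElement(long timestamp) {
        long windowStart = timestamp - (timestamp % WINDOW_SIZE);
        countsPerWindow.merge(windowStart, 1L, Long::sum);
    }

    void processWatermark(long watermark) {
        // Fire every window whose end is at or before the watermark...
        while (!countsPerWindow.isEmpty()
                && countsPerWindow.firstKey() + WINDOW_SIZE <= watermark) {
            Map.Entry<Long, Long> fired = countsPerWindow.pollFirstEntry();
            output.add("window[" + fired.getKey() + "," + (fired.getKey() + WINDOW_SIZE)
                    + "): count=" + fired.getValue());
        }
        // ...and only afterwards forward the watermark itself.
        output.add("watermark(" + watermark + ")");
    }

    public static void main(String[] args) {
        WatermarkOrdering op = new WatermarkOrdering();
        op.processElement(3);
        op.processElement(7);
        op.processElement(12);
        op.processWatermark(10);
        op.output.forEach(System.out::println);
    }
}
```

Running `main` prints the fired window result first and the watermark second, which is exactly the ordering guarantee the section describes.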



[flink] 02/02: [hotfix][docs] Document event time behaviour for idle sources

2018-08-06 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a72d91bed91692077577b176e97a5cb753a9c109
Author: Piotr Nowojski 
AuthorDate: Tue May 29 15:18:16 2018 +0200

[hotfix][docs] Document event time behaviour for idle sources

This closes #6076.
---
 docs/dev/event_time.md | 13 +
 1 file changed, 13 insertions(+)

diff --git a/docs/dev/event_time.md b/docs/dev/event_time.md
index e3090ad..1d747aa 100644
--- a/docs/dev/event_time.md
+++ b/docs/dev/event_time.md
@@ -215,6 +215,19 @@ arrive after the system's event time clock (as signaled by 
the watermarks) has a
 timestamp. See [Allowed Lateness]({{ site.baseurl 
}}/dev/stream/operators/windows.html#allowed-lateness) for more information on 
how to work
 with late elements in event time windows.
 
+## Idling sources
+
+Currently, with pure event time watermark generators, watermarks cannot 
progress if there are no elements
+to be processed. That means that in the case of a gap in the incoming data, 
event time will not progress and, for
+example, the window operator will not be triggered, and thus existing windows 
will not be able to produce any
+output data.
+
+To circumvent this, one can use periodic watermark assigners that do not 
assign watermarks based purely on
+element timestamps. An example solution could be an assigner that switches to 
using the current processing time
+as the time basis after not observing new events for a while.
+
+Sources can be marked as idle using 
`SourceFunction.SourceContext#markAsTemporarilyIdle`. For details please refer 
to the Javadoc of
+this method as well as `StreamStatus`.
 
 ## Debugging Watermarks
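The workaround described above — switching to processing time after a quiet period — can be sketched as a small stand-alone class. A real implementation would plug the same logic into Flink's `AssignerWithPeriodicWatermarks`; the class and method names below are illustrative:

```java
// Sketch of the idle-source workaround: a periodic watermark assigner that
// advances watermarks from processing time once no event has been seen for
// `idleTimeout` ms. Plain Java; in Flink this logic would live inside an
// AssignerWithPeriodicWatermarks implementation.
public class IdleAwareWatermarkAssigner {
    private final long idleTimeout;
    private long maxEventTimestamp = Long.MIN_VALUE;
    private long lastEventProcessingTime = Long.MIN_VALUE;

    IdleAwareWatermarkAssigner(long idleTimeout) {
        this.idleTimeout = idleTimeout;
    }

    /** Called for every element; {@code now} is the current processing time. */
    void onEvent(long eventTimestamp, long now) {
        maxEventTimestamp = Math.max(maxEventTimestamp, eventTimestamp);
        lastEventProcessingTime = now;
    }

    /** Polled periodically; returns the watermark to emit. */
    long getCurrentWatermark(long now) {
        if (lastEventProcessingTime != Long.MIN_VALUE
                && now - lastEventProcessingTime > idleTimeout) {
            // Source looks idle: fall back to processing time so that
            // downstream event-time windows can still fire.
            return Math.max(maxEventTimestamp, now - idleTimeout);
        }
        return maxEventTimestamp; // normal pure event-time behaviour
    }

    public static void main(String[] args) {
        IdleAwareWatermarkAssigner assigner = new IdleAwareWatermarkAssigner(1000);
        assigner.onEvent(42, 10_000);
        System.out.println(assigner.getCurrentWatermark(10_500)); // still event time: 42
        System.out.println(assigner.getCurrentWatermark(20_000)); // idle: 19000
    }
}
```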
 



[flink] 01/02: [hotfix][metrics] Replace anonymous classes with lambdas

2018-08-06 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ed58c34e9859e735c3fb542a8ebc577655d9aa46
Author: Piotr Nowojski 
AuthorDate: Tue Oct 10 17:24:21 2017 +0200

[hotfix][metrics] Replace anonymous classes with lambdas
---
 .../flink/runtime/metrics/util/MetricUtils.java| 115 -
 1 file changed, 20 insertions(+), 95 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/util/MetricUtils.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/util/MetricUtils.java
index 3fd268a..367979e 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/util/MetricUtils.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/util/MetricUtils.java
@@ -21,6 +21,7 @@ package org.apache.flink.runtime.metrics.util;
 import org.apache.flink.metrics.Gauge;
 import org.apache.flink.metrics.MetricGroup;
 import org.apache.flink.runtime.io.network.NetworkEnvironment;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
 import org.apache.flink.runtime.metrics.MetricRegistry;
 import org.apache.flink.runtime.metrics.groups.JobManagerMetricGroup;
 import org.apache.flink.runtime.metrics.groups.TaskManagerMetricGroup;
@@ -41,7 +42,7 @@ import javax.management.ReflectionException;
 import java.lang.management.ClassLoadingMXBean;
 import java.lang.management.GarbageCollectorMXBean;
 import java.lang.management.ManagementFactory;
-import java.lang.management.MemoryMXBean;
+import java.lang.management.MemoryUsage;
 import java.lang.management.ThreadMXBean;
 import java.util.List;
 
@@ -105,37 +106,16 @@ public class MetricUtils {
private static void instantiateNetworkMetrics(
MetricGroup metrics,
final NetworkEnvironment network) {
-		metrics.<Long, Gauge<Long>>gauge("TotalMemorySegments", new Gauge<Long>() {
-			@Override
-			public Long getValue() {
-				return (long) network.getNetworkBufferPool().getTotalNumberOfMemorySegments();
-			}
-		});
 
-		metrics.<Long, Gauge<Long>>gauge("AvailableMemorySegments", new Gauge<Long>() {
-			@Override
-			public Long getValue() {
-				return (long) network.getNetworkBufferPool().getNumberOfAvailableMemorySegments();
-			}
-		});
+		final NetworkBufferPool networkBufferPool = network.getNetworkBufferPool();
+		metrics.<Long, Gauge<Long>>gauge("TotalMemorySegments", networkBufferPool::getTotalNumberOfMemorySegments);
+		metrics.<Long, Gauge<Long>>gauge("AvailableMemorySegments", networkBufferPool::getNumberOfAvailableMemorySegments);
}
 
private static void instantiateClassLoaderMetrics(MetricGroup metrics) {
final ClassLoadingMXBean mxBean = 
ManagementFactory.getClassLoadingMXBean();
-
-		metrics.<Long, Gauge<Long>>gauge("ClassesLoaded", new Gauge<Long>() {
-			@Override
-			public Long getValue() {
-				return mxBean.getTotalLoadedClassCount();
-			}
-		});
-
-		metrics.<Long, Gauge<Long>>gauge("ClassesUnloaded", new Gauge<Long>() {
-			@Override
-			public Long getValue() {
-				return mxBean.getUnloadedClassCount();
-			}
-		});
+		metrics.<Long, Gauge<Long>>gauge("ClassesLoaded", mxBean::getTotalLoadedClassCount);
+		metrics.<Long, Gauge<Long>>gauge("ClassesUnloaded", mxBean::getUnloadedClassCount);
}
 
private static void instantiateGarbageCollectorMetrics(MetricGroup 
metrics) {
@@ -144,66 +124,26 @@ public class MetricUtils {
for (final GarbageCollectorMXBean garbageCollector: 
garbageCollectors) {
MetricGroup gcGroup = 
metrics.addGroup(garbageCollector.getName());
 
-			gcGroup.<Long, Gauge<Long>>gauge("Count", new Gauge<Long>() {
-				@Override
-				public Long getValue() {
-					return garbageCollector.getCollectionCount();
-				}
-			});
-
-			gcGroup.<Long, Gauge<Long>>gauge("Time", new Gauge<Long>() {
-				@Override
-				public Long getValue() {
-					return garbageCollector.getCollectionTime();
-				}
-			});
+			gcGroup.<Long, Gauge<Long>>gauge("Count", garbageCollector::getCollectionCount);
+			gcGroup.<Long, Gauge<Long>>gauge("Time", garbageCollector::getCollectionTime);
}
}
 
private static void instantiateMemoryMetrics(MetricGroup metrics) {
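The refactoring in this commit relies on `Gauge` being a functional interface, so an anonymous inner class and a method reference are interchangeable. A self-contained sketch of the before/after (the `Gauge` and `MetricGroupSketch` types below are stand-ins, not Flink's API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the refactoring above: a single-method interface like Flink's
// Gauge can be implemented by an anonymous class or, equivalently and more
// concisely, by a method reference. MetricGroupSketch is a stand-in.
public class GaugeLambdas {
    interface Gauge<T> { T getValue(); }

    static class MetricGroupSketch {
        final Map<String, Gauge<?>> gauges = new HashMap<>();
        <T> void gauge(String name, Gauge<T> gauge) { gauges.put(name, gauge); }
    }

    static long loadedClassCount() { return 1234L; }

    public static void main(String[] args) {
        MetricGroupSketch metrics = new MetricGroupSketch();

        // Before: verbose anonymous inner class.
        metrics.gauge("ClassesLoadedOld", new Gauge<Long>() {
            @Override
            public Long getValue() {
                return loadedClassCount();
            }
        });

        // After: a method reference does the same in one line.
        metrics.gauge("ClassesLoadedNew", GaugeLambdas::loadedClassCount);

        System.out.println(metrics.gauges.get("ClassesLoadedOld").getValue());
        System.out.println(metrics.gauges.get("ClassesLoadedNew").getValue());
    }
}
```

Both registrations report the same value; the behaviour is unchanged, which is why the commit is a pure hotfix.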

[flink] branch master updated (68337d0 -> cd880fb)

2018-08-06 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 68337d0  [FLINK-9562][optimizer] Fix typos/comments
 new ed58c34  [hotfix][metrics] Replace anonymous classes with lambdas
 new cd880fb  [FLINK-7812][metrics] Add system resources metrics

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/_includes/generated/metric_configuration.html |  10 +
 docs/monitoring/metrics.md | 144 +
 .../flink/configuration/ConfigurationUtils.java|  19 ++
 .../apache/flink/configuration/MetricOptions.java  |  14 ++
 flink-runtime/pom.xml  |   7 +
 .../runtime/entrypoint/ClusterEntrypoint.java  |   5 +-
 .../flink/runtime/metrics/util/MetricUtils.java| 131 +++-
 .../metrics/util/SystemResourcesCounter.java   | 236 +
 .../util/SystemResourcesMetricsInitializer.java| 101 +
 .../flink/runtime/minicluster/MiniCluster.java |   6 +-
 .../runtime/taskexecutor/TaskManagerRunner.java|   3 +-
 .../TaskManagerServicesConfiguration.java  |  19 +-
 .../flink/runtime/jobmanager/JobManager.scala  |   3 +-
 .../minicluster/LocalFlinkMiniCluster.scala|   5 +-
 .../flink/runtime/taskmanager/TaskManager.scala|   3 +-
 .../runtime/metrics/TaskManagerMetricsTest.java|   3 +-
 .../metrics/utils/SystemResourcesCounterTest.java  |  71 +++
 .../taskexecutor/NetworkBufferCalculationTest.java |   4 +-
 flink-tests/pom.xml|   6 +
 .../metrics/SystemResourcesMetricsITCase.java  | 142 +
 pom.xml|   8 +-
 21 files changed, 831 insertions(+), 109 deletions(-)
 create mode 100644 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/util/SystemResourcesCounter.java
 create mode 100644 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/util/SystemResourcesMetricsInitializer.java
 create mode 100644 
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/utils/SystemResourcesCounterTest.java
 create mode 100644 
flink-tests/src/test/java/org/apache/flink/runtime/metrics/SystemResourcesMetricsITCase.java



[flink] 02/02: [FLINK-7812][metrics] Add system resources metrics

2018-08-06 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit cd880fb69e51dd37f58614355be8a1a65d44540f
Author: Piotr Nowojski 
AuthorDate: Wed Oct 11 09:10:49 2017 +0200

[FLINK-7812][metrics] Add system resources metrics

This closes #4801.
---
 docs/_includes/generated/metric_configuration.html |  10 +
 docs/monitoring/metrics.md | 144 +
 .../flink/configuration/ConfigurationUtils.java|  19 ++
 .../apache/flink/configuration/MetricOptions.java  |  14 ++
 flink-runtime/pom.xml  |   7 +
 .../runtime/entrypoint/ClusterEntrypoint.java  |   5 +-
 .../flink/runtime/metrics/util/MetricUtils.java|  16 +-
 .../metrics/util/SystemResourcesCounter.java   | 236 +
 .../util/SystemResourcesMetricsInitializer.java| 101 +
 .../flink/runtime/minicluster/MiniCluster.java |   6 +-
 .../runtime/taskexecutor/TaskManagerRunner.java|   3 +-
 .../TaskManagerServicesConfiguration.java  |  19 +-
 .../flink/runtime/jobmanager/JobManager.scala  |   3 +-
 .../minicluster/LocalFlinkMiniCluster.scala|   5 +-
 .../flink/runtime/taskmanager/TaskManager.scala|   3 +-
 .../runtime/metrics/TaskManagerMetricsTest.java|   3 +-
 .../metrics/utils/SystemResourcesCounterTest.java  |  71 +++
 .../taskexecutor/NetworkBufferCalculationTest.java |   4 +-
 flink-tests/pom.xml|   6 +
 .../metrics/SystemResourcesMetricsITCase.java  | 142 +
 pom.xml|   8 +-
 21 files changed, 811 insertions(+), 14 deletions(-)

diff --git a/docs/_includes/generated/metric_configuration.html 
b/docs/_includes/generated/metric_configuration.html
index aef8fbb..98054e9 100644
--- a/docs/_includes/generated/metric_configuration.html
+++ b/docs/_includes/generated/metric_configuration.html
@@ -67,5 +67,15 @@
 ".taskmanager.."
 Defines the scope format string that is applied to all metrics 
scoped to a job on a TaskManager.
 
+
+metrics.system-resource
+false
+
+
+
+metrics.system-resource-probing-interval
+5000
+
+
 
 
diff --git a/docs/monitoring/metrics.md b/docs/monitoring/metrics.md
index 55f626e..554e1c5 100644
--- a/docs/monitoring/metrics.md
+++ b/docs/monitoring/metrics.md
@@ -1396,6 +1396,150 @@ Thus, in order to infer the metric identifier:
   
 
 
+### System resources
+
+System resources reporting is disabled by default. When 
`metrics.system-resource`
+is enabled, the additional metrics listed below will be available on the Job- and 
TaskManager.
+System resources metrics are updated periodically and present average 
values over the
+configured interval (`metrics.system-resource-probing-interval`).
+
+System resources reporting requires an optional dependency to be present on the
+classpath (for example placed in Flink's `lib` directory):
+
+  - `com.github.oshi:oshi-core:3.4.0` (licensed under EPL 1.0 license)
+
+Including its transitive dependencies:
+
+  - `net.java.dev.jna:jna-platform:jar:4.2.2`
+  - `net.java.dev.jna:jna:jar:4.2.2`
+
+Failures in this regard will be reported as warning messages such as 
`NoClassDefFoundError`
+logged by `SystemResourcesMetricsInitializer` during startup.
+
+ System CPU
+
+
+  
+
+  Scope
+  Infix
+  Metrics
+  Description
+
+  
+  
+
+  Job-/TaskManager
+  System.CPU
+  Usage
+  Overall % of CPU usage on the machine.
+
+
+  Idle
+  % of CPU Idle usage on the machine.
+
+
+  Sys
+  % of System CPU usage on the machine.
+
+
+  User
+  % of User CPU usage on the machine.
+
+
+  IOWait
+  % of IOWait CPU usage on the machine.
+
+
+  Irq
+  % of Irq CPU usage on the machine.
+
+
+  SoftIrq
+  % of SoftIrq CPU usage on the machine.
+
+
+  Nice
% of Nice CPU usage on the machine.
+
+
+  Load1min
+  Average CPU load over 1 minute
+
+
+  Load5min
Average CPU load over 5 minutes
+
+
+  Load15min
Average CPU load over 15 minutes
+
+
+  UsageCPU*
+  % of CPU usage per each processor
+
+  
+
+
+ System memory
+
+
+  
+
+  Scope
+  Infix
+  Metrics
+  Description
+
+  
+  
+
+  Job-/TaskManager
+  System.Memory
+  Available
+  Available memory in bytes
+
+
+  Total
+  Total memory in bytes
+
+
+  System.Swap
+  Used
+  Used swap bytes
+
+
+  Total
+  Total swap in bytes
+
+  
+
+
+ System network
+
+
+  
+
+  Scope
+  Infix
+  Metrics
+  Description
+
+

[flink] branch master updated: [FLINK-9562][optimizer] Fix typos/comments

2018-08-06 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 68337d0  [FLINK-9562][optimizer] Fix typos/comments
68337d0 is described below

commit 68337d04d4ab6a57cc0d91d8c740e8e5d07eeb1e
Author: Alex Arkhipov 
AuthorDate: Mon Aug 6 14:05:49 2018 +0300

[FLINK-9562][optimizer] Fix typos/comments
---
 .../src/main/java/org/apache/flink/optimizer/costs/Costs.java| 2 +-
 .../main/java/org/apache/flink/optimizer/dag/DagConnection.java  | 9 ++---
 .../java/org/apache/flink/optimizer/dag/SortPartitionNode.java   | 2 +-
 .../main/java/org/apache/flink/optimizer/dag/TwoInputNode.java   | 2 +-
 4 files changed, 9 insertions(+), 6 deletions(-)

diff --git 
a/flink-optimizer/src/main/java/org/apache/flink/optimizer/costs/Costs.java 
b/flink-optimizer/src/main/java/org/apache/flink/optimizer/costs/Costs.java
index 7c854bf..3b8f6f7 100644
--- a/flink-optimizer/src/main/java/org/apache/flink/optimizer/costs/Costs.java
+++ b/flink-optimizer/src/main/java/org/apache/flink/optimizer/costs/Costs.java
@@ -427,7 +427,7 @@ public class Costs implements Comparable, Cloneable {
return 1;
}

-   // next, check the disk cost. again, if we have actual costs on 
both, use them, otherwise use the heuristic costs.
+   // next, check the CPU cost. again, if we have actual costs on 
both, use them, otherwise use the heuristic costs.
if (this.cpuCost != UNKNOWN && o.cpuCost != UNKNOWN) {
return this.cpuCost < o.cpuCost ? -1 : this.cpuCost > 
o.cpuCost ? 1 : 0;
} else if (this.heuristicCpuCost < o.heuristicCpuCost) {
diff --git 
a/flink-optimizer/src/main/java/org/apache/flink/optimizer/dag/DagConnection.java
 
b/flink-optimizer/src/main/java/org/apache/flink/optimizer/dag/DagConnection.java
index 1f98a11..bd88234 100644
--- 
a/flink-optimizer/src/main/java/org/apache/flink/optimizer/dag/DagConnection.java
+++ 
b/flink-optimizer/src/main/java/org/apache/flink/optimizer/dag/DagConnection.java
@@ -54,13 +54,14 @@ public class DagConnection implements EstimateProvider, DumpableConnection<OptimizerNode> {
-	 * Creates a new Connection between two nodes. The shipping strategy is by default <tt>NONE</tt>.
-	 * The temp mode is by default NONE.
+	 * Creates a new Connection between two nodes. The shipping strategy is by default null.
 * 
 * @param source
 *The source node.
 * @param target
 *The target node.
+* @param exchangeMode
+*The data exchange mode (pipelined / batch / batch only for 
shuffles / ... )
 */
public DagConnection(OptimizerNode source, OptimizerNode target, 
ExecutionMode exchangeMode) {
this(source, target, null, exchangeMode);
@@ -96,10 +97,12 @@ public class DagConnection implements EstimateProvider, 
DumpableConnection
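The corrected comment describes `Costs#compareTo`'s tiered scheme: dimensions are compared one at a time (network, then disk, then CPU), preferring actual costs when both sides are known and falling back to heuristic costs otherwise. A hedged sketch of that shape (this mirrors the structure, not the actual `Costs` class):

```java
// Sketch of the tiered comparison the fixed comment describes: compare one
// cost dimension at a time (network, disk, CPU); within a dimension prefer
// actual costs when both are known, otherwise fall back to heuristic costs.
// Illustrative only; not Flink's org.apache.flink.optimizer.costs.Costs.
public class TieredCosts implements Comparable<TieredCosts> {
    static final double UNKNOWN = -1;

    final double networkCost, heuristicNetworkCost;
    final double diskCost, heuristicDiskCost;
    final double cpuCost, heuristicCpuCost;

    TieredCosts(double net, double hNet, double disk, double hDisk, double cpu, double hCpu) {
        this.networkCost = net; this.heuristicNetworkCost = hNet;
        this.diskCost = disk; this.heuristicDiskCost = hDisk;
        this.cpuCost = cpu; this.heuristicCpuCost = hCpu;
    }

    private static int compareDimension(double a, double b, double hA, double hB) {
        if (a != UNKNOWN && b != UNKNOWN) {
            return Double.compare(a, b);      // both actual costs known
        }
        return Double.compare(hA, hB);        // fall back to heuristics
    }

    @Override
    public int compareTo(TieredCosts o) {
        int c = compareDimension(networkCost, o.networkCost, heuristicNetworkCost, o.heuristicNetworkCost);
        if (c != 0) return c;
        c = compareDimension(diskCost, o.diskCost, heuristicDiskCost, o.heuristicDiskCost);
        if (c != 0) return c;
        // only when network and disk tie does the CPU cost decide
        return compareDimension(cpuCost, o.cpuCost, heuristicCpuCost, o.heuristicCpuCost);
    }
}
```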

[flink] branch release-1.6 updated: [FLINK-10051][tests][sql] Add missing dependencies for sql client E2E test

2018-08-06 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.6 by this push:
 new 18a5a43  [FLINK-10051][tests][sql] Add missing dependencies for sql 
client E2E test
18a5a43 is described below

commit 18a5a439312deb46c306ff44e9be2106cae06b8f
Author: Chesnay 
AuthorDate: Mon Aug 6 10:56:35 2018 +0200

[FLINK-10051][tests][sql] Add missing dependencies for sql client E2E test
---
 .../flink-sql-client-test/pom.xml  | 63 +-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/flink-end-to-end-tests/flink-sql-client-test/pom.xml 
b/flink-end-to-end-tests/flink-sql-client-test/pom.xml
index 17a144f..c64fe28 100644
--- a/flink-end-to-end-tests/flink-sql-client-test/pom.xml
+++ b/flink-end-to-end-tests/flink-sql-client-test/pom.xml
@@ -42,8 +42,67 @@ under the License.
scala-compiler
provided

+
+   
+   
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-avro</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-json</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-kafka-0.9_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-kafka-0.11_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>

 
+   
+   
+   
+   
+   org.apache.kafka
+   kafka-clients
+   0.11.0.2
+   
+   
+   
+



@@ -76,7 +135,9 @@ under the License.



${project.build.directory}/sql-jars
-   
+   



org.apache.flink



[flink] branch master updated: [FLINK-10051][tests][sql] Add missing dependencies for sql client E2E test

2018-08-06 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 75f39ce  [FLINK-10051][tests][sql] Add missing dependencies for sql 
client E2E test
75f39ce is described below

commit 75f39ce8b5c792fdeadef9fc117c04ab17660d40
Author: Chesnay 
AuthorDate: Mon Aug 6 10:56:35 2018 +0200

[FLINK-10051][tests][sql] Add missing dependencies for sql client E2E test
---
 .../flink-sql-client-test/pom.xml  | 63 +-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/flink-end-to-end-tests/flink-sql-client-test/pom.xml 
b/flink-end-to-end-tests/flink-sql-client-test/pom.xml
index 46bc2ba..ec5a0e1 100644
--- a/flink-end-to-end-tests/flink-sql-client-test/pom.xml
+++ b/flink-end-to-end-tests/flink-sql-client-test/pom.xml
@@ -42,8 +42,67 @@ under the License.
scala-compiler
provided

+
+   
+   
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-avro</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-json</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-kafka-0.9_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-kafka-0.11_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<classifier>sql-jar</classifier>
+			<scope>provided</scope>
+		</dependency>

 
+   
+   
+   
+   
+   org.apache.kafka
+   kafka-clients
+   0.11.0.2
+   
+   
+   
+



@@ -76,7 +135,9 @@ under the License.



${project.build.directory}/sql-jars
-   
+   



org.apache.flink



[flink] branch release-1.6 updated: [FLINK-10064] [table] Fix a typo in ExternalCatalogTable

2018-08-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.6 by this push:
 new 8c9b59e  [FLINK-10064] [table] Fix a typo in ExternalCatalogTable
8c9b59e is described below

commit 8c9b59e79d35842a771214302dad75c1e99da682
Author: jerryjzhang 
AuthorDate: Mon Aug 6 01:01:12 2018 +0800

[FLINK-10064] [table] Fix a typo in ExternalCatalogTable

This closes #6497.
---
 .../scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala
index 79da852..9576f34 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala
@@ -88,7 +88,7 @@ class ExternalCatalogTable(
 * Returns whether this external table is declared as table sink.
 */
   def isTableSink: Boolean = {
-isSource
+isSink
   }
 
   /**



[flink] branch master updated: [FLINK-10064] [table] Fix a typo in ExternalCatalogTable

2018-08-06 Thread twalthr
This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 4b3dfb5  [FLINK-10064] [table] Fix a typo in ExternalCatalogTable
4b3dfb5 is described below

commit 4b3dfb571e3b64b8fe340b29aa0d9edf1ce3fef5
Author: jerryjzhang 
AuthorDate: Mon Aug 6 01:01:12 2018 +0800

[FLINK-10064] [table] Fix a typo in ExternalCatalogTable

This closes #6497.
---
 .../scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala
 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala
index 79da852..9576f34 100644
--- 
a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala
+++ 
b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogTable.scala
@@ -88,7 +88,7 @@ class ExternalCatalogTable(
 * Returns whether this external table is declared as table sink.
 */
   def isTableSink: Boolean = {
-isSource
+isSink
   }
 
   /**
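The one-character fix above (returning `isSink` instead of `isSource`) is exactly the kind of copy-paste bug a minimal accessor test catches. A plain-Java stand-in (the Scala original is `ExternalCatalogTable`; the class below is hypothetical):

```java
// The fix above corrects a copy-paste bug where isTableSink() returned the
// source flag. A minimal stand-in (not the actual ExternalCatalogTable),
// plus the kind of assertion that catches such mismatched accessors.
public class CatalogTableSketch {
    private final boolean isSource;
    private final boolean isSink;

    CatalogTableSketch(boolean isSource, boolean isSink) {
        this.isSource = isSource;
        this.isSink = isSink;
    }

    boolean isTableSource() { return isSource; }

    // Before the fix this returned isSource; each accessor must read its own flag.
    boolean isTableSink() { return isSink; }

    public static void main(String[] args) {
        CatalogTableSketch sinkOnly = new CatalogTableSketch(false, true);
        System.out.println(sinkOnly.isTableSource()); // false
        System.out.println(sinkOnly.isTableSink());   // true
    }
}
```

Testing a sink-only and a source-only instance distinguishes the two flags, which a table using identical values for both cannot do.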