Build failed in Jenkins: flink-snapshot-deployment-1.7 #291

2019-08-26 Thread Apache Jenkins Server
See 


--
[...truncated 501.30 KB...]
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing 

 with 

[INFO] Replacing original test artifact with shaded test artifact.
[INFO] Replacing 

 with 

[INFO] Dependency-reduced POM written at: 

[INFO] 
[INFO] >>> maven-source-plugin:2.2.1:jar (attach-sources) > generate-sources @ 
flink-scala_2.11 >>>
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (validate) @ flink-scala_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven-version) @ 
flink-scala_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven) @ 
flink-scala_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-versions) @ 
flink-scala_2.11 ---
[INFO] 
[INFO] --- directory-maven-plugin:0.1:highest-basedir (directories) @ 
flink-scala_2.11 ---
[INFO] Highest basedir set to: 

[INFO] 
[INFO] --- build-helper-maven-plugin:1.7:add-source (add-source) @ 
flink-scala_2.11 ---
[INFO] Source directory: 

 added.
[INFO] 
[INFO] <<< maven-source-plugin:2.2.1:jar (attach-sources) < generate-sources @ 
flink-scala_2.11 <<<
[INFO] 
[INFO] --- maven-source-plugin:2.2.1:jar (attach-sources) @ flink-scala_2.11 ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-javadoc-plugin:2.9.1:jar (attach-javadocs) @ flink-scala_2.11 
---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (integration-tests) @ 
flink-scala_2.11 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- scalastyle-maven-plugin:1.0.0:check (default) @ flink-scala_2.11 ---
Saving to 
outputFile=
Processed 103 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1843 ms
[INFO] 
[INFO] --- japicmp-maven-plugin:0.11.0:cmp (default) @ flink-scala_2.11 ---
[INFO] Downloading: 
https://repo.maven.apache.org/maven2/org/apache/flink/flink-scala_2.11/1.6.2/flink-scala_2.11-1.6.2.jar
[INFO] Downloaded: 
https://repo.maven.apache.org/maven2/org/apache/flink/flink-scala_2.11/1.6.2/flink-scala_2.11-1.6.2.jar
 (775 KB at 12288.7 KB/sec)
[INFO] Written file 
'
[INFO] Written file 
'
[INFO] Written file 
'
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
flink-scala_2.11 ---
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT.jar
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT.pom
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT-tests.jar
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/flink/flink-scala_2.11/1.7-SNAPSHOT/flink-scala_2.11-1.7-SNAPSHOT-tests.jar
[INFO] Installing 

 to 

buildbot failure in on flink-docs-master

2019-08-26 Thread buildbot
The Buildbot has detected a new failure on builder flink-docs-master while 
building . Full details are available at:
https://ci.apache.org/builders/flink-docs-master/builds/1577

Buildbot URL: https://ci.apache.org/

Buildslave for this Build: bb_slave2_ubuntu

Build Reason: The Nightly scheduler named 'flink-nightly-docs-master' triggered 
this build
Build Source Stamp: [branch master] HEAD
Blamelist: 

BUILD FAILED: failed Build docs

Sincerely,
 -The Buildbot





[flink] 03/06: [hotfix][table api] Fix logger arguments in CatalogManager

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ad6c46eb1321dfc4319d012f1a116ead85f3b25a
Author: Jeff Zhang 
AuthorDate: Fri Aug 9 14:42:58 2019 +0800

[hotfix][table api] Fix logger arguments in CatalogManager

This closes #9401
---
 .../src/main/java/org/apache/flink/table/catalog/CatalogManager.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
index 5933487..aef90cc 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
@@ -294,8 +294,8 @@ public class CatalogManager {
 
LOG.info(
"Set the current default database as [{}] in 
the current default catalog [{}].",
-   currentCatalogName,
-   currentDatabaseName);
+   currentDatabaseName,
+   currentCatalogName);
}
}
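
For context: SLF4J fills `{}` placeholders strictly in argument order, so
swapping the arguments silently swaps the values in the log line; nothing
fails at compile time. A minimal self-contained sketch of the before/after
behavior (hypothetical logger setup, not the Flink class itself):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlaceholderOrder {
    private static final Logger LOG = LoggerFactory.getLogger(PlaceholderOrder.class);

    public static void main(String[] args) {
        String currentCatalogName = "myCatalog";
        String currentDatabaseName = "myDatabase";

        // Before the fix: logs "... database as [myCatalog] ... catalog [myDatabase]."
        LOG.info("Set the current default database as [{}] in the current default catalog [{}].",
                currentCatalogName, currentDatabaseName);

        // After the fix: arguments match the placeholder order in the message.
        LOG.info("Set the current default database as [{}] in the current default catalog [{}].",
                currentDatabaseName, currentCatalogName);
    }
}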
 



[flink] 05/06: [hotfix][docs] Add documentation regarding path style access for s3

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 4fc755e9f0f17ac7bdab2f645cfc24f319a5b298
Author: Achyuth Samudrala 
AuthorDate: Mon Aug 19 08:29:13 2019 +0200

[hotfix][docs] Add documentation regarding path style access for s3

This closes #9479

[ci skip]
---
 docs/ops/filesystems/s3.md| 8 
 docs/ops/filesystems/s3.zh.md | 8 
 2 files changed, 16 insertions(+)

diff --git a/docs/ops/filesystems/s3.md b/docs/ops/filesystems/s3.md
index f601b46..9794723 100644
--- a/docs/ops/filesystems/s3.md
+++ b/docs/ops/filesystems/s3.md
@@ -109,6 +109,14 @@ To do so, configure your endpoint in `flink-conf.yaml`.
 s3.endpoint: your-endpoint-hostname
 {% endhighlight %}
 
+## Configure Path Style Access
+
+Some S3-compliant object stores might not have virtual host style 
addressing enabled by default. In such cases, you will have to set the 
property that enables path style access in `flink-conf.yaml`.
+
+{% highlight yaml %}
+s3.path.style.access: true
+{% endhighlight %}
+
 ## Entropy injection for S3 file systems
 
 The bundled S3 file systems (`flink-s3-fs-presto` and `flink-s3-fs-hadoop`) 
support entropy injection. Entropy injection is
diff --git a/docs/ops/filesystems/s3.zh.md b/docs/ops/filesystems/s3.zh.md
index f601b46..9794723 100644
--- a/docs/ops/filesystems/s3.zh.md
+++ b/docs/ops/filesystems/s3.zh.md
@@ -109,6 +109,14 @@ To do so, configure your endpoint in `flink-conf.yaml`.
 s3.endpoint: your-endpoint-hostname
 {% endhighlight %}
 
+## Configure Path Style Access
+
+Some S3-compliant object stores might not have virtual host style 
addressing enabled by default. In such cases, you will have to set the 
property that enables path style access in `flink-conf.yaml`.
+
+{% highlight yaml %}
+s3.path.style.access: true
+{% endhighlight %}
+
 ## Entropy injection for S3 file systems
 
 The bundled S3 file systems (`flink-s3-fs-presto` and `flink-s3-fs-hadoop`) 
support entropy injection. Entropy injection is
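
To make the fix above concrete: with virtual host style addressing the bucket
name is part of the hostname (e.g. http://my-bucket.your-endpoint-hostname/key),
while path style addressing keeps it in the path
(http://your-endpoint-hostname/my-bucket/key). The flag is cluster-side
configuration only, so job code does not change. A minimal sketch with a
hypothetical bucket and path:

import org.apache.flink.api.java.ExecutionEnvironment;

public class S3ReadJob {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // The s3:// URI looks the same whether the cluster uses path style or
        // virtual host style access; s3.path.style.access only changes how the
        // S3 file system client builds its request URLs.
        env.readTextFile("s3://my-bucket/input/data.txt")
           .first(10)
           .print();
    }
}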



[flink] 01/06: [hotfix][docs] Correct method name in KeyedStateReaderFunction example

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ccb36ee312f2c13618b74c54024a25cf91edf36c
Author: David Anderson 
AuthorDate: Fri Aug 23 10:29:23 2019 +0200

[hotfix][docs] Correct method name in KeyedStateReaderFunction example

This closes #9520

[ci skip]
---
 docs/dev/libs/state_processor_api.md| 2 +-
 docs/dev/libs/state_processor_api.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/dev/libs/state_processor_api.md 
b/docs/dev/libs/state_processor_api.md
index 676ac49..75a6f12 100644
--- a/docs/dev/libs/state_processor_api.md
+++ b/docs/dev/libs/state_processor_api.md
@@ -290,7 +290,7 @@ class ReaderFunction extends 
KeyedStateReaderFunction<Integer, KeyedState> {
   }
  
   @Override
-  public void processKey(
+  public void readKey(
 Integer key,
 Context ctx,
Collector<KeyedState> out) throws Exception {
diff --git a/docs/dev/libs/state_processor_api.zh.md 
b/docs/dev/libs/state_processor_api.zh.md
index 676ac49..75a6f12 100644
--- a/docs/dev/libs/state_processor_api.zh.md
+++ b/docs/dev/libs/state_processor_api.zh.md
@@ -290,7 +290,7 @@ class ReaderFunction extends 
KeyedStateReaderFunction<Integer, KeyedState> {
   }
  
   @Override
-  public void processKey(
+  public void readKey(
 Integer key,
 Context ctx,
Collector<KeyedState> out) throws Exception {
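
For reference, a minimal self-contained sketch of the corrected callback
(assuming the Flink 1.9 State Processor API; the state name and types are
illustrative):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.util.Collector;

class ReaderFunction extends KeyedStateReaderFunction<Integer, Integer> {

    private ValueState<Integer> state;

    @Override
    public void open(Configuration parameters) {
        state = getRuntimeContext().getState(
                new ValueStateDescriptor<>("state", Types.INT));
    }

    @Override
    public void readKey(Integer key, Context ctx, Collector<Integer> out) throws Exception {
        // readKey (not processKey) is invoked once for every key in the savepoint.
        out.collect(state.value());
    }
}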



[flink] 02/06: [hotfix][JavaDocs] Correct comment in KeyedStream

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 680c87cc557e2b71b654a79a3dd2050e41df7691
Author: stayhsfLee 
AuthorDate: Thu Aug 8 21:42:46 2019 +0800

[hotfix][JavaDocs] Correct comment in KeyedStream

This closes #9395

[ci skip]
---
 .../java/org/apache/flink/streaming/api/datastream/KeyedStream.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/KeyedStream.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/KeyedStream.java
index 84df716..8c7937d 100644
--- 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/KeyedStream.java
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/KeyedStream.java
@@ -793,7 +793,7 @@ public class KeyedStream<T, KEY> extends DataStream<T> {
 * per key.
 *
 * @param positionToMax
-*The field position in the data points to minimize. This 
is applicable to
+*The field position in the data points to maximize. This 
is applicable to
 *Tuple types, Scala case classes, and primitive types 
(which is considered
 *as having one field).
 * @return The transformed DataStream.
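
A minimal sketch of the corrected semantics (hypothetical elements): per key,
maxBy keeps the element whose field at positionToMax is largest, i.e. it
maximizes rather than minimizes.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MaxByExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(Tuple2.of("a", 1), Tuple2.of("a", 5), Tuple2.of("b", 3))
           .keyBy(0)   // key by field 0
           .maxBy(1)   // per key, keep the element with the largest field 1
           .print();   // emits the running maximum element per key

        env.execute("maxBy example");
    }
}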



[flink] 06/06: [hotfix][docs] Update local setup tutorials to fit new log messages

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 7f8933f516ec507b03549c0108559a6070dad030
Author: tison 
AuthorDate: Tue Aug 20 09:59:06 2019 +0800

[hotfix][docs] Update local setup tutorials to fit new log messages

This closes #9488

[ci skip]
---
 docs/getting-started/tutorials/local_setup.md| 2 +-
 docs/getting-started/tutorials/local_setup.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/getting-started/tutorials/local_setup.md 
b/docs/getting-started/tutorials/local_setup.md
index 799d390..a3331f2 100644
--- a/docs/getting-started/tutorials/local_setup.md
+++ b/docs/getting-started/tutorials/local_setup.md
@@ -111,7 +111,7 @@ INFO ... - ResourceManager 
akka.tcp://flink@localhost:6123/user/resourcemanager
 INFO ... - Starting the SlotManager.
 INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was 
granted leadership ...
 INFO ... - Recovering all persisted jobs.
-INFO ... - Registering TaskManager ... under ... at the SlotManager.
+INFO ... - Registering TaskManager ... at ResourceManager
 {% endhighlight %}
 
 ## Read the Code
diff --git a/docs/getting-started/tutorials/local_setup.zh.md 
b/docs/getting-started/tutorials/local_setup.zh.md
index e8ef56c..dba36df 100644
--- a/docs/getting-started/tutorials/local_setup.zh.md
+++ b/docs/getting-started/tutorials/local_setup.zh.md
@@ -111,7 +111,7 @@ INFO ... - ResourceManager 
akka.tcp://flink@localhost:6123/user/resourcemanager
 INFO ... - Starting the SlotManager.
 INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was 
granted leadership ...
 INFO ... - Recovering all persisted jobs.
-INFO ... - Registering TaskManager ... under ... at the SlotManager.
+INFO ... - Registering TaskManager ... at ResourceManager
 {% endhighlight %}
 
 ## Read the Code



[flink] branch release-1.9 updated (2391a88 -> 7f8933f)

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a change to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2391a88  [FLINK-13362][docs] Add DDL documentation for Kafka, 
ElasticSearch, FileSystem and formats
 new ccb36ee  [hotfix][docs] Correct method name in 
KeyedStateReaderFunction example
 new 680c87c  [hotfix][JavaDocs] Correct comment in KeyedStream
 new ad6c46e  [hotfix][table api] Fix logger arguments in CatalogManager
 new 9ba0a89  [FLINK-13728][docs] Fix wrong closing tag order in sidenav
 new 4fc755e  [hotfix][docs] Add documentation regarding path style access 
for s3
 new 7f8933f  [hotfix][docs] Update local setup tutorials to fit new log 
messages

The 6 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/_includes/sidenav.html   | 2 +-
 docs/dev/libs/state_processor_api.md  | 2 +-
 docs/dev/libs/state_processor_api.zh.md   | 2 +-
 docs/getting-started/tutorials/local_setup.md | 2 +-
 docs/getting-started/tutorials/local_setup.zh.md  | 2 +-
 docs/ops/filesystems/s3.md| 8 
 docs/ops/filesystems/s3.zh.md | 8 
 .../org/apache/flink/streaming/api/datastream/KeyedStream.java| 2 +-
 .../main/java/org/apache/flink/table/catalog/CatalogManager.java  | 4 ++--
 9 files changed, 24 insertions(+), 8 deletions(-)



[flink] 04/06: [FLINK-13728][docs] Fix wrong closing tag order in sidenav

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 9ba0a8906e24fa864a89df65edbc95c25ec3f6dd
Author: Nico Kruber 
AuthorDate: Wed Aug 14 15:59:50 2019 +0200

[FLINK-13728][docs] Fix wrong closing tag order in sidenav

This closes #9439

[ci skip]
---
 docs/_includes/sidenav.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/_includes/sidenav.html b/docs/_includes/sidenav.html
index 73edab1..cc787d9 100644
--- a/docs/_includes/sidenav.html
+++ b/docs/_includes/sidenav.html
@@ -88,7 +88,7 @@ level is determined by 'nav-pos'.
 {% else %}
   {% assign elementsPos = elementsPosStack | last %}
   {% assign pos = posStack | last %}
-
+
   {% assign elementsPosStack = elementsPosStack | pop %}
   {% assign posStack = posStack | pop %}
 {% endif %}



[flink] 02/03: [FLINK-13791][docs] Speed up sidenav by using group_by

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit c64e167b8003b7379545c1b83e54d9491164b7a8
Author: Nico Kruber 
AuthorDate: Mon Aug 19 23:48:57 2019 +0200

[FLINK-13791][docs] Speed up sidenav by using group_by

_includes/sidenav.html parses through pages_by_language over and over again
trying to find children when building the (recursive) side navigation. By 
doing
this once with a group_by, we can gain considerable savings in building the
docs via `./build_docs.sh` without any change to the generated HTML pages:

This closes #9487
---
 docs/_includes/sidenav.html | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/docs/_includes/sidenav.html b/docs/_includes/sidenav.html
index 1073d99..70a24a7 100644
--- a/docs/_includes/sidenav.html
+++ b/docs/_includes/sidenav.html
@@ -69,7 +69,9 @@ level is determined by 'nav-pos'.
 {%- assign posStack = site.array -%}
 
 {%- assign elements = site.array -%}
-{%- assign children = (site.pages_by_language[page.language] | where: 
"nav-parent_id" , "root" | sort: "nav-pos") -%}
+{%- assign all_pages_by_nav_parent = (site.pages_by_language[page.language] | 
where_exp: "item", "item.nav-parent_id != nil" | group_by: "nav-parent_id") -%}
+{%- assign children = (all_pages_by_nav_parent | where: "name" , "root") -%}
+{%- assign children = (children[0].items | sort: "nav-pos") -%}
 {%- if children.size > 0 -%}
   {%- assign elements = elements | push: children -%}
 {%- endif -%}
@@ -111,8 +113,9 @@ level is determined by 'nav-pos'.
 
 {%- assign pos = pos | plus: 1 -%}
 {%- if this.nav-id -%}
-  {%- assign children = (site.pages_by_language[page.language] | where: 
"nav-parent_id" , this.nav-id | sort: "nav-pos") -%}
+  {%- assign children = (all_pages_by_nav_parent | where: "name" , 
this.nav-id) -%}
   {%- if children.size > 0 -%}
+{%- assign children = (children[0].items | sort: "nav-pos") -%}
 {%- capture collapse_target -%}"#collapse-{{ i }}" 
data-toggle="collapse"{%- if active -%} class="active"{%- endif -%}{%- 
endcapture -%}
 {%- capture expand -%}{%- unless active -%} {%- endunless 
-%}{%- endcapture %}
 {{ title }}{{ expand }}
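
The change above computes the parent-to-children grouping once instead of
re-filtering all pages for every node. Not the template itself, but the same
idea sketched in Java for illustration (hypothetical Page type):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class Page {
    final String navId;
    final String navParentId;

    Page(String navId, String navParentId) {
        this.navId = navId;
        this.navParentId = navParentId;
    }
}

class SideNavIndex {
    // Rough equivalent of:
    //   pages | where_exp: "item.nav-parent_id != nil" | group_by: "nav-parent_id"
    static Map<String, List<Page>> byParent(List<Page> pages) {
        return pages.stream()
                .filter(p -> p.navParentId != null)
                .collect(Collectors.groupingBy(p -> p.navParentId));
    }
    // Child lookups become a single map access instead of a scan over all pages.
}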



[flink] 01/03: [hotfix][docs] Add documentation regarding path style access for s3

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a61e383310f59436eeed909e3f8c4621fb5772c3
Author: Achyuth Samudrala 
AuthorDate: Mon Aug 19 08:29:13 2019 +0200

[hotfix][docs] Add documentation regarding path style access for s3

This closes #9479
---
 docs/ops/filesystems/s3.md| 8 
 docs/ops/filesystems/s3.zh.md | 8 
 2 files changed, 16 insertions(+)

diff --git a/docs/ops/filesystems/s3.md b/docs/ops/filesystems/s3.md
index f601b46..9794723 100644
--- a/docs/ops/filesystems/s3.md
+++ b/docs/ops/filesystems/s3.md
@@ -109,6 +109,14 @@ To do so, configure your endpoint in `flink-conf.yaml`.
 s3.endpoint: your-endpoint-hostname
 {% endhighlight %}
 
+## Configure Path Style Access
+
+Some S3-compliant object stores might not have virtual host style 
addressing enabled by default. In such cases, you will have to set the 
property that enables path style access in `flink-conf.yaml`.
+
+{% highlight yaml %}
+s3.path.style.access: true
+{% endhighlight %}
+
 ## Entropy injection for S3 file systems
 
 The bundled S3 file systems (`flink-s3-fs-presto` and `flink-s3-fs-hadoop`) 
support entropy injection. Entropy injection is
diff --git a/docs/ops/filesystems/s3.zh.md b/docs/ops/filesystems/s3.zh.md
index f601b46..9794723 100644
--- a/docs/ops/filesystems/s3.zh.md
+++ b/docs/ops/filesystems/s3.zh.md
@@ -109,6 +109,14 @@ To do so, configure your endpoint in `flink-conf.yaml`.
 s3.endpoint: your-endpoint-hostname
 {% endhighlight %}
 
+## Configure Path Style Access
+
+Some S3-compliant object stores might not have virtual host style 
addressing enabled by default. In such cases, you will have to set the 
property that enables path style access in `flink-conf.yaml`.
+
+{% highlight yaml %}
+s3.path.style.access: true
+{% endhighlight %}
+
 ## Entropy injection for S3 file systems
 
 The bundled S3 file systems (`flink-s3-fs-presto` and `flink-s3-fs-hadoop`) 
support entropy injection. Entropy injection is



[flink] 03/03: [hotfix][docs] Update local setup tutorials to fit new log messages

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8498124d98386d9c58af40b6eae2878082d3dd09
Author: tison 
AuthorDate: Tue Aug 20 09:59:06 2019 +0800

[hotfix][docs] Update local setup tutorials to fit new log messages

This closes #9488

[ci skip]
---
 docs/getting-started/tutorials/local_setup.md| 2 +-
 docs/getting-started/tutorials/local_setup.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/getting-started/tutorials/local_setup.md 
b/docs/getting-started/tutorials/local_setup.md
index ea0a89e..6d1d09b 100644
--- a/docs/getting-started/tutorials/local_setup.md
+++ b/docs/getting-started/tutorials/local_setup.md
@@ -111,7 +111,7 @@ INFO ... - ResourceManager 
akka.tcp://flink@localhost:6123/user/resourcemanager
 INFO ... - Starting the SlotManager.
 INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was 
granted leadership ...
 INFO ... - Recovering all persisted jobs.
-INFO ... - Registering TaskManager ... under ... at the SlotManager.
+INFO ... - Registering TaskManager ... at ResourceManager
 {% endhighlight %}
 
 ## Read the Code
diff --git a/docs/getting-started/tutorials/local_setup.zh.md 
b/docs/getting-started/tutorials/local_setup.zh.md
index 9e98cae..d6566e4 100644
--- a/docs/getting-started/tutorials/local_setup.zh.md
+++ b/docs/getting-started/tutorials/local_setup.zh.md
@@ -111,7 +111,7 @@ INFO ... - ResourceManager 
akka.tcp://flink@localhost:6123/user/resourcemanager
 INFO ... - Starting the SlotManager.
 INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was 
granted leadership ...
 INFO ... - Recovering all persisted jobs.
-INFO ... - Registering TaskManager ... under ... at the SlotManager.
+INFO ... - Registering TaskManager ... at ResourceManager
 {% endhighlight %}
 
 ## Read the Code



[flink] branch master updated (b820606 -> 8498124)

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b820606  [FLINK-13362][docs] Add DDL documentation for Kafka, 
ElasticSearch, FileSystem and formats
 new a61e383  [hotfix][docs] Add documentation regarding path style access 
for s3
 new c64e167  [FLINK-13791][docs] Speed up sidenav by using group_by
 new 8498124  [hotfix][docs] Update local setup tutorials to fit new log 
messages

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/_includes/sidenav.html  | 7 +--
 docs/getting-started/tutorials/local_setup.md| 2 +-
 docs/getting-started/tutorials/local_setup.zh.md | 2 +-
 docs/ops/filesystems/s3.md   | 8 
 docs/ops/filesystems/s3.zh.md| 8 
 5 files changed, 23 insertions(+), 4 deletions(-)



[flink] 02/02: [FLINK-13362][docs] Add DDL documentation for Kafka, ElasticSearch, FileSystem and formats

2019-08-26 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 2391a8830a9944adbc5aadbb24b43ad19340d90c
Author: Jark Wu 
AuthorDate: Mon Aug 26 13:33:19 2019 +0800

[FLINK-13362][docs] Add DDL documentation for Kafka, ElasticSearch, 
FileSystem and formats
---
 docs/dev/table/connect.md| 270 +
 docs/dev/table/connect.zh.md | 283 +--
 2 files changed, 546 insertions(+), 7 deletions(-)

diff --git a/docs/dev/table/connect.md b/docs/dev/table/connect.md
index 14d2a3b..5378ae9 100644
--- a/docs/dev/table/connect.md
+++ b/docs/dev/table/connect.md
@@ -122,6 +122,12 @@ format: ...
 schema: ...
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+tableEnvironment.sqlUpdate("CREATE TABLE MyTable (...) WITH (...)")
+{% endhighlight %}
+
 
 
 The table's type (`source`, `sink`, or `both`) determines how a table is 
registered. In case of table type `both`, both a table source and table sink 
are registered under the same name. Logically, this means that we can both read 
and write to such a table similarly to a table in a regular DBMS.
@@ -276,6 +282,39 @@ tables:
 type: VARCHAR
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  `user` BIGINT,
+  message VARCHAR,
+  ts VARCHAR
+) WITH (
+  -- declare the external system to connect to
+  'connector.type' = 'kafka',
+  'connector.version' = '0.10',
+  'connector.topic' = 'topic_name',
+  'connector.startup-mode' = 'earliest-offset',
+  'connector.properties.0.key' = 'zookeeper.connect',
+  'connector.properties.0.value' = 'localhost:2181',
+  'connector.properties.1.key' = 'bootstrap.servers',
+  'connector.properties.1.value' = 'localhost:9092',
+  'update-mode' = 'append',
+  -- declare a format for this system
+  'format.type' = 'avro',
+  'format.avro-schema' = '{
+"namespace": "org.myorganization",
+"type": "record",
+"name": "UserMessage",
+"fields": [
+{"name": "ts", "type": "string"},
+{"name": "user", "type": "long"},
+{"name": "message", "type": ["string", "null"]}
+]
+ }'
+)
+{% endhighlight %}
+
 
 
 In both ways the desired connection properties are converted into normalized, 
string-based key-value pairs. So-called [table 
factories](sourceSinks.html#define-a-tablefactory) create configured table 
sources, table sinks, and corresponding formats from the key-value pairs. All 
table factories that can be found via Java's [Service Provider Interfaces 
(SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken 
into account when searching for exactly-one matching table factory.
@@ -603,6 +642,16 @@ tables:
 update-mode: append# otherwise: "retract" or "upsert"
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+CREATE TABLE MyTable (
+ ...
+) WITH (
+ 'update-mode' = 'append'  -- otherwise: 'retract' or 'upsert'
+)
+{% endhighlight %}
+
 
 
 See also the [general streaming concepts 
documentation](streaming/dynamic_tables.html#continuous-queries) for more 
information.
@@ -652,6 +701,17 @@ connector:
   path: "file:///path/to/whatever"# required: path to a file or directory
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  ...
+) WITH (
+  'connector.type' = 'filesystem',   -- required: specify the 
connector type
+  'connector.path' = 'file:///path/to/whatever'  -- required: path to a file 
or directory
+)
+{% endhighlight %}
+
 
 
 The file system connector itself is included in Flink and does not require an 
additional dependency. A corresponding format needs to be specified for reading 
and writing rows from and to a file system.
@@ -753,6 +813,49 @@ connector:
   sink-partitioner-class: org.mycompany.MyPartitioner  # optional: used in 
case of sink partitioner custom
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  ...
+) WITH (
+  'connector.type' = 'kafka',   
+
+  'connector.version' = '0.11', -- required: valid connector versions are
+-- "0.8", "0.9", "0.10", "0.11", and 
"universal"
+
+  'connector.topic' = 'topic_name', -- required: topic name from which the 
table is read
+
+  'update-mode' = 'append', -- required: update mode when used as 
table sink, 
-- only append mode is supported now.
+
+  'connector.properties.0.key' = 'zookeeper.connect', -- optional: connector 
specific properties
+  'connector.properties.0.value' = 'localhost:2181',
+  'connector.properties.1.key' = 'bootstrap.servers',
+  'connector.properties.1.value' = 'localhost:9092',
+  'connector.properties.2.key' = 

[flink] branch release-1.9 updated (f2ab7df -> 2391a88)

2019-08-26 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git.


from f2ab7df  [hotfix] Fix the base url of release 1.9 docs and mark 1.9 as 
stable
 new cd3db4c  [FLINK-13359][docs] Add documentation for DDL introduction
 new 2391a88  [FLINK-13362][docs] Add DDL documentation for Kafka, 
ElasticSearch, FileSystem and formats

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/dev/table/connect.md| 270 +
 docs/dev/table/connect.zh.md | 283 +--
 docs/dev/table/sql.md| 163 +
 docs/dev/table/sql.zh.md | 232 ++-
 4 files changed, 889 insertions(+), 59 deletions(-)



[flink] 01/02: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-26 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit cd3db4cbf5d5c0874465becbef862b84a86b6f5b
Author: yuzhao.cyz 
AuthorDate: Tue Aug 6 12:53:35 2019 +0800

[FLINK-13359][docs] Add documentation for DDL introduction
---
 docs/dev/table/sql.md| 163 -
 docs/dev/table/sql.zh.md | 232 +--
 2 files changed, 343 insertions(+), 52 deletions(-)

diff --git a/docs/dev/table/sql.md b/docs/dev/table/sql.md
index e607716..79f0b41 100644
--- a/docs/dev/table/sql.md
+++ b/docs/dev/table/sql.md
@@ -22,19 +22,20 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+This is a complete list of Data Definition Language (DDL) and Data 
Manipulation Language (DML) constructs supported in Flink.
+* This will be replaced by the TOC
+{:toc} 
+
+## Query
 SQL queries are specified with the `sqlQuery()` method of the 
`TableEnvironment`. The method returns the result of the SQL query as a 
`Table`. A `Table` can be used in [subsequent SQL and Table API 
queries](common.html#mixing-table-api-and-sql), be [converted into a DataSet or 
DataStream](common.html#integration-with-datastream-and-dataset-api), or 
[written to a TableSink](common.html#emit-a-table)). SQL and Table API queries 
can be seamlessly mixed and are holistically optimized and tra [...]
 
-In order to access a table in a SQL query, it must be [registered in the 
TableEnvironment](common.html#register-tables-in-the-catalog). A table can be 
registered from a [TableSource](common.html#register-a-tablesource), 
[Table](common.html#register-a-table), [DataStream, or 
DataSet](common.html#register-a-datastream-or-dataset-as-table). Alternatively, 
users can also [register external catalogs in a 
TableEnvironment](common.html#register-an-external-catalog) to specify the 
location of th [...]
+In order to access a table in a SQL query, it must be [registered in the 
TableEnvironment](common.html#register-tables-in-the-catalog). A table can be 
registered from a [TableSource](common.html#register-a-tablesource), 
[Table](common.html#register-a-table), [CREATE TABLE statement](#create-table), 
[DataStream, or 
DataSet](common.html#register-a-datastream-or-dataset-as-table). Alternatively, 
users can also [register external catalogs in a 
TableEnvironment](common.html#register-an-extern [...]
 
 For convenience `Table.toString()` automatically registers the table under a 
unique name in its `TableEnvironment` and returns the name. Hence, `Table` 
objects can be directly inlined into SQL queries (by string concatenation) as 
shown in the examples below.
 
 **Note:** Flink's SQL support is not yet feature complete. Queries that 
include unsupported SQL features cause a `TableException`. The supported 
features of SQL on batch and streaming tables are listed in the following 
sections.
 
-* This will be replaced by the TOC
-{:toc}
-
-Specifying a Query
---
+### Specifying a Query
 
 The following examples show how to specify a SQL queries on registered and 
inlined tables.
 
@@ -130,8 +131,7 @@ table_env \
 
 {% top %}
 
-Supported Syntax
-
+### Supported Syntax
 
 Flink parses SQL using [Apache 
Calcite](https://calcite.apache.org/docs/reference.html), which supports 
standard ANSI SQL. DDL statements are not supported by Flink.
 
@@ -276,10 +276,9 @@ String literals must be enclosed in single quotes (e.g., 
`SELECT 'Hello World'`)
 
 {% top %}
 
-Operations
-
+### Operations
 
-### Show and Use
+#### Show and Use
 
 
 
@@ -330,7 +329,7 @@ USE mydatabase;
 
 
 
-### Scan, Projection, and Filter
+#### Scan, Projection, and Filter
 
 
 
@@ -385,7 +384,7 @@ SELECT PRETTY_PRINT(user) FROM Orders
 
 {% top %}
 
-### Aggregations
+#### Aggregations
 
 
 
@@ -509,7 +508,7 @@ GROUP BY users
 
 {% top %}
 
-### Joins
+#### Joins
 
 
 
@@ -655,7 +654,7 @@ WHERE
 
 {% top %}
 
-### Set Operations
+#### Set Operations
 
 
 
@@ -765,7 +764,7 @@ WHERE product EXISTS (
 
 {% top %}
 
-### OrderBy & Limit
+#### OrderBy & Limit
 
 
 
@@ -813,7 +812,7 @@ LIMIT 3
 
 {% top %}
 
-### Insert
+#### Insert
 
 
 
@@ -846,7 +845,7 @@ FROM Orders
 
 {% top %}
 
-### Group Windows
+#### Group Windows
 
 Group windows are defined in the `GROUP BY` clause of a SQL query. Just like 
queries with regular `GROUP BY` clauses, queries with a `GROUP BY` clause that 
includes a group window function compute a single result row per group. The 
following group windows functions are supported for SQL on batch and streaming 
tables.
 
@@ -874,13 +873,13 @@ Group windows are defined in the `GROUP BY` clause of a 
SQL query. Just like que
   
 
 
-#### Time Attributes
+##### Time Attributes
 
 For SQL queries on streaming tables, the `time_attr` argument of the group 
window function must refer to a valid time attribute that 
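
A minimal sketch of how the documented DDL is used from the Table API
(Flink 1.9; table name, fields, and connector properties are illustrative
placeholders, and the exact property keys depend on the connector):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class DdlExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Register a source table via DDL; the WITH properties are hypothetical.
        tEnv.sqlUpdate(
                "CREATE TABLE Orders (user_id BIGINT, amount INT) WITH (" +
                "  'connector.type' = 'filesystem'," +
                "  'connector.path' = 'file:///tmp/orders'," +
                "  'format.type' = 'csv')");

        // A table registered through DDL is queryable like any other table.
        Table result = tEnv.sqlQuery(
                "SELECT user_id, SUM(amount) FROM Orders GROUP BY user_id");
        result.printSchema();
    }
}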

[flink] 02/02: [FLINK-13362][docs] Add DDL documentation for Kafka, ElasticSearch, FileSystem and formats

2019-08-26 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b820606916421f40ffa700c9a403d0782f618320
Author: Jark Wu 
AuthorDate: Mon Aug 26 13:33:19 2019 +0800

[FLINK-13362][docs] Add DDL documentation for Kafka, ElasticSearch, 
FileSystem and formats
---
 docs/dev/table/connect.md| 270 +
 docs/dev/table/connect.zh.md | 283 +--
 2 files changed, 546 insertions(+), 7 deletions(-)

diff --git a/docs/dev/table/connect.md b/docs/dev/table/connect.md
index 14d2a3b..5378ae9 100644
--- a/docs/dev/table/connect.md
+++ b/docs/dev/table/connect.md
@@ -122,6 +122,12 @@ format: ...
 schema: ...
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+tableEnvironment.sqlUpdate("CREATE TABLE MyTable (...) WITH (...)")
+{% endhighlight %}
+
 
 
 The table's type (`source`, `sink`, or `both`) determines how a table is 
registered. In case of table type `both`, both a table source and table sink 
are registered under the same name. Logically, this means that we can both read 
and write to such a table similarly to a table in a regular DBMS.
@@ -276,6 +282,39 @@ tables:
 type: VARCHAR
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  `user` BIGINT,
+  message VARCHAR,
+  ts VARCHAR
+) WITH (
+  -- declare the external system to connect to
+  'connector.type' = 'kafka',
+  'connector.version' = '0.10',
+  'connector.topic' = 'topic_name',
+  'connector.startup-mode' = 'earliest-offset',
+  'connector.properties.0.key' = 'zookeeper.connect',
+  'connector.properties.0.value' = 'localhost:2181',
+  'connector.properties.1.key' = 'bootstrap.servers',
+  'connector.properties.1.value' = 'localhost:9092',
+  'update-mode' = 'append',
+  -- declare a format for this system
+  'format.type' = 'avro',
+  'format.avro-schema' = '{
+"namespace": "org.myorganization",
+"type": "record",
+"name": "UserMessage",
+"fields": [
+{"name": "ts", "type": "string"},
+{"name": "user", "type": "long"},
+{"name": "message", "type": ["string", "null"]}
+]
+ }'
+)
+{% endhighlight %}
+
 
 
 In both ways the desired connection properties are converted into normalized, 
string-based key-value pairs. So-called [table 
factories](sourceSinks.html#define-a-tablefactory) create configured table 
sources, table sinks, and corresponding formats from the key-value pairs. All 
table factories that can be found via Java's [Service Provider Interfaces 
(SPI)](https://docs.oracle.com/javase/tutorial/sound/SPI-intro.html) are taken 
into account when searching for exactly-one matching table factory.
@@ -603,6 +642,16 @@ tables:
 update-mode: append# otherwise: "retract" or "upsert"
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+CREATE TABLE MyTable (
+ ...
+) WITH (
+ 'update-mode' = 'append'  -- otherwise: 'retract' or 'upsert'
+)
+{% endhighlight %}
+
 
 
 See also the [general streaming concepts 
documentation](streaming/dynamic_tables.html#continuous-queries) for more 
information.
@@ -652,6 +701,17 @@ connector:
   path: "file:///path/to/whatever"# required: path to a file or directory
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  ...
+) WITH (
+  'connector.type' = 'filesystem',   -- required: specify the 
connector type
+  'connector.path' = 'file:///path/to/whatever'  -- required: path to a file 
or directory
+)
+{% endhighlight %}
+
 
 
 The file system connector itself is included in Flink and does not require an 
additional dependency. A corresponding format needs to be specified for reading 
and writing rows from and to a file system.
@@ -753,6 +813,49 @@ connector:
   sink-partitioner-class: org.mycompany.MyPartitioner  # optional: used in 
case of sink partitioner custom
 {% endhighlight %}
 
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  ...
+) WITH (
+  'connector.type' = 'kafka',   
+
+  'connector.version' = '0.11', -- required: valid connector versions are
+-- "0.8", "0.9", "0.10", "0.11", and 
"universal"
+
+  'connector.topic' = 'topic_name', -- required: topic name from which the 
table is read
+
+  'update-mode' = 'append', -- required: update mode when used as 
table sink, 
-- only append mode is supported now.
+
+  'connector.properties.0.key' = 'zookeeper.connect', -- optional: connector 
specific properties
+  'connector.properties.0.value' = 'localhost:2181',
+  'connector.properties.1.key' = 'bootstrap.servers',
+  'connector.properties.1.value' = 'localhost:9092',
+  'connector.properties.2.key' = 

[flink] 01/02: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-26 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 00b6e8bd3ae3943c24e0538debcae82df35dac4d
Author: yuzhao.cyz 
AuthorDate: Tue Aug 6 12:53:35 2019 +0800

[FLINK-13359][docs] Add documentation for DDL introduction

This closes #9366
---
 docs/dev/table/sql.md| 163 ---
 docs/dev/table/sql.zh.md | 177 ---
 2 files changed, 290 insertions(+), 50 deletions(-)

diff --git a/docs/dev/table/sql.md b/docs/dev/table/sql.md
index e607716..79f0b41 100644
--- a/docs/dev/table/sql.md
+++ b/docs/dev/table/sql.md
@@ -22,19 +22,20 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+This is a complete list of Data Definition Language (DDL) and Data 
Manipulation Language (DML) constructs supported in Flink.
+* This will be replaced by the TOC
+{:toc} 
+
+## Query
 SQL queries are specified with the `sqlQuery()` method of the 
`TableEnvironment`. The method returns the result of the SQL query as a 
`Table`. A `Table` can be used in [subsequent SQL and Table API 
queries](common.html#mixing-table-api-and-sql), be [converted into a DataSet or 
DataStream](common.html#integration-with-datastream-and-dataset-api), or 
[written to a TableSink](common.html#emit-a-table)). SQL and Table API queries 
can be seamlessly mixed and are holistically optimized and tra [...]
 
-In order to access a table in a SQL query, it must be [registered in the 
TableEnvironment](common.html#register-tables-in-the-catalog). A table can be 
registered from a [TableSource](common.html#register-a-tablesource), 
[Table](common.html#register-a-table), [DataStream, or 
DataSet](common.html#register-a-datastream-or-dataset-as-table). Alternatively, 
users can also [register external catalogs in a 
TableEnvironment](common.html#register-an-external-catalog) to specify the 
location of th [...]
+In order to access a table in a SQL query, it must be [registered in the 
TableEnvironment](common.html#register-tables-in-the-catalog). A table can be 
registered from a [TableSource](common.html#register-a-tablesource), 
[Table](common.html#register-a-table), [CREATE TABLE statement](#create-table), 
[DataStream, or 
DataSet](common.html#register-a-datastream-or-dataset-as-table). Alternatively, 
users can also [register external catalogs in a 
TableEnvironment](common.html#register-an-extern [...]
 
 For convenience `Table.toString()` automatically registers the table under a 
unique name in its `TableEnvironment` and returns the name. Hence, `Table` 
objects can be directly inlined into SQL queries (by string concatenation) as 
shown in the examples below.
 
 **Note:** Flink's SQL support is not yet feature complete. Queries that 
include unsupported SQL features cause a `TableException`. The supported 
features of SQL on batch and streaming tables are listed in the following 
sections.
 
-* This will be replaced by the TOC
-{:toc}
-
-Specifying a Query
---
+### Specifying a Query
 
 The following examples show how to specify a SQL queries on registered and 
inlined tables.
 
@@ -130,8 +131,7 @@ table_env \
 
 {% top %}
 
-Supported Syntax
-
+### Supported Syntax
 
 Flink parses SQL using [Apache 
Calcite](https://calcite.apache.org/docs/reference.html), which supports 
standard ANSI SQL. DDL statements are not supported by Flink.
 
@@ -276,10 +276,9 @@ String literals must be enclosed in single quotes (e.g., 
`SELECT 'Hello World'`)
 
 {% top %}
 
-Operations
-
+### Operations
 
-### Show and Use
+#### Show and Use
 
 
 
@@ -330,7 +329,7 @@ USE mydatabase;
 
 
 
-### Scan, Projection, and Filter
+#### Scan, Projection, and Filter
 
 
 
@@ -385,7 +384,7 @@ SELECT PRETTY_PRINT(user) FROM Orders
 
 {% top %}
 
-### Aggregations
+#### Aggregations
 
 
 
@@ -509,7 +508,7 @@ GROUP BY users
 
 {% top %}
 
-### Joins
+#### Joins
 
 
 
@@ -655,7 +654,7 @@ WHERE
 
 {% top %}
 
-### Set Operations
+#### Set Operations
 
 
 
@@ -765,7 +764,7 @@ WHERE product EXISTS (
 
 {% top %}
 
-### OrderBy & Limit
+#### OrderBy & Limit
 
 
 
@@ -813,7 +812,7 @@ LIMIT 3
 
 {% top %}
 
-### Insert
+#### Insert
 
 
 
@@ -846,7 +845,7 @@ FROM Orders
 
 {% top %}
 
-### Group Windows
+#### Group Windows
 
 Group windows are defined in the `GROUP BY` clause of a SQL query. Just like 
queries with regular `GROUP BY` clauses, queries with a `GROUP BY` clause that 
includes a group window function compute a single result row per group. The 
following group windows functions are supported for SQL on batch and streaming 
tables.
 
@@ -874,13 +873,13 @@ Group windows are defined in the `GROUP BY` clause of a 
SQL query. Just like que
   
 
 
-#### Time Attributes
+##### Time Attributes
 
 For SQL queries on streaming tables, the `time_attr` argument of the group 
window function must refer to 

[flink] branch master updated (ac1b8db -> b820606)

2019-08-26 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from ac1b8db  [FLINK-13726][docs] Build docs with jekyll 4.0.0.pre.beta1
 new 00b6e8b  [FLINK-13359][docs] Add documentation for DDL introduction
 new b820606  [FLINK-13362][docs] Add DDL documentation for Kafka, 
ElasticSearch, FileSystem and formats

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/dev/table/connect.md| 270 +
 docs/dev/table/connect.zh.md | 283 +--
 docs/dev/table/sql.md| 163 +
 docs/dev/table/sql.zh.md | 177 +++
 4 files changed, 836 insertions(+), 57 deletions(-)



[flink] 07/10: [FLINK-13729][docs] Update website generation dependencies

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ef74a61f54f190926a8388f46db7919e0e94420b
Author: Nico Kruber 
AuthorDate: Wed Aug 14 16:57:14 2019 +0200

[FLINK-13729][docs] Update website generation dependencies

This seems to come with much nicer code highlighting.

This closes #9442
---
 docs/Gemfile  |  8 +++
 docs/Gemfile.lock | 62 ---
 2 files changed, 36 insertions(+), 34 deletions(-)

diff --git a/docs/Gemfile b/docs/Gemfile
index b519eb9..eb307fd 100644
--- a/docs/Gemfile
+++ b/docs/Gemfile
@@ -21,10 +21,10 @@ source 'https://rubygems.org'
 ruby '>= 2.1.0'
 
 gem 'jekyll', '3.7.2'
-gem 'addressable', '2.4.0'
-gem 'octokit', '~> 4.3.0'
-gem 'therubyracer', '0.12.2'
-gem 'json', '2.0.4'
+gem 'addressable', '2.6.0'
+gem 'octokit', '4.14.0'
+gem 'therubyracer', '0.12.3'
+gem 'json', '2.2.0'
 gem 'jekyll-multiple-languages', '2.0.3'
 gem 'jekyll-paginate', '1.1.0'
 gem 'liquid-c', '4.0.0' # speed-up site generation
diff --git a/docs/Gemfile.lock b/docs/Gemfile.lock
index 09b02e8..68e66d3 100644
--- a/docs/Gemfile.lock
+++ b/docs/Gemfile.lock
@@ -1,22 +1,23 @@
 GEM
   remote: https://rubygems.org/
   specs:
-addressable (2.4.0)
+addressable (2.6.0)
+  public_suffix (>= 2.0.2, < 4.0)
 colorator (1.1.0)
-concurrent-ruby (1.0.5)
+concurrent-ruby (1.1.5)
 em-websocket (0.5.1)
   eventmachine (>= 0.12.9)
   http_parser.rb (~> 0.6.0)
-eventmachine (1.2.5)
-faraday (0.9.2)
+eventmachine (1.2.7)
+faraday (0.15.4)
   multipart-post (>= 1.2, < 3)
-ffi (1.9.18)
+ffi (1.11.1)
 forwardable-extended (2.6.0)
 hawkins (2.0.5)
   em-websocket (~> 0.5)
   jekyll (~> 3.1)
 http_parser.rb (0.6.0)
-i18n (0.9.3)
+i18n (0.9.5)
   concurrent-ruby (~> 1.0)
 jekyll (3.7.2)
   addressable (~> 2.4)
@@ -33,14 +34,14 @@ GEM
   safe_yaml (~> 1.0)
 jekyll-multiple-languages (2.0.3)
 jekyll-paginate (1.1.0)
-jekyll-sass-converter (1.5.1)
+jekyll-sass-converter (1.5.2)
   sass (~> 3.4)
-jekyll-watch (2.0.0)
+jekyll-watch (2.2.1)
   listen (~> 3.0)
-json (2.0.4)
-kramdown (1.16.2)
+json (2.2.0)
+kramdown (1.17.0)
 libv8 (3.16.14.19)
-liquid (4.0.0)
+liquid (4.0.3)
 liquid-c (4.0.0)
   liquid (>= 3.0.0)
 listen (3.1.5)
@@ -48,43 +49,44 @@ GEM
   rb-inotify (~> 0.9, >= 0.9.7)
   ruby_dep (~> 1.2)
 mercenary (0.3.6)
-multipart-post (2.0.0)
-octokit (4.3.0)
-  sawyer (~> 0.7.0, >= 0.5.3)
-pathutil (0.16.1)
+multipart-post (2.1.1)
+octokit (4.14.0)
+  sawyer (~> 0.8.0, >= 0.5.3)
+pathutil (0.16.2)
   forwardable-extended (~> 2.6)
-rb-fsevent (0.10.2)
-rb-inotify (0.9.10)
-  ffi (>= 0.5.0, < 2)
+public_suffix (3.1.1)
+rb-fsevent (0.10.3)
+rb-inotify (0.10.0)
+  ffi (~> 1.0)
 ref (2.0.0)
-rouge (3.1.1)
+rouge (3.8.0)
 ruby_dep (1.5.0)
-safe_yaml (1.0.4)
-sass (3.5.5)
+safe_yaml (1.0.5)
+sass (3.7.4)
   sass-listen (~> 4.0.0)
 sass-listen (4.0.0)
   rb-fsevent (~> 0.9, >= 0.9.4)
   rb-inotify (~> 0.9, >= 0.9.7)
-sawyer (0.7.0)
-  addressable (>= 2.3.5, < 2.5)
-  faraday (~> 0.8, < 0.10)
-therubyracer (0.12.2)
-  libv8 (~> 3.16.14.0)
+sawyer (0.8.2)
+  addressable (>= 2.3.5)
+  faraday (> 0.8, < 2.0)
+therubyracer (0.12.3)
+  libv8 (~> 3.16.14.15)
   ref
 
 PLATFORMS
   ruby
 
 DEPENDENCIES
-  addressable (= 2.4.0)
+  addressable (= 2.6.0)
   hawkins
   jekyll (= 3.7.2)
   jekyll-multiple-languages (= 2.0.3)
   jekyll-paginate (= 1.1.0)
-  json (= 2.0.4)
+  json (= 2.2.0)
   liquid-c (= 4.0.0)
-  octokit (~> 4.3.0)
-  therubyracer (= 0.12.2)
+  octokit (= 4.14.0)
+  therubyracer (= 0.12.3)
 
 RUBY VERSION
ruby 2.3.1p112



[flink] 06/10: [FLINK-13723][docs] Use liquid-c for faster doc generation

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ac375e4f94c0d4def84a4016bf9055c6a9f7314c
Author: Nico Kruber 
AuthorDate: Wed Aug 14 15:20:28 2019 +0200

[FLINK-13723][docs] Use liquid-c for faster doc generation

Jekyll requires liquid and only optionally uses liquid-c if available. The
latter uses natively-compiled code and reduces generation time by ~5% for 
me.

This closes #9441
---
 docs/Gemfile  | 1 +
 docs/Gemfile.lock | 5 -
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/docs/Gemfile b/docs/Gemfile
index 1ddc989..b519eb9 100644
--- a/docs/Gemfile
+++ b/docs/Gemfile
@@ -27,6 +27,7 @@ gem 'therubyracer', '0.12.2'
 gem 'json', '2.0.4'
 gem 'jekyll-multiple-languages', '2.0.3'
 gem 'jekyll-paginate', '1.1.0'
+gem 'liquid-c', '4.0.0' # speed-up site generation
 
 group :jekyll_plugins do
   gem 'hawkins'
diff --git a/docs/Gemfile.lock b/docs/Gemfile.lock
index fa5d20b..09b02e8 100644
--- a/docs/Gemfile.lock
+++ b/docs/Gemfile.lock
@@ -41,6 +41,8 @@ GEM
 kramdown (1.16.2)
 libv8 (3.16.14.19)
 liquid (4.0.0)
+liquid-c (4.0.0)
+  liquid (>= 3.0.0)
 listen (3.1.5)
   rb-fsevent (~> 0.9, >= 0.9.4)
   rb-inotify (~> 0.9, >= 0.9.7)
@@ -80,6 +82,7 @@ DEPENDENCIES
   jekyll-multiple-languages (= 2.0.3)
   jekyll-paginate (= 1.1.0)
   json (= 2.0.4)
+  liquid-c (= 4.0.0)
   octokit (~> 4.3.0)
   therubyracer (= 0.12.2)
 
@@ -87,4 +90,4 @@ RUBY VERSION
ruby 2.3.1p112
 
 BUNDLED WITH
-   1.16.1
+   1.17.2



[flink] 05/10: [FLINK-13724][docs] Remove unnecessary whitespace from the generated pages

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e670293f90f70a5b2b72b33b48b08e414ef3fd5d
Author: Nico Kruber 
AuthorDate: Wed Aug 14 16:18:06 2019 +0200

[FLINK-13724][docs] Remove unnecessary whitespace from the generated pages

Starting command tags with "{%-" will drop all whitespace to the left and 
ending
with "-%}" will drop all whitespace to the right (including newlines!).
Code like the following would otherwise create quite some unnecessary
whitespace:

  {% if parent_id %}
{% assign parent_id = current[0].nav-parent_id %}
  {% else %}
{% break %}
  {% endif %}

This closes #9440
---
 docs/_includes/sidenav.html | 182 ++--
 docs/_layouts/base.html |   6 +-
 docs/_layouts/plain.html|  50 ++--
 3 files changed, 119 insertions(+), 119 deletions(-)

diff --git a/docs/_includes/sidenav.html b/docs/_includes/sidenav.html
index cc787d9..1073d99 100644
--- a/docs/_includes/sidenav.html
+++ b/docs/_includes/sidenav.html
@@ -17,36 +17,36 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-{% comment %}
+{%- comment -%}
 ==
 Extract the active nav IDs.
 ==
-{% endcomment %}
-
-{% assign active_nav_ids = site.array %}
-{% assign parent_id = page.nav-parent_id %}
-
-{% for i in (1..10) %}
-  {% if parent_id %}
-{% assign active_nav_ids = active_nav_ids | push: parent_id %}
-{% assign current = (site.pages_by_language[page.language] | where: 
"nav-id" , parent_id | sort: "nav-pos") %}
-{% if current.size > 0 %}
-  {% assign parent_id = current[0].nav-parent_id %}
-{% else %}
-  {% break %}
-{% endif %}
-  {% else %}
-{% break %}
-  {% endif %}
-{% endfor %}
-
-{% if page.language == "en" %}
-  {% capture baseurl_i18n %}{{ site.baseurl }}{% endcapture %}
-{% else if page.language == "zh" %}
-  {% capture baseurl_i18n %}{{ site.baseurl }}/{{ page.language }}{% 
endcapture %}
-{% endif %}
-
-{% comment %}
+{%- endcomment -%}
+
+{%- assign active_nav_ids = site.array -%}
+{%- assign parent_id = page.nav-parent_id -%}
+
+{%- for i in (1..10) -%}
+  {%- if parent_id -%}
+{%- assign active_nav_ids = active_nav_ids | push: parent_id -%}
+{%- assign current = (site.pages_by_language[page.language] | where: 
"nav-id" , parent_id | sort: "nav-pos") -%}
+{%- if current.size > 0 -%}
+  {%- assign parent_id = current[0].nav-parent_id -%}
+{%- else -%}
+  {%- break -%}
+{%- endif -%}
+  {%- else -%}
+{%- break -%}
+  {%- endif -%}
+{%- endfor -%}
+
+{%- if page.language == "en" -%}
+  {%- capture baseurl_i18n -%}{{ site.baseurl }}{%- endcapture -%}
+{%- else if page.language == "zh" -%}
+  {%- capture baseurl_i18n -%}{{ site.baseurl }}/{{ page.language }}{%- 
endcapture -%}
+{%- endif -%}
+
+{%- comment -%}
 ==
 Build the nested list from nav-id and nav-parent_id relations.
 ==
@@ -63,77 +63,77 @@ Level 0 is made up of all pages, which have nav-parent_id 
set to 'root'.
 The 'title' of the page is used as the default link text. You can
 override this via 'nav-title'. The relative position per navigational
 level is determined by 'nav-pos'.
-{% endcomment %}
+{%- endcomment -%}
 
-{% assign elementsPosStack = site.array %}
-{% assign posStack = site.array %}
+{%- assign elementsPosStack = site.array -%}
+{%- assign posStack = site.array -%}
 
-{% assign elements = site.array %}
-{% assign children = (site.pages_by_language[page.language] | where: 
"nav-parent_id" , "root" | sort: "nav-pos") %}
-{% if children.size > 0 %}
-  {% assign elements = elements | push: children %}
-{% endif %}
+{%- assign elements = site.array -%}
+{%- assign children = (site.pages_by_language[page.language] | where: 
"nav-parent_id" , "root" | sort: "nav-pos") -%}
+{%- if children.size > 0 -%}
+  {%- assign elements = elements | push: children -%}
+{%- endif -%}
 
-{% assign elementsPos = 0 %}
-{% assign pos = 0 %}
+{%- assign elementsPos = 0 -%}
+{%- assign pos = 0 -%}
 
 
v{{ 
site.version_title }}
 
 
-{% for i in (1..1) %}
-  {% if pos >= elements[elementsPos].size %}
-{% if elementsPos == 0 %}
-  {% break %}
-{% else %}
-  {% assign elementsPos = elementsPosStack | last %}
-  {% assign pos = posStack | last %}
+{%- for i in (1..1) -%}
+  {%- if pos >= elements[elementsPos].size -%}
+{%- if elementsPos == 0 -%}
+  {%- break -%}
+{%- else -%}
+  {%- assign elementsPos = elementsPosStack | last -%}
+  {%- assign pos = posStack | last %}
 
-  {% assign 

[flink] branch master updated (13b1b40 -> ac1b8db)

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 13b1b40  [hotfix][travis] Remove duplicate mvn logging options for mvn 
verify
 new a55dc42  [hotfix][docs] Correct method name in 
KeyedStateReaderFunction example
 new 2c6441b  [hotfix][JavaDocs] Correct comment in KeyedStream
 new 9bda229  [hotfix][table api] Fix logger arguments in CatalogManager
 new b1c2e21  [FLINK-13728][docs] Fix wrong closing tag order in sidenav
 new e670293  [FLINK-13724][docs] Remove unnecessary whitespace from the 
generated pages
 new ac375e4  [FLINK-13723][docs] Use liquid-c for faster doc generation
 new ef74a61  [FLINK-13729][docs] Update website generation dependencies
 new 065de4b  [FLINK-13725][docs] use sassc for faster doc generation
 new f802e16  [hotfix][docs] Temporarily disable liveserve
 new ac1b8db  [FLINK-13726][docs] Build docs with jekyll 4.0.0.pre.beta1

The 10 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/.gitignore|   3 +-
 docs/Gemfile   |  20 ++-
 docs/Gemfile.lock  |  91 +-
 docs/README.md |   3 +-
 docs/_includes/sidenav.html| 184 ++---
 docs/_layouts/base.html|   6 +-
 docs/_layouts/plain.html   |  50 +++---
 docs/build_docs.sh |   2 +-
 docs/dev/libs/state_processor_api.md   |   2 +-
 docs/dev/libs/state_processor_api.zh.md|   2 +-
 .../streaming/api/datastream/KeyedStream.java  |   2 +-
 .../apache/flink/table/catalog/CatalogManager.java |   4 +-
 12 files changed, 190 insertions(+), 179 deletions(-)



[flink] 04/10: [FLINK-13728][docs] Fix wrong closing tag order in sidenav

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit b1c2e213302cd68758761c60a1ccff85c5c67203
Author: Nico Kruber 
AuthorDate: Wed Aug 14 15:59:50 2019 +0200

[FLINK-13728][docs] Fix wrong closing tag order in sidenav

This closes #9439
---
 docs/_includes/sidenav.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/_includes/sidenav.html b/docs/_includes/sidenav.html
index 73edab1..cc787d9 100644
--- a/docs/_includes/sidenav.html
+++ b/docs/_includes/sidenav.html
@@ -88,7 +88,7 @@ level is determined by 'nav-pos'.
 {% else %}
   {% assign elementsPos = elementsPosStack | last %}
   {% assign pos = posStack | last %}
-
+
   {% assign elementsPosStack = elementsPosStack | pop %}
   {% assign posStack = posStack | pop %}
 {% endif %}



[flink] 02/10: [hotfix][JavaDocs] Correct comment in KeyedStream

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 2c6441bbbe67f063b8d5c202b56e78083cb40eee
Author: stayhsfLee 
AuthorDate: Thu Aug 8 21:42:46 2019 +0800

[hotfix][JavaDocs] Correct comment in KeyedStream

This closes #9395
---
 .../java/org/apache/flink/streaming/api/datastream/KeyedStream.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/KeyedStream.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/KeyedStream.java
index 84df716..8c7937d 100644
--- 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/KeyedStream.java
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/KeyedStream.java
@@ -793,7 +793,7 @@ public class KeyedStream<T, KEY> extends DataStream<T> {
 * per key.
 *
 * @param positionToMax
-*The field position in the data points to minimize. This 
is applicable to
+*The field position in the data points to maximize. This 
is applicable to
 *Tuple types, Scala case classes, and primitive types 
(which is considered
 *as having one field).
 * @return The transformed DataStream.
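
For context, the corrected javadoc belongs to KeyedStream#maxBy(int), which emits, per key, the element whose value at the given tuple position is the largest seen so far. Below is a minimal sketch of what positionToMax means in practice, assuming the positional keyBy(int)/maxBy(int) API of this Flink generation; the class name and sample data are made up for illustration:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MaxByExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // (word, count) pairs: the key is field 0, the field to maximize is field 1.
        DataStream<Tuple2<String, Integer>> counts = env.fromElements(
                Tuple2.of("flink", 3),
                Tuple2.of("flink", 7),
                Tuple2.of("jekyll", 1));

        counts.keyBy(0)   // partition by the word in field 0
              .maxBy(1)   // per key, emit the record whose field 1 is largest so far
              .print();

        env.execute("maxBy example");
    }
}

With this input, the "flink" key eventually emits ("flink", 7), which is exactly the behavior the corrected word "maximize" describes.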



[flink] 01/10: [hotfix][docs] Correct method name in KeyedStateReaderFunction example

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a55dc42e6762e3e74ed36129a9dd6fc4c51f646f
Author: David Anderson 
AuthorDate: Fri Aug 23 10:29:23 2019 +0200

[hotfix][docs] Correct method name in KeyedStateReaderFunction example

This closes #9520
---
 docs/dev/libs/state_processor_api.md| 2 +-
 docs/dev/libs/state_processor_api.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/dev/libs/state_processor_api.md 
b/docs/dev/libs/state_processor_api.md
index 676ac49..75a6f12 100644
--- a/docs/dev/libs/state_processor_api.md
+++ b/docs/dev/libs/state_processor_api.md
@@ -290,7 +290,7 @@ class ReaderFunction extends 
KeyedStateReaderFunction<Integer, KeyedState> {
   }
  
   @Override
-  public void processKey(
+  public void readKey(
 Integer key,
 Context ctx,
 Collector<KeyedState> out) throws Exception {
diff --git a/docs/dev/libs/state_processor_api.zh.md 
b/docs/dev/libs/state_processor_api.zh.md
index 676ac49..75a6f12 100644
--- a/docs/dev/libs/state_processor_api.zh.md
+++ b/docs/dev/libs/state_processor_api.zh.md
@@ -290,7 +290,7 @@ class ReaderFunction extends 
KeyedStateReaderFunction<Integer, KeyedState> {
   }
  
   @Override
-  public void processKey(
+  public void readKey(
 Integer key,
 Context ctx,
 Collector<KeyedState> out) throws Exception {
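
For context, readKey (not processKey) is the abstract method that KeyedStateReaderFunction implementations must override, so the example as previously written would not compile: the @Override annotation did not match any superclass method. Below is a self-contained sketch in the spirit of the docs example; the state name ("state") and the KeyedState POJO are assumptions for illustration:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.util.Collector;

// Illustrative output type holding one key/value pair read from the savepoint.
class KeyedState {
    public int key;
    public int value;
}

class ReaderFunction extends KeyedStateReaderFunction<Integer, KeyedState> {

    private transient ValueState<Integer> state;

    @Override
    public void open(Configuration parameters) {
        // The descriptor must match the one used when the state was written.
        state = getRuntimeContext().getState(
                new ValueStateDescriptor<>("state", Integer.class));
    }

    @Override
    public void readKey(   // the corrected method name
            Integer key,
            Context ctx,
            Collector<KeyedState> out) throws Exception {
        KeyedState data = new KeyedState();
        data.key = key;
        data.value = state.value();
        out.collect(data);
    }
}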



[flink] 10/10: [FLINK-13726][docs] Build docs with jekyll 4.0.0.pre.beta1

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ac1b8dbf15c405d0646671a138a53c9953153165
Author: Nico Kruber 
AuthorDate: Wed Aug 14 23:05:00 2019 +0200

[FLINK-13726][docs] Build docs with jekyll 4.0.0.pre.beta1

This significantly reduces the build times, on my machine from 140s to 47s!

This closes #9444
---
 docs/.gitignore   |  3 ++-
 docs/Gemfile  |  4 ++--
 docs/Gemfile.lock | 19 +++
 docs/README.md|  3 +--
 4 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/docs/.gitignore b/docs/.gitignore
index 98b6f6b..6b3ce42 100644
--- a/docs/.gitignore
+++ b/docs/.gitignore
@@ -1,6 +1,7 @@
 .bundle/
 .jekyll-metadata
+.jekyll-cache/
 .rubydeps/
 content/
 ruby2/.bundle/
-ruby2/.rubydeps/
\ No newline at end of file
+ruby2/.rubydeps/
diff --git a/docs/Gemfile b/docs/Gemfile
index ef5086e..f7ff66a 100644
--- a/docs/Gemfile
+++ b/docs/Gemfile
@@ -18,9 +18,9 @@
 
 source 'https://rubygems.org'
 
-ruby '>= 2.1.0'
+ruby '>= 2.4.0'
 
-gem 'jekyll', '3.7.2'
+gem 'jekyll', '4.0.0.pre.beta1'
 gem 'addressable', '2.6.0'
 gem 'octokit', '4.14.0'
 gem 'therubyracer', '0.12.3'
diff --git a/docs/Gemfile.lock b/docs/Gemfile.lock
index f6aedee..185af92 100644
--- a/docs/Gemfile.lock
+++ b/docs/Gemfile.lock
@@ -14,20 +14,21 @@ GEM
 ffi (1.11.1)
 forwardable-extended (2.6.0)
 http_parser.rb (0.6.0)
-i18n (0.9.5)
+i18n (1.6.0)
   concurrent-ruby (~> 1.0)
-jekyll (3.7.2)
+jekyll (4.0.0.pre.beta1)
   addressable (~> 2.4)
   colorator (~> 1.0)
   em-websocket (~> 0.5)
-  i18n (~> 0.7)
+  i18n (>= 0.9.5, < 2)
   jekyll-sass-converter (~> 1.0)
   jekyll-watch (~> 2.0)
-  kramdown (~> 1.14)
+  kramdown (~> 2.1)
+  kramdown-parser-gfm (~> 1.0)
   liquid (~> 4.0)
   mercenary (~> 0.3.3)
   pathutil (~> 0.9)
-  rouge (>= 1.7, < 4)
+  rouge (~> 3.0)
   safe_yaml (~> 1.0)
 jekyll-multiple-languages (2.0.3)
 jekyll-paginate (1.1.0)
@@ -36,7 +37,9 @@ GEM
 jekyll-watch (2.2.1)
   listen (~> 3.0)
 json (2.2.0)
-kramdown (1.17.0)
+kramdown (2.1.0)
+kramdown-parser-gfm (1.1.0)
+  kramdown (~> 2.0)
 libv8 (3.16.14.19)
 liquid (4.0.3)
 liquid-c (4.0.0)
@@ -80,7 +83,7 @@ PLATFORMS
 
 DEPENDENCIES
   addressable (= 2.6.0)
-  jekyll (= 3.7.2)
+  jekyll (= 4.0.0.pre.beta1)
   jekyll-multiple-languages (= 2.0.3)
   jekyll-paginate (= 1.1.0)
   json (= 2.2.0)
@@ -90,7 +93,7 @@ DEPENDENCIES
   therubyracer (= 0.12.3)
 
 RUBY VERSION
-   ruby 2.3.1p112
+   ruby 2.6.3p62
 
 BUNDLED WITH
1.17.2
diff --git a/docs/README.md b/docs/README.md
index 924fcba..c3d5f63 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -42,8 +42,7 @@ If you call the script with the preview flag `build_docs.sh 
-p`, Jekyll will
 start a web server at `localhost:4000` and watch the docs directory for
 updates. Use this mode to preview changes locally. 
 
-If you have ruby 2.0 or greater, 
-you can call the script with the incremental flag `build_docs.sh -i`.
+You can call the script with the incremental flag `build_docs.sh -i`.
 Jekyll will then serve a live preview at `localhost:4000`,
 and it will be much faster because it will only rebuild the pages corresponding
 to files that are modified. Note that if you are making changes that affect



[flink] 09/10: [hotfix][docs] Temporarily disable liveserve

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit f802e16b06b0c3a3682af7f9017f9c0a69e5d4de
Author: Nico Kruber 
AuthorDate: Wed Aug 14 23:00:09 2019 +0200

[hotfix][docs] Temporarily disable liveserve

./build_docs.sh -i previously not only enabled incremental documentation
building while serving the docs, it also enabled a 'liveserve' mode that
automatically reloaded pages in the browser when they changed. This mode is
based on the 'hawkins' module, which is not (yet) compatible with jekyll 4.0,
which we need in order to (significantly) improve build times.

This disables the liveserve mode and removes the hawkins module until a new
version is available.
---
 docs/Gemfile   | 6 +++---
 docs/Gemfile.lock  | 4 
 docs/build_docs.sh | 2 +-
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/docs/Gemfile b/docs/Gemfile
index 70bd4df..ef5086e 100644
--- a/docs/Gemfile
+++ b/docs/Gemfile
@@ -30,6 +30,6 @@ gem 'jekyll-paginate', '1.1.0'
 gem 'liquid-c', '4.0.0' # speed-up site generation
 gem 'sassc', '2.0.1' # speed-up site generation
 
-group :jekyll_plugins do
-  gem 'hawkins'
-end
+# group :jekyll_plugins do
+#   gem 'hawkins'
+# end
diff --git a/docs/Gemfile.lock b/docs/Gemfile.lock
index d8bd82e..f6aedee 100644
--- a/docs/Gemfile.lock
+++ b/docs/Gemfile.lock
@@ -13,9 +13,6 @@ GEM
   multipart-post (>= 1.2, < 3)
 ffi (1.11.1)
 forwardable-extended (2.6.0)
-hawkins (2.0.5)
-  em-websocket (~> 0.5)
-  jekyll (~> 3.1)
 http_parser.rb (0.6.0)
 i18n (0.9.5)
   concurrent-ruby (~> 1.0)
@@ -83,7 +80,6 @@ PLATFORMS
 
 DEPENDENCIES
   addressable (= 2.6.0)
-  hawkins
   jekyll (= 3.7.2)
   jekyll-multiple-languages (= 2.0.3)
   jekyll-paginate (= 1.1.0)
diff --git a/docs/build_docs.sh b/docs/build_docs.sh
index 1ab46f1..aecac83 100755
--- a/docs/build_docs.sh
+++ b/docs/build_docs.sh
@@ -64,7 +64,7 @@ while getopts "piez" opt; do
;;
i)
[[ `${RUBY} -v` =~ 'ruby 1' ]] && echo "Error: building the 
docs with the incremental option requires at least ruby 2.0" && exit 1
-   JEKYLL_CMD="liveserve --baseurl= --watch --incremental"
+   JEKYLL_CMD="serve --baseurl= --watch --incremental"
;;
e)
JEKYLL_CONFIG="--config _config.yml,_config_dev_en.yml"



[flink] 03/10: [hotfix][table api] Fix logger arguments in CatalogManager

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 9bda229dd362091c9889c026fb77928451faa7c7
Author: Jeff Zhang 
AuthorDate: Fri Aug 9 14:42:58 2019 +0800

[hotfix][table api] Fix logger arguments in CatalogManager

This closes #9401
---
 .../src/main/java/org/apache/flink/table/catalog/CatalogManager.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
index 839d5a9..5647709 100644
--- 
a/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
+++ 
b/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
@@ -182,8 +182,8 @@ public class CatalogManager {
 
LOG.info(
"Set the current default database as [{}] in 
the current default catalog [{}].",
-   currentCatalogName,
-   currentDatabaseName);
+   currentDatabaseName,
+   currentCatalogName);
}
}
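
The bug here is an easy one to make: SLF4J fills {} placeholders strictly in argument order, so with the arguments swapped the message printed the catalog name in the database slot and vice versa. A standalone sketch of the pitfall and the fix, with made-up values for illustration:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlaceholderOrderDemo {
    private static final Logger LOG = LoggerFactory.getLogger(PlaceholderOrderDemo.class);

    public static void main(String[] args) {
        String currentCatalogName = "default_catalog";
        String currentDatabaseName = "default_database";

        // Buggy: SLF4J substitutes arguments positionally, so the first {}
        // receives the catalog name even though the text asks for the database.
        LOG.info("Set the current default database as [{}] in the current default catalog [{}].",
                currentCatalogName, currentDatabaseName);

        // Fixed: pass the arguments in the order the placeholders expect.
        LOG.info("Set the current default database as [{}] in the current default catalog [{}].",
                currentDatabaseName, currentCatalogName);
    }
}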
 



[flink] 08/10: [FLINK-13725][docs] use sassc for faster doc generation

2019-08-26 Thread sewen
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 065de4b573a05b0c3436ff2d3af3e0c16589a1a7
Author: Nico Kruber 
AuthorDate: Wed Aug 14 17:29:43 2019 +0200

[FLINK-13725][docs] use sassc for faster doc generation

Jekyll requires sass but can optionally also use a C-based implementation
provided by sassc. Although we do not use sass directly, there may be some
indirect use inside jekyll. It doesn't seem to hurt to upgrade here.

This closes #9443
---
 docs/Gemfile  | 1 +
 docs/Gemfile.lock | 5 +
 2 files changed, 6 insertions(+)

diff --git a/docs/Gemfile b/docs/Gemfile
index eb307fd..70bd4df 100644
--- a/docs/Gemfile
+++ b/docs/Gemfile
@@ -28,6 +28,7 @@ gem 'json', '2.2.0'
 gem 'jekyll-multiple-languages', '2.0.3'
 gem 'jekyll-paginate', '1.1.0'
 gem 'liquid-c', '4.0.0' # speed-up site generation
+gem 'sassc', '2.0.1' # speed-up site generation
 
 group :jekyll_plugins do
   gem 'hawkins'
diff --git a/docs/Gemfile.lock b/docs/Gemfile.lock
index 68e66d3..d8bd82e 100644
--- a/docs/Gemfile.lock
+++ b/docs/Gemfile.lock
@@ -55,6 +55,7 @@ GEM
 pathutil (0.16.2)
   forwardable-extended (~> 2.6)
 public_suffix (3.1.1)
+rake (12.3.3)
 rb-fsevent (0.10.3)
 rb-inotify (0.10.0)
   ffi (~> 1.0)
@@ -67,6 +68,9 @@ GEM
 sass-listen (4.0.0)
   rb-fsevent (~> 0.9, >= 0.9.4)
   rb-inotify (~> 0.9, >= 0.9.7)
+sassc (2.0.1)
+  ffi (~> 1.9)
+  rake
 sawyer (0.8.2)
   addressable (>= 2.3.5)
   faraday (> 0.8, < 2.0)
@@ -86,6 +90,7 @@ DEPENDENCIES
   json (= 2.2.0)
   liquid-c (= 4.0.0)
   octokit (= 4.14.0)
+  sassc (= 2.0.1)
   therubyracer (= 0.12.3)
 
 RUBY VERSION