[flink] branch master updated (335c47e -> c137102)

2020-07-28 Thread zhijiang
This is an automated email from the ASF dual-hosted git repository.

zhijiang pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 335c47e  [FLINK-18281] Add window stagger to TumblingEventTimeWindow
 add c137102  [FLINK-18595][network] Fix the deadlock of concurrently 
recycling buffer and releasing input channel

No new revisions were added by this update.

Summary of changes:
 .../runtime/io/network/partition/consumer/BufferManager.java   | 10 ++
 1 file changed, 10 insertions(+)
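FLINK-18595 is a lock-ordering deadlock: one thread recycles a buffer while another concurrently releases the input channel, and the two paths acquire the same pair of locks in opposite orders. The sketch below shows the general fix pattern — a single global lock order for both paths — with purely illustrative names; it is not the actual BufferManager patch.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative only: names and structure are NOT the real BufferManager/InputChannel.
// The hazard: the recycle path and the release path each need both locks; if they
// take them in opposite orders, each thread can end up holding one lock and waiting
// forever for the other. The fix pattern: both paths take channelLock first, then
// bufferQueueLock, so a circular wait is impossible.
public class LockOrdering {
    private final Object channelLock = new Object();
    private final Object bufferQueueLock = new Object();
    private int availableBuffers = 0;
    private boolean channelReleased = false;

    void recycleBuffer() {
        synchronized (channelLock) {           // lock 1: always taken first
            synchronized (bufferQueueLock) {   // lock 2: always taken second
                if (!channelReleased) {
                    availableBuffers++;
                }
            }
        }
    }

    void releaseChannel() {
        synchronized (channelLock) {           // same order as recycleBuffer()
            synchronized (bufferQueueLock) {
                channelReleased = true;
                availableBuffers = 0;
            }
        }
    }

    // Runs the two paths concurrently; returns true iff both finish (no deadlock).
    public static boolean runWithoutDeadlock() {
        LockOrdering m = new LockOrdering();
        CountDownLatch done = new CountDownLatch(2);
        Thread recycler = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                m.recycleBuffer();
            }
            done.countDown();
        });
        Thread releaser = new Thread(() -> {
            m.releaseChannel();
            done.countDown();
        });
        recycler.start();
        releaser.start();
        try {
            return done.await(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(runWithoutDeadlock()); // true
    }
}
```

With a consistent order, neither thread can hold the second lock while waiting for the first, which is exactly the circular-wait condition a deadlock needs.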



[flink-web] 01/02: Link blogposts

2020-07-28 Thread nkruber
This is an automated email from the ASF dual-hosted git repository.

nkruber pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 177fe4fbe3028b2b9f9ff00e56ce15665ef4a880
Author: Alexander Fedulov <1492164+afedu...@users.noreply.github.com>
AuthorDate: Thu Jul 2 14:38:40 2020 +0200

Link blogposts

This closes #354.
---
 _posts/2020-01-15-demo-fraud-detection.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_posts/2020-01-15-demo-fraud-detection.md b/_posts/2020-01-15-demo-fraud-detection.md
index 96a3c27..291dde7 100644
--- a/_posts/2020-01-15-demo-fraud-detection.md
+++ b/_posts/2020-01-15-demo-fraud-detection.md
@@ -13,7 +13,7 @@ excerpt: In this series of blog posts you will learn about three powerful Flink
 
 In this series of blog posts you will learn about three powerful Flink patterns for building streaming applications:
 
- - Dynamic updates of application logic
+ - [Dynamic updates of application logic]({{ site.baseurl }}/news/2020/03/24/demo-fraud-detection-2.html)
  - Dynamic data partitioning (shuffle), controlled at runtime
  - Low latency alerting based on custom windowing logic (without using the window API)
 
@@ -219,4 +219,4 @@ In the second part of this series, we will describe how the rules make their way
 
 
 
-In the next article, we will see how Flink's broadcast streams can be utilized to help steer the processing within the Fraud Detection engine at runtime (Dynamic Application Updates pattern).
+In the [next article]({{ site.baseurl }}/news/2020/03/24/demo-fraud-detection-2.html), we will see how Flink's broadcast streams can be utilized to help steer the processing within the Fraud Detection engine at runtime (Dynamic Application Updates pattern).



[flink-web] 02/02: Rebuild website

2020-07-28 Thread nkruber
This is an automated email from the ASF dual-hosted git repository.

nkruber pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 55b6c7c4379ca3ba6dfca5b720c4aa167ab4f779
Author: Nico Kruber 
AuthorDate: Tue Jul 28 16:52:43 2020 +0200

Rebuild website
---
 content/blog/feed.xml | 6 +++---
 content/news/2020/01/15/demo-fraud-detection.html | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index a77152d..4f96e80 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
[Diff body garbled by the archive's HTML stripping. The hunks carry commit 177fe4f's change into the generated files: in content/blog/feed.xml and content/news/2020/01/15/demo-fraud-detection.html, the plain-text phrases "Dynamic updates of application logic" and "next article" become links to /news/2020/03/24/demo-fraud-detection-2.html; a further feed.xml hunk updates an image tag whose markup was lost.]

[flink-web] branch asf-site updated (fb6e73a -> 55b6c7c)

2020-07-28 Thread nkruber
This is an automated email from the ASF dual-hosted git repository.

nkruber pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from fb6e73a  fix links and rebuild
 new 177fe4f  Link blogposts
 new 55b6c7c  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2020-01-15-demo-fraud-detection.md | 4 ++--
 content/blog/feed.xml | 6 +++---
 content/news/2020/01/15/demo-fraud-detection.html | 4 ++--
 3 files changed, 7 insertions(+), 7 deletions(-)



[flink-web] branch asf-site updated (a125d4c -> fb6e73a)

2020-07-28 Thread sjwiesman
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from a125d4c  rebuild website
 add fb6e73a  fix links and rebuild

No new revisions were added by this update.

Summary of changes:
 ...-sql-demo-building-e2e-streaming-application.md | 2 +-
 ...ql-demo-building-e2e-streaming-application.html | 2 +-
 content/blog/feed.xml  | 19067 ++-
 content/blog/index.html|38 +-
 content/index.html |21 +-
 content/zh/index.html  |21 +-
 6 files changed, 10036 insertions(+), 9115 deletions(-)



[flink] branch master updated (2f03841 -> 335c47e)

2020-07-28 Thread aljoscha
This is an automated email from the ASF dual-hosted git repository.

aljoscha pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 2f03841  [FLINK-18581] Do not try to run GC phantom cleaners for jdk < 
8u72
 add 335c47e  [FLINK-18281] Add window stagger to TumblingEventTimeWindow

No new revisions were added by this update.

Summary of changes:
 .../assigners/TumblingEventTimeWindows.java| 36 +-
 .../windowing/assigners/TumblingTimeWindows.java   |  2 +-
 .../windowing/TumblingEventTimeWindowsTest.java| 19 ++--
 3 files changed, 47 insertions(+), 10 deletions(-)
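The heart of this change is arithmetic: each assigner instance draws a one-time stagger offset and computes window starts with an effective offset of `(globalOffset + staggerOffset) % size`, so parallel tumbling windows no longer all fire at the same instant. A standalone sketch of that computation — the formula mirrors Flink's `TimeWindow.getWindowStartWithOffset`, while the surrounding class is illustrative:

```java
public class StaggeredWindowStart {
    // Mirrors TimeWindow.getWindowStartWithOffset(timestamp, offset, windowSize):
    // the largest window start <= timestamp on the offset-shifted window grid.
    static long windowStart(long timestamp, long offset, long size) {
        return timestamp - (timestamp - offset + size) % size;
    }

    // Effective offset as in the patched assignWindows(): the global offset plus
    // a per-instance stagger offset, wrapped back into one window length.
    static long staggeredStart(long timestamp, long globalOffset, long staggerOffset, long size) {
        return windowStart(timestamp, (globalOffset + staggerOffset) % size, size);
    }

    public static void main(String[] args) {
        // Stagger offset 0 (the old ALIGNED behavior): 12345 falls into [10000, 15000)
        System.out.println(staggeredStart(12345, 0, 0, 5000));    // 10000
        // A 1000 ms stagger shifts the whole grid: 12345 now falls into [11000, 16000)
        System.out.println(staggeredStart(12345, 0, 1000, 5000)); // 11000
    }
}
```

`WindowStagger.ALIGNED` (visible in the diff's factory methods) keeps the stagger offset at zero, reproducing the previous behavior; the stagger offset itself is drawn once per assigner instance from the current processing time.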



[flink] branch master updated: [FLINK-18281] Add window stagger to TumblingEventTimeWindow

2020-07-28 Thread aljoscha
This is an automated email from the ASF dual-hosted git repository.

aljoscha pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 335c47e  [FLINK-18281] Add window stagger to TumblingEventTimeWindow
335c47e is described below

commit 335c47e11478358e8514e63ca807ea765ed9dd9a
Author: Niel Hu 
AuthorDate: Fri Jun 12 11:55:05 2020 -0700

[FLINK-18281] Add window stagger to TumblingEventTimeWindow
---
 .../assigners/TumblingEventTimeWindows.java| 36 +-
 .../windowing/assigners/TumblingTimeWindows.java   |  2 +-
 .../windowing/TumblingEventTimeWindowsTest.java| 19 ++--
 3 files changed, 47 insertions(+), 10 deletions(-)

diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingEventTimeWindows.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingEventTimeWindows.java
index 30a49de..e1aabcd 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingEventTimeWindows.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingEventTimeWindows.java
@@ -48,22 +48,30 @@ public class TumblingEventTimeWindows extends WindowAssigner<Object, TimeWindow> {
 
 	private final long size;
 
-	private final long offset;
+	private final long globalOffset;
 
-	protected TumblingEventTimeWindows(long size, long offset) {
+	private Long staggerOffset = null;
+
+	private final WindowStagger windowStagger;
+
+	protected TumblingEventTimeWindows(long size, long offset, WindowStagger windowStagger) {
 		if (Math.abs(offset) >= size) {
 			throw new IllegalArgumentException("TumblingEventTimeWindows parameters must satisfy abs(offset) < size");
 		}
 
 		this.size = size;
-		this.offset = offset;
+		this.globalOffset = offset;
+		this.windowStagger = windowStagger;
 	}
 
 	@Override
 	public Collection<TimeWindow> assignWindows(Object element, long timestamp, WindowAssignerContext context) {
 		if (timestamp > Long.MIN_VALUE) {
+			if (staggerOffset == null) {
+				staggerOffset = windowStagger.getStaggerOffset(context.getCurrentProcessingTime(), size);
+			}
 			// Long.MIN_VALUE is currently assigned when no timestamp is present
-			long start = TimeWindow.getWindowStartWithOffset(timestamp, offset, size);
+			long start = TimeWindow.getWindowStartWithOffset(timestamp, (globalOffset + staggerOffset) % size, size);
 			return Collections.singletonList(new TimeWindow(start, start + size));
 		} else {
 			throw new RuntimeException("Record has Long.MIN_VALUE timestamp (= no timestamp marker). " +
@@ -90,7 +98,7 @@ public class TumblingEventTimeWindows extends WindowAssigner<Object, TimeWindow> {
 	 * @return The time policy.
 	 */
 	public static TumblingEventTimeWindows of(Time size) {
-		return new TumblingEventTimeWindows(size.toMilliseconds(), 0);
+		return new TumblingEventTimeWindows(size.toMilliseconds(), 0, WindowStagger.ALIGNED);
 	}
 
 	/**
@@ -108,10 +116,24 @@ public class TumblingEventTimeWindows extends WindowAssigner<Object, TimeWindow> {
 	 *
 	 * @param size The size of the generated windows.
 	 * @param offset The offset which window start would be shifted by.
-	 * @return The time policy.
 	 */
 	public static TumblingEventTimeWindows of(Time size, Time offset) {
-		return new TumblingEventTimeWindows(size.toMilliseconds(), offset.toMilliseconds());
+		return new TumblingEventTimeWindows(size.toMilliseconds(), offset.toMilliseconds(), WindowStagger.ALIGNED);
+	}
+
+
+	/**
+	 * Creates a new {@code TumblingEventTimeWindows} {@link WindowAssigner} that assigns
+	 * elements to time windows based on the element timestamp, offset and a staggering offset,
+	 * depending on the staggering policy.
+	 *
+	 * @param size The size of the generated windows.
+	 * @param offset The globalOffset which window start would be shifted by.
+	 * @param windowStagger The utility that produces staggering offset in runtime.
+	 */
+	@PublicEvolving
+	public static TumblingEventTimeWindows of(Time size, Time offset, WindowStagger windowStagger) {
+		return new TumblingEventTimeWindows(size.toMilliseconds(), offset.toMilliseconds(), windowStagger);
 	}
 
 	@Override
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingTimeWindows.jav

[flink-web] 02/06: Update contents to 1.11.0

2020-07-28 Thread sjwiesman
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 4d92336fcc08be53bf53787d63ed71f413742439
Author: Jark Wu 
AuthorDate: Sun Jul 5 11:19:17 2020 +0800

Update contents to 1.11.0
---
 ...-sql-demo-building-e2e-streaming-application.md | 182 -
 img/blog/2020-05-03-flink-sql-demo/image1.gif  | Bin 0 -> 964219 bytes
 img/blog/2020-05-03-flink-sql-demo/image1.png  | Bin 169097 -> 0 bytes
 img/blog/2020-05-03-flink-sql-demo/image2.png  | Bin 82835 -> 0 bytes
 img/blog/2020-05-03-flink-sql-demo/image3.png  | Bin 509096 -> 99027 bytes
 img/blog/2020-05-03-flink-sql-demo/image4.jpg  | Bin 0 -> 418490 bytes
 img/blog/2020-05-03-flink-sql-demo/image4.png  | Bin 189699 -> 0 bytes
 img/blog/2020-05-03-flink-sql-demo/image5.jpg  | Bin 0 -> 262516 bytes
 img/blog/2020-05-03-flink-sql-demo/image5.png  | Bin 125472 -> 0 bytes
 img/blog/2020-05-03-flink-sql-demo/image6.jpg  | Bin 0 -> 296556 bytes
 img/blog/2020-05-03-flink-sql-demo/image6.png  | Bin 161101 -> 0 bytes
 img/blog/2020-05-03-flink-sql-demo/image7.jpg  | Bin 0 -> 336316 bytes
 img/blog/2020-05-03-flink-sql-demo/image7.png  | Bin 161440 -> 0 bytes
 img/blog/2020-05-03-flink-sql-demo/image8.jpg  | Bin 0 -> 322865 bytes
 img/blog/2020-05-03-flink-sql-demo/image8.png  | Bin 151899 -> 0 bytes
 15 files changed, 70 insertions(+), 112 deletions(-)

diff --git 
a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md 
b/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
index 0af26a8..14450a5 100644
--- a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
+++ b/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
@@ -7,33 +7,35 @@ authors:
 - jark:
   name: "Jark Wu"
   twitter: "JarkWu"
-excerpt: Apache Flink 1.10 has released many exciting new features, including many developments in Flink SQL which is evolving at a fast pace. This article takes a closer look at how to quickly build streaming applications with Flink SQL from a practical point of view.
+excerpt: Apache Flink 1.11 has released many exciting new features, including many developments in Flink SQL which is evolving at a fast pace. This article takes a closer look at how to quickly build streaming applications with Flink SQL from a practical point of view.
 ---
 
-Apache Flink 1.10.0 has released many exciting new features, including many developments in Flink SQL which is evolving at a fast pace. This article takes a closer look at how to quickly build streaming applications with Flink SQL from a practical point of view.
+Apache Flink 1.11.0 has released many exciting new features, including many developments in Flink SQL which is evolving at a fast pace. This article takes a closer look at how to quickly build streaming applications with Flink SQL from a practical point of view.
 
 In the following sections, we describe how to integrate Kafka, MySQL, Elasticsearch, and Kibana with Flink SQL to analyze e-commerce user behavior in real-time. All exercises in this article are performed in the Flink SQL CLI, while the entire process uses standard SQL syntax, without a single line of Java or Scala code or IDE installation. The final result of this demo is shown in the following figure:
 
 
-
+
 
 
 
 # Preparation
 
-Prepare a Linux or MacOS computer with Docker and Java 8 installed. A Java environment is required because we will install and run a Flink cluster in the host environment, not in a Docker container.
+Prepare a Linux or MacOS computer with Docker installed.
 
 ## Use Docker Compose to Start Demo Environment
 
-The components required in this demo (except for Flink) are all managed in containers, so we will use `docker-compose` to start them. First, download the `docker-compose.yml` file that defines the demo environment, for example by running the following commands:
+The components required in this demo are all managed in containers, so we will use `docker-compose` to start them. First, download the `docker-compose.yml` file that defines the demo environment, for example by running the following commands:
 
 ```
-mkdir flink-demo; cd flink-demo;
-wget https://raw.githubusercontent.com/wuchong/flink-sql-demo/master/docker-compose.yml
+mkdir flink-sql-demo; cd flink-sql-demo;
+wget https://raw.githubusercontent.com/wuchong/flink-sql-demo/v1.11-EN/docker-compose.yml
 ```
 
 The Docker Compose environment consists of the following containers:
 
+- **Flink SQL CLI:** It's used to submit queries and visualize their results.
+- **Flink Cluster:** A Flink master and a Flink worker container to execute queries.
 - **MySQL:** MySQL 5.7 and a `category` table in the database. The `category` table will be joined with data in Kafka to enrich the real-time data.
 - **Kafka:** It is mainly used as a

[flink-web] 04/06: rename file and images to 07-28

2020-07-28 Thread sjwiesman
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit a6cd0ba98957f26be8312bd101a132905e5b67fb
Author: Jark Wu 
AuthorDate: Mon Jul 27 15:08:15 2020 +0800

rename file and images to 07-28
---
 ...28-flink-sql-demo-building-e2e-streaming-application.md} |  12 ++--
 .../img/blog/2020-07-28-flink-sql-demo}/image1.gif  | Bin
 .../img/blog/2020-07-28-flink-sql-demo}/image3.png  | Bin
 .../img/blog/2020-07-28-flink-sql-demo}/image4.jpg  | Bin
 .../img/blog/2020-07-28-flink-sql-demo}/image5.jpg  | Bin
 .../img/blog/2020-07-28-flink-sql-demo}/image6.jpg  | Bin
 .../img/blog/2020-07-28-flink-sql-demo}/image7.jpg  | Bin
 .../img/blog/2020-07-28-flink-sql-demo}/image8.jpg  | Bin
 .../image1.gif  | Bin
 .../image3.png  | Bin
 .../image4.jpg  | Bin
 .../image5.jpg  | Bin
 .../image6.jpg  | Bin
 .../image7.jpg  | Bin
 .../image8.jpg  | Bin
 15 files changed, 6 insertions(+), 6 deletions(-)

diff --git 
a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md 
b/_posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md
similarity index 97%
rename from 
_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
rename to _posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md
index 0950b45..bc5bac0 100644
--- a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
+++ b/_posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md
@@ -73,7 +73,7 @@ The command starts the SQL CLI client in the container.
 You should see the welcome screen of the CLI client.
 
 
-
+
 
 
 
@@ -156,7 +156,7 @@ Here, we use the built-in `HOUR` function to extract the 
value for each hour in
 After running the previous query in the Flink SQL CLI, we can observe the 
submitted task on the [Flink Web UI](http://localhost:8081). This task is a 
streaming task and therefore runs continuously.
 
 
-
+
 
 
 
@@ -172,14 +172,14 @@ Since we are using the TUMBLE window of one hour here, it 
might take about four
 Click "Discover" in the left-side toolbar. Kibana lists the content of the 
created index.
 
 
-
+
 
 
 
 Next, create a dashboard to display various views. Click "Dashboard" on the 
left side of the page to create a dashboard named "User Behavior Analysis". 
Then, click "Create New" to create a new view. Select "Area" (area graph), then 
select the `buy_cnt_per_hour` index, and draw the trading volume area chart as 
illustrated in the configuration on the left side of the following diagram. 
Apply the changes by clicking the “▶” play button. Then, save it as "Hourly 
Trading Volume".
 
 
-
+
 
 
 
@@ -226,7 +226,7 @@ GROUP BY date_str;
 After submitting this query, we create a `cumulative_uv` index pattern in 
Kibana. We then create a "Line" (line graph) on the dashboard, by selecting the 
`cumulative_uv` index, and drawing the cumulative UV curve according to the 
configuration on the left side of the following figure before finally saving 
the curve.
 
 
-
+
 
 
 
@@ -290,7 +290,7 @@ GROUP BY category_name;
 After submitting the query, we create a `top_category` index pattern in 
Kibana. We then  create a "Horizontal Bar" (bar graph) on the dashboard, by 
selecting the `top_category` index and drawing the category ranking according 
to the configuration on the left side of the following diagram before finally 
saving the list.
 
 
-
+
 
 
 
diff --git a/img/blog/2020-05-03-flink-sql-demo/image1.gif 
b/content/img/blog/2020-07-28-flink-sql-demo/image1.gif
similarity index 100%
copy from img/blog/2020-05-03-flink-sql-demo/image1.gif
copy to content/img/blog/2020-07-28-flink-sql-demo/image1.gif
diff --git a/img/blog/2020-05-03-flink-sql-demo/image3.png 
b/content/img/blog/2020-07-28-flink-sql-demo/image3.png
similarity index 100%
copy from img/blog/2020-05-03-flink-sql-demo/image3.png
copy to content/img/blog/2020-07-28-flink-sql-demo/image3.png
diff --git a/img/blog/2020-05-03-flink-sql-demo/image4.jpg 
b/content/img/blog/2020-07-28-flink-sql-demo/image4.jpg
similarity index 100%
copy from img/blog/2020-05-03-flink-sql-demo/image4.jpg
copy to content/img/blog/2020-07-28-flink-sql-demo/image4.jpg
diff --git a/img/blog/2020-05-03-flink-sql-demo/image5.jpg 
b/content/img/blog/2020-07-28-flink-sql-demo/image5.jpg
similarity index 100%
copy from img/blog/2020-05-03-flink-sql-demo/image5.jpg
copy to content/img/blog/2020-07-28-flink-sql-demo/image5.jpg
diff --git a/img/blog/2020-05-03-flink-sql-demo/image6.jpg 
b/content/img/blog/2020-07-28-flink-sql-demo/image6.jpg
simi

[flink-web] 05/06: Update 1.11.0 to 1.11

2020-07-28 Thread sjwiesman
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 88642c1bb12d6c9cf7f3f0e56a5343c1087d314f
Author: Jark Wu 
AuthorDate: Mon Jul 27 15:23:55 2020 +0800

Update 1.11.0 to 1.11
---
 _posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md b/_posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md
index bc5bac0..0c935c2 100644
--- a/_posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md
+++ b/_posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md
@@ -9,7 +9,7 @@ authors:
 excerpt: Apache Flink 1.11 has released many exciting new features, including many developments in Flink SQL which is evolving at a fast pace. This article takes a closer look at how to quickly build streaming applications with Flink SQL from a practical point of view.
 ---
 
-Apache Flink 1.11.0 has released many exciting new features, including many developments in Flink SQL which is evolving at a fast pace. This article takes a closer look at how to quickly build streaming applications with Flink SQL from a practical point of view.
+Apache Flink 1.11 has released many exciting new features, including many developments in Flink SQL which is evolving at a fast pace. This article takes a closer look at how to quickly build streaming applications with Flink SQL from a practical point of view.
 
 In the following sections, we describe how to integrate Kafka, MySQL, Elasticsearch, and Kibana with Flink SQL to analyze e-commerce user behavior in real-time. All exercises in this blogpost are performed in the Flink SQL CLI, and the entire process uses standard SQL syntax, without a single line of Java/Scala code or IDE installation. The final result of this demo is shown in the following figure:
 



[flink-web] branch asf-site updated (cbe7390 -> a125d4c)

2020-07-28 Thread sjwiesman
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from cbe7390  Rebuild website
 new 1ae017d  Add Blog: "Flink SQL Demo: Building an End to End Streaming 
Application"
 new 4d92336  Update contents to 1.11.0
 new adb8deb  Apply suggestions from code review
 new a6cd0ba  rename file and images to 07-28
 new 88642c1  Update 1.11.0 to 1.11
 new a125d4c  rebuild website

The 6 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 ...-sql-demo-building-e2e-streaming-application.md | 305 
 ...ql-demo-building-e2e-streaming-application.html | 536 +
 content/blog/page10/index.html |  42 +-
 content/blog/page11/index.html |  40 +-
 content/blog/page12/index.html |  45 +-
 content/blog/{page6 => page13}/index.html  | 164 +--
 content/blog/page2/index.html  |  38 +-
 content/blog/page3/index.html  |  40 +-
 content/blog/page4/index.html  |  40 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  42 +-
 content/blog/page7/index.html  |  45 +-
 content/blog/page8/index.html  |  43 +-
 content/blog/page9/index.html  |  40 +-
 .../img/blog/2020-07-28-flink-sql-demo/image1.gif  | Bin 0 -> 964219 bytes
 .../img/blog/2020-07-28-flink-sql-demo/image3.png  | Bin 0 -> 99027 bytes
 .../img/blog/2020-07-28-flink-sql-demo/image4.jpg  | Bin 0 -> 418490 bytes
 .../img/blog/2020-07-28-flink-sql-demo/image5.jpg  | Bin 0 -> 262516 bytes
 .../img/blog/2020-07-28-flink-sql-demo/image6.jpg  | Bin 0 -> 296556 bytes
 .../img/blog/2020-07-28-flink-sql-demo/image7.jpg  | Bin 0 -> 336316 bytes
 .../img/blog/2020-07-28-flink-sql-demo/image8.jpg  | Bin 0 -> 322865 bytes
 img/blog/2020-07-28-flink-sql-demo/image1.gif  | Bin 0 -> 964219 bytes
 img/blog/2020-07-28-flink-sql-demo/image3.png  | Bin 0 -> 99027 bytes
 img/blog/2020-07-28-flink-sql-demo/image4.jpg  | Bin 0 -> 418490 bytes
 img/blog/2020-07-28-flink-sql-demo/image5.jpg  | Bin 0 -> 262516 bytes
 img/blog/2020-07-28-flink-sql-demo/image6.jpg  | Bin 0 -> 296556 bytes
 img/blog/2020-07-28-flink-sql-demo/image7.jpg  | Bin 0 -> 336316 bytes
 img/blog/2020-07-28-flink-sql-demo/image8.jpg  | Bin 0 -> 322865 bytes
 28 files changed, 1141 insertions(+), 319 deletions(-)
 create mode 100644 
_posts/2020-07-28-flink-sql-demo-building-e2e-streaming-application.md
 create mode 100644 
content/2020/07/28/flink-sql-demo-building-e2e-streaming-application.html
 copy content/blog/{page6 => page13}/index.html (86%)
 create mode 100644 content/img/blog/2020-07-28-flink-sql-demo/image1.gif
 create mode 100644 content/img/blog/2020-07-28-flink-sql-demo/image3.png
 create mode 100644 content/img/blog/2020-07-28-flink-sql-demo/image4.jpg
 create mode 100644 content/img/blog/2020-07-28-flink-sql-demo/image5.jpg
 create mode 100644 content/img/blog/2020-07-28-flink-sql-demo/image6.jpg
 create mode 100644 content/img/blog/2020-07-28-flink-sql-demo/image7.jpg
 create mode 100644 content/img/blog/2020-07-28-flink-sql-demo/image8.jpg
 create mode 100644 img/blog/2020-07-28-flink-sql-demo/image1.gif
 create mode 100644 img/blog/2020-07-28-flink-sql-demo/image3.png
 create mode 100644 img/blog/2020-07-28-flink-sql-demo/image4.jpg
 create mode 100644 img/blog/2020-07-28-flink-sql-demo/image5.jpg
 create mode 100644 img/blog/2020-07-28-flink-sql-demo/image6.jpg
 create mode 100644 img/blog/2020-07-28-flink-sql-demo/image7.jpg
 create mode 100644 img/blog/2020-07-28-flink-sql-demo/image8.jpg



[flink-web] 03/06: Apply suggestions from code review

2020-07-28 Thread sjwiesman
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit adb8deb384a3615b1710fe6556a9d69bb410cfdb
Author: Jark Wu 
AuthorDate: Mon Jul 27 14:56:21 2020 +0800

Apply suggestions from code review

Co-authored-by: morsapaes 
---
 ...-sql-demo-building-e2e-streaming-application.md | 73 --
 1 file changed, 40 insertions(+), 33 deletions(-)

diff --git 
a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md 
b/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
index 14450a5..0950b45 100644
--- a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
+++ b/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
@@ -1,8 +1,7 @@
 ---
 layout: post
 title: "Flink SQL Demo: Building an End-to-End Streaming Application"
-date: 2020-05-03T12:00:00.000Z
-categories: news
+date: 2020-07-28T12:00:00.000Z
 authors:
 - jark:
   name: "Jark Wu"
@@ -12,10 +11,10 @@ excerpt: Apache Flink 1.11 has released many exciting new 
features, including ma
 
 Apache Flink 1.11.0 has released many exciting new features, including many 
developments in Flink SQL which is evolving at a fast pace. This article takes 
a closer look at how to quickly build streaming applications with Flink SQL 
from a practical point of view.
 
-In the following sections, we describe how to integrate Kafka, MySQL, 
Elasticsearch, and Kibana with Flink SQL to analyze e-commerce user behavior in 
real-time. All exercises in this article are performed in the Flink SQL CLI, 
while the entire process uses standard SQL syntax, without a single line of 
Java or Scala code or IDE installation. The final result of this demo is shown 
in the following figure:
+In the following sections, we describe how to integrate Kafka, MySQL, 
Elasticsearch, and Kibana with Flink SQL to analyze e-commerce user behavior in 
real-time. All exercises in this blogpost are performed in the Flink SQL CLI, 
and the entire process uses standard SQL syntax, without a single line of 
Java/Scala code or IDE installation. The final result of this demo is shown in 
the following figure:
 
 
-
+
 
 
 
@@ -23,7 +22,7 @@ In the following sections, we describe how to integrate 
Kafka, MySQL, Elasticsea
 
 Prepare a Linux or MacOS computer with Docker installed.
 
-## Use Docker Compose to Start Demo Environment
+## Starting the Demo Environment
 
 The components required in this demo are all managed in containers, so we will 
use `docker-compose` to start them. First, download the `docker-compose.yml` 
file that defines the demo environment, for example by running the following 
commands:
 
@@ -34,16 +33,19 @@ wget 
https://raw.githubusercontent.com/wuchong/flink-sql-demo/v1.11-EN/docker-co
 
 The Docker Compose environment consists of the following containers:
 
-- **Flink SQL CLI:** It's used to submit queries and visualize their results.
-- **Flink Cluster:** A Flink master and a Flink worker container to execute 
queries.
-- **MySQL:** MySQL 5.7 and a `category` table in the database. The `category` 
table will be joined with data in Kafka to enrich the real-time data.
-- **Kafka:** It is mainly used as a data source. The DataGen component 
automatically writes data into a Kafka topic.
-- **Zookeeper:** This component is required by Kafka.
-- **Elasticsearch:** It is mainly used as a data sink.
-- **Kibana:** It's used to visualize the data in Elasticsearch.
-- **DataGen:** It is the data generator. After the container is started, user 
behavior data is automatically generated and sent to the Kafka topic. By 
default, 2000 data entries are generated each second for about 1.5 hours. You 
can modify datagen's `speedup` parameter in `docker-compose.yml` to adjust the 
generation rate (which takes effect after docker compose is restarted).
+- **Flink SQL CLI:** used to submit queries and visualize their results.
+- **Flink Cluster:** a Flink JobManager and a Flink TaskManager container to 
execute queries.
+- **MySQL:** MySQL 5.7 and a pre-populated `category` table in the database. 
The `category` table will be joined with data in Kafka to enrich the real-time 
data.
+- **Kafka:** mainly used as a data source. The DataGen component automatically 
writes data into a Kafka topic.
+- **Zookeeper:** this component is required by Kafka.
+- **Elasticsearch:** mainly used as a data sink.
+- **Kibana:** used to visualize the data in Elasticsearch.
+- **DataGen:** the data generator. After the container is started, user 
behavior data is automatically generated and sent to the Kafka topic. By 
default, 2000 data entries are generated each second for about 1.5 hours. You 
can modify DataGen's `speedup` parameter in `docker-compose.yml` to adjust the 
generation rate (which takes effect after Docker Compose is restarted).
 
-**Important:** Before starting the containers, we recommend configuring Docker so that sufficient resources are available and the environment does not become unresponsive. We suggest running Docker at 3-4 GB memory and 3-4 CPU cores.
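
As a rough back-of-the-envelope illustration of the DataGen defaults described above (this sketch is not part of the demo; it assumes, per our reading of the description, that `speedup` linearly scales the emission rate):

```python
def total_events(rate_per_sec=2000, hours=1.5, speedup=1.0):
    """Approximate number of user-behavior events DataGen emits per run.

    Assumes `speedup` linearly scales the emission rate -- an assumption
    based on the description above, not on documented DataGen behavior.
    """
    return int(rate_per_sec * speedup * hours * 3600)

# Defaults: 2000 events/s for ~1.5 h -> about 10.8 million events per run.
```

With the defaults this works out to roughly 10.8 million events, which gives a sense of how much data flows through Kafka during one run of the demo.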

[flink-web] 01/06: Add Blog: "Flink SQL Demo: Building an End to End Streaming Application"

2020-07-28 Thread sjwiesman
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 1ae017dbedd9c361d04cf46c7eb10bbd7b760d82
Author: Jark Wu 
AuthorDate: Tue May 5 17:33:37 2020 +0800

Add Blog: "Flink SQL Demo: Building an End to End Streaming Application"
---
 ...-sql-demo-building-e2e-streaming-application.md | 340 +
 img/blog/2020-05-03-flink-sql-demo/image1.png  | Bin 0 -> 169097 bytes
 img/blog/2020-05-03-flink-sql-demo/image2.png  | Bin 0 -> 82835 bytes
 img/blog/2020-05-03-flink-sql-demo/image3.png  | Bin 0 -> 509096 bytes
 img/blog/2020-05-03-flink-sql-demo/image4.png  | Bin 0 -> 189699 bytes
 img/blog/2020-05-03-flink-sql-demo/image5.png  | Bin 0 -> 125472 bytes
 img/blog/2020-05-03-flink-sql-demo/image6.png  | Bin 0 -> 161101 bytes
 img/blog/2020-05-03-flink-sql-demo/image7.png  | Bin 0 -> 161440 bytes
 img/blog/2020-05-03-flink-sql-demo/image8.png  | Bin 0 -> 151899 bytes
 9 files changed, 340 insertions(+)

diff --git 
a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md 
b/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
new file mode 100644
index 000..0af26a8
--- /dev/null
+++ b/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
@@ -0,0 +1,340 @@
+---
+layout: post
+title: "Flink SQL Demo: Building an End-to-End Streaming Application"
+date: 2020-05-03T12:00:00.000Z
+categories: news
+authors:
+- jark:
+  name: "Jark Wu"
+  twitter: "JarkWu"
+excerpt: Apache Flink 1.10 has released many exciting new features, including 
many developments in Flink SQL which is evolving at a fast pace. This article 
takes a closer look at how to quickly build streaming applications with Flink 
SQL from a practical point of view.
+---
+
+Apache Flink 1.10.0 has released many exciting new features, including many 
developments in Flink SQL which is evolving at a fast pace. This article takes 
a closer look at how to quickly build streaming applications with Flink SQL 
from a practical point of view.
+
+In the following sections, we describe how to integrate Kafka, MySQL, 
Elasticsearch, and Kibana with Flink SQL to analyze e-commerce user behavior in 
real-time. All exercises in this article are performed in the Flink SQL CLI, 
while the entire process uses standard SQL syntax, without a single line of 
Java or Scala code or IDE installation. The final result of this demo is shown 
in the following figure:
+
+
+
+
+
+
+# Preparation
+
+Prepare a Linux or macOS computer with Docker and Java 8 installed. A Java environment is required because we will install and run a Flink cluster in the host environment, not in a Docker container.
+
+## Use Docker Compose to Start Demo Environment
+
+The components required in this demo (except for Flink) are all managed in 
containers, so we will use `docker-compose` to start them. First, download the 
`docker-compose.yml` file that defines the demo environment, for example by 
running the following commands:
+
+```
+mkdir flink-demo; cd flink-demo;
+wget 
https://raw.githubusercontent.com/wuchong/flink-sql-demo/master/docker-compose.yml
+```
+
+The Docker Compose environment consists of the following containers:
+
+- **MySQL:** MySQL 5.7 and a `category` table in the database. The `category` 
table will be joined with data in Kafka to enrich the real-time data.
+- **Kafka:** It is mainly used as a data source. The DataGen component 
automatically writes data into a Kafka topic.
+- **Zookeeper:** This component is required by Kafka.
+- **Elasticsearch:** It is mainly used as a data sink.
+- **Kibana:** It's used to visualize the data in Elasticsearch.
+- **DataGen:** It is the data generator. After the container is started, user 
behavior data is automatically generated and sent to the Kafka topic. By 
default, 2000 data entries are generated each second for about 1.5 hours. You 
can modify datagen's `speedup` parameter in `docker-compose.yml` to adjust the 
generation rate (which takes effect after docker compose is restarted).
+
+**Important:** Before starting the containers, we recommend configuring Docker 
so that sufficient resources are available and the environment does not become 
unresponsive. We suggest running Docker at 3-4 GB memory and 3-4 CPU cores.
+
+To start all containers, run the following command in the directory that 
contains the `docker-compose.yml` file.
+
+```
+docker-compose up -d
+```
+
+This command automatically starts all the containers defined in the Docker 
Compose configuration in a detached mode. Run `docker ps` to check whether the 
five containers are running properly. You can also visit 
[http://localhost:5601/](http://localhost:5601/) to see if Kibana is running 
normally.
+
+Don’t forget to run the following command to stop all containers after you finish the tutorial:
+
+```
+docker-compose down
+```
+
+## Downloa

[flink-web] 06/06: rebuild website

2020-07-28 Thread sjwiesman
This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit a125d4c0daf2856d5baa7477cb87d95fcd281369
Author: Seth Wiesman 
AuthorDate: Tue Jul 28 08:32:31 2020 -0500

rebuild website
---
 ...ql-demo-building-e2e-streaming-application.html | 536 +
 content/blog/page10/index.html |  42 +-
 content/blog/page11/index.html |  40 +-
 content/blog/page12/index.html |  45 +-
 content/blog/{page6 => page13}/index.html  | 164 +--
 content/blog/page2/index.html  |  38 +-
 content/blog/page3/index.html  |  40 +-
 content/blog/page4/index.html  |  40 +-
 content/blog/page5/index.html  |  40 +-
 content/blog/page6/index.html  |  42 +-
 content/blog/page7/index.html  |  45 +-
 content/blog/page8/index.html  |  43 +-
 content/blog/page9/index.html  |  40 +-
 13 files changed, 836 insertions(+), 319 deletions(-)

diff --git 
a/content/2020/07/28/flink-sql-demo-building-e2e-streaming-application.html 
b/content/2020/07/28/flink-sql-demo-building-e2e-streaming-application.html
new file mode 100644
index 000..9b4f9e1
--- /dev/null
+++ b/content/2020/07/28/flink-sql-demo-building-e2e-streaming-application.html
@@ -0,0 +1,536 @@
+
+
+  
+
+
+
+
+Apache Flink: Flink SQL Demo: Building an End-to-End Streaming 
Application
+
+
+
+
+https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css";>
+
+
+
+
+
+
+
+
+
+
+
+
+
+  
+
+
+
+
+
+
+
+  
+ 
+
+
+
+
+
+
+  
+
+
+
+  
+  
+
+  
+
+  
+
+
+
+
+  
+
+
+
+
+
+
+
+What is Apache 
Flink?
+
+
+
+  
+  Architecture
+  
+  
+  Applications
+  
+  
+  Operations
+  
+
+
+
+
+
+What is Stateful 
Functions?
+
+
+Use Cases
+
+
+Powered By
+
+
+ 
+
+
+
+Downloads
+
+
+
+  Getting Started
+  
+https://ci.apache.org/projects/flink/flink-docs-release-1.11/getting-started/index.html";
 target="_blank">With Flink 
+https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.1/getting-started/project-setup.html";
 target="_blank">With Flink Stateful Functions 
+Training Course
+  
+
+
+
+
+  Documentation
+  
+https://ci.apache.org/projects/flink/flink-docs-release-1.11"; 
target="_blank">Flink 1.11 (Latest stable release) 
+https://ci.apache.org/projects/flink/flink-docs-master"; 
target="_blank">Flink Master (Latest Snapshot) 
+https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.1"; 
target="_blank">Flink Stateful Functions 2.1 (Latest stable release) 

+https://ci.apache.org/projects/flink/flink-statefun-docs-master"; 
target="_blank">Flink Stateful Functions Master (Latest Snapshot) 
+  
+
+
+
+Getting Help
+
+
+Flink Blog
+
+
+
+
+  https://flink-packages.org"; 
target="_blank">flink-packages.org 
+
+ 
+
+
+
+
+Community & Project Info
+
+
+Roadmap
+
+
+How to 
Contribute
+
+
+
+
+  https://github.com/apache/flink"; target="_blank">Flink 
on GitHub 
+
+
+ 
+
+
+
+  
+
+  中文版
+
+  
+
+
+  
+
+  
+  
+
+
+https://twitter.com/apacheflink"; 
target="_blank">@ApacheFlink 
+
+
+Plan Visualizer 
+
+  
+
+https://apache.org"; target="_blank">Apache Software 
Foundation 
+
+
+  
+.smalllinks:link {
+  display: inline-block !important; background: none; 
padding-top: 0px; padding-bottom: 0px; padding-right: 0px; min-width: 75px;
+}
+  
+
+  https://www.apache.org/licenses/"; 
target="_blan

[flink] branch release-1.11 updated: [FLINK-18581] Do not try to run GC phantom cleaners for jdk < 8u72

2020-07-28 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new fe95187  [FLINK-18581] Do not try to run GC phantom cleaners for jdk < 
8u72
fe95187 is described below

commit fe95187edfe742b64a1f7147e57856c931ef05c3
Author: Andrey Zagrebin 
AuthorDate: Wed Jul 22 10:34:49 2020 +0300

[FLINK-18581] Do not try to run GC phantom cleaners for jdk < 8u72

The private JVM method Reference#tryHandlePending was introduced in Java 8u72.
The explicit processing of queued phantom GC cleaners was not exposed before 8u72, and it was not used while reserving JVM direct memory.
Therefore, we can only hope that GC is triggered and the cleaners get processed after some timeout.
This is suboptimal, so this PR changes Flink to not fail if the method is unavailable, but to log a warning suggesting a Java upgrade.

This closes #12981.
---
 .../apache/flink/util/JavaGcCleanerWrapper.java| 33 +-
 .../flink/runtime/memory/UnsafeMemoryBudget.java   |  9 --
 .../flink/runtime/taskexecutor/slot/TaskSlot.java  |  6 +++-
 3 files changed, 31 insertions(+), 17 deletions(-)

diff --git 
a/flink-core/src/main/java/org/apache/flink/util/JavaGcCleanerWrapper.java 
b/flink-core/src/main/java/org/apache/flink/util/JavaGcCleanerWrapper.java
index d02c9c5..1559434 100644
--- a/flink-core/src/main/java/org/apache/flink/util/JavaGcCleanerWrapper.java
+++ b/flink-core/src/main/java/org/apache/flink/util/JavaGcCleanerWrapper.java
@@ -83,7 +83,6 @@ public enum JavaGcCleanerWrapper {
"clean"),
new PendingCleanersRunnerProvider(
name,
-   reflectionUtils,
"tryHandlePending",
LEGACY_WAIT_FOR_REFERENCE_PROCESSING_ARGS,

LEGACY_WAIT_FOR_REFERENCE_PROCESSING_ARG_TYPES));
@@ -126,7 +125,6 @@ public enum JavaGcCleanerWrapper {
"clean"),
new PendingCleanersRunnerProvider(
name,
-   reflectionUtils,
"waitForReferenceProcessing",
JAVA9_WAIT_FOR_REFERENCE_PROCESSING_ARGS,
JAVA9_WAIT_FOR_REFERENCE_PROCESSING_ARG_TYPES));
@@ -189,12 +187,13 @@ public enum JavaGcCleanerWrapper {
private static class CleanerManager {
private final String cleanerName;
private final CleanerFactory cleanerFactory;
+   @Nullable
private final PendingCleanersRunner pendingCleanersRunner;
 
private CleanerManager(
String cleanerName,
CleanerFactory cleanerFactory,
-   PendingCleanersRunner pendingCleanersRunner) {
+   @Nullable PendingCleanersRunner 
pendingCleanersRunner) {
this.cleanerName = cleanerName;
this.cleanerFactory = cleanerFactory;
this.pendingCleanersRunner = pendingCleanersRunner;
@@ -205,7 +204,7 @@ public enum JavaGcCleanerWrapper {
}
 
private boolean tryRunPendingCleaners() throws 
InterruptedException {
-   return pendingCleanersRunner.tryRunPendingCleaners();
+   return pendingCleanersRunner != null && 
pendingCleanersRunner.tryRunPendingCleaners();
}
 
@Override
@@ -303,32 +302,38 @@ public enum JavaGcCleanerWrapper {
private static class PendingCleanersRunnerProvider {
private static final String REFERENCE_CLASS = 
"java.lang.ref.Reference";
private final String cleanerName;
-   private final ReflectionUtils reflectionUtils;
private final String waitForReferenceProcessingName;
private final Object[] waitForReferenceProcessingArgs;
private final Class[] waitForReferenceProcessingArgTypes;
 
private PendingCleanersRunnerProvider(
String cleanerName,
-   ReflectionUtils reflectionUtils,
String waitForReferenceProcessingName,
Object[] waitForReferenceProcessingArgs,
Class[] waitForReferenceProcessingArgTypes) {
this.cleanerName = cleanerName;
-   this.reflectionUtils = reflectionUtils;
this.waitForReferenceProcessingName = 
waitForReferenceProcessingName;
this.waitForReferenceProcessingArgs = 
wai

[flink] branch master updated: [FLINK-18581] Do not try to run GC phantom cleaners for jdk < 8u72

2020-07-28 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 2f03841  [FLINK-18581] Do not try to run GC phantom cleaners for jdk < 
8u72
2f03841 is described below

commit 2f03841d5414f9d4a4b810810317c0250065264e
Author: Andrey Zagrebin 
AuthorDate: Wed Jul 22 10:34:49 2020 +0300

[FLINK-18581] Do not try to run GC phantom cleaners for jdk < 8u72

The private JVM method Reference#tryHandlePending was introduced in Java 8u72.
The explicit processing of queued phantom GC cleaners was not exposed before 8u72, and it was not used while reserving JVM direct memory.
Therefore, we can only hope that GC is triggered and the cleaners get processed after some timeout.
This is suboptimal, so this PR changes Flink to not fail if the method is unavailable, but to log a warning suggesting a Java upgrade.

This closes #12981.
---
 .../apache/flink/util/JavaGcCleanerWrapper.java| 33 +-
 .../flink/runtime/memory/UnsafeMemoryBudget.java   |  9 --
 .../flink/runtime/taskexecutor/slot/TaskSlot.java  |  6 +++-
 3 files changed, 31 insertions(+), 17 deletions(-)

diff --git 
a/flink-core/src/main/java/org/apache/flink/util/JavaGcCleanerWrapper.java 
b/flink-core/src/main/java/org/apache/flink/util/JavaGcCleanerWrapper.java
index d02c9c5..1559434 100644
--- a/flink-core/src/main/java/org/apache/flink/util/JavaGcCleanerWrapper.java
+++ b/flink-core/src/main/java/org/apache/flink/util/JavaGcCleanerWrapper.java
@@ -83,7 +83,6 @@ public enum JavaGcCleanerWrapper {
"clean"),
new PendingCleanersRunnerProvider(
name,
-   reflectionUtils,
"tryHandlePending",
LEGACY_WAIT_FOR_REFERENCE_PROCESSING_ARGS,

LEGACY_WAIT_FOR_REFERENCE_PROCESSING_ARG_TYPES));
@@ -126,7 +125,6 @@ public enum JavaGcCleanerWrapper {
"clean"),
new PendingCleanersRunnerProvider(
name,
-   reflectionUtils,
"waitForReferenceProcessing",
JAVA9_WAIT_FOR_REFERENCE_PROCESSING_ARGS,
JAVA9_WAIT_FOR_REFERENCE_PROCESSING_ARG_TYPES));
@@ -189,12 +187,13 @@ public enum JavaGcCleanerWrapper {
private static class CleanerManager {
private final String cleanerName;
private final CleanerFactory cleanerFactory;
+   @Nullable
private final PendingCleanersRunner pendingCleanersRunner;
 
private CleanerManager(
String cleanerName,
CleanerFactory cleanerFactory,
-   PendingCleanersRunner pendingCleanersRunner) {
+   @Nullable PendingCleanersRunner 
pendingCleanersRunner) {
this.cleanerName = cleanerName;
this.cleanerFactory = cleanerFactory;
this.pendingCleanersRunner = pendingCleanersRunner;
@@ -205,7 +204,7 @@ public enum JavaGcCleanerWrapper {
}
 
private boolean tryRunPendingCleaners() throws 
InterruptedException {
-   return pendingCleanersRunner.tryRunPendingCleaners();
+   return pendingCleanersRunner != null && 
pendingCleanersRunner.tryRunPendingCleaners();
}
 
@Override
@@ -303,32 +302,38 @@ public enum JavaGcCleanerWrapper {
private static class PendingCleanersRunnerProvider {
private static final String REFERENCE_CLASS = 
"java.lang.ref.Reference";
private final String cleanerName;
-   private final ReflectionUtils reflectionUtils;
private final String waitForReferenceProcessingName;
private final Object[] waitForReferenceProcessingArgs;
private final Class[] waitForReferenceProcessingArgTypes;
 
private PendingCleanersRunnerProvider(
String cleanerName,
-   ReflectionUtils reflectionUtils,
String waitForReferenceProcessingName,
Object[] waitForReferenceProcessingArgs,
Class[] waitForReferenceProcessingArgTypes) {
this.cleanerName = cleanerName;
-   this.reflectionUtils = reflectionUtils;
this.waitForReferenceProcessingName = 
waitForReferenceProcessingName;
this.waitForReferenceProcessingArgs = 
waitForReferenc
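
The two FLINK-18581 commits above share one pattern: resolve a version-dependent, possibly-absent method once, and degrade gracefully (return `false`, log a warning) instead of failing when it is missing. A loose, hypothetical Python analog of that pattern — illustration only, not Flink code:

```python
import warnings

def find_optional_method(obj, name):
    """Resolve a method that may not exist in this runtime version.

    Loosely mirrors JavaGcCleanerWrapper's reflective lookup: when the
    method is absent, warn and return None instead of raising.
    """
    method = getattr(obj, name, None)
    if method is None:
        warnings.warn(f"{name} is unavailable in this runtime; "
                      "consider upgrading -- falling back to best effort")
    return method

class LegacyRuntime:       # stands in for a JVM older than 8u72
    pass

class ModernRuntime:       # stands in for a JVM with Reference#tryHandlePending
    def try_handle_pending(self):
        return True

def try_run_pending_cleaners(runtime):
    # After the fix: if no runner could be resolved, report "nothing
    # cleaned" (False) rather than failing on a missing method.
    runner = find_optional_method(runtime, "try_handle_pending")
    return runner() if runner is not None else False
```

The `@Nullable pendingCleanersRunner` null check in the diff plays the same role as the `runner is not None` guard here.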

[flink] branch release-1.11 updated: [FLINK-18646] Verify memory manager empty in a separate thread with larger timeout

2020-07-28 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.11 by this push:
 new bcc9708  [FLINK-18646] Verify memory manager empty in a separate 
thread with larger timeout
bcc9708 is described below

commit bcc97082639280ab14f465463fb07b27167c37e3
Author: Andrey Zagrebin 
AuthorDate: Tue Jul 21 19:17:40 2020 +0300

[FLINK-18646] Verify memory manager empty in a separate thread with larger 
timeout

UnsafeMemoryBudget#verifyEmpty, called on slot freeing, needs time to wait on GC of all allocated/released managed memory.
If there are a lot of segments to GC, the check can take a while to finish. If slot freeing happens in the RPC thread, waiting for GC can block it and the TM risks missing its heartbeat.

Another problem is that, after UnsafeMemoryBudget#RETRIGGER_GC_AFTER_SLEEPS, System.gc() is called on each attempt to run a cleaner, even if there are already detected cleaners to run. This triggers a lot of unnecessary GCs in the background.

The PR offloads the verification into a separate thread and calls System.gc() only if memory cannot be reserved and there are still no cleaners to run after a long wait. The timeout for normal memory reservation is increased to 2 seconds; the full reservation, used for verification, gets a 2 minute timeout.

This closes #12980.
---
 .../flink/runtime/memory/UnsafeMemoryBudget.java   | 41 +++---
 .../runtime/taskexecutor/TaskManagerServices.java  |  9 +++--
 .../flink/runtime/taskexecutor/slot/TaskSlot.java  | 34 +++---
 .../taskexecutor/slot/TaskSlotTableImpl.java   | 13 +--
 .../flink/runtime/memory/MemoryManagerTest.java| 10 ++
 .../runtime/taskexecutor/TaskExecutorTest.java | 16 +++--
 .../runtime/taskexecutor/slot/TaskSlotTest.java|  3 +-
 .../runtime/taskexecutor/slot/TaskSlotUtils.java   |  4 ++-
 8 files changed, 97 insertions(+), 33 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/memory/UnsafeMemoryBudget.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/memory/UnsafeMemoryBudget.java
index a85f40e..8063cd7 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/memory/UnsafeMemoryBudget.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/memory/UnsafeMemoryBudget.java
@@ -35,11 +35,8 @@ import java.util.concurrent.atomic.AtomicLong;
  * continues to process any ready cleaners making {@link #MAX_SLEEPS} attempts 
before throwing {@link OutOfMemoryError}.
  */
 class UnsafeMemoryBudget {
-   // max. number of sleeps during try-reserving with exponentially
-   // increasing delay before throwing OutOfMemoryError:
-   // 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 (total 1023 ms ~ 1 s)
-   // which means that MemoryReservationException will be thrown after 1 s 
of trying
-   private static final int MAX_SLEEPS = 10;
+   private static final int MAX_SLEEPS = 11; // 2^11 - 1 = (2 x 1024) - 1 
ms ~ 2 s total sleep duration
+   private static final int MAX_SLEEPS_VERIFY_EMPTY = 17; // 2^17 - 1 = 
(128 x 1024) - 1 ms ~ 2 min total sleep duration
private static final int RETRIGGER_GC_AFTER_SLEEPS = 9; // ~ 0.5 sec
 
private final long totalMemorySize;
@@ -61,7 +58,9 @@ class UnsafeMemoryBudget {
 
boolean verifyEmpty() {
try {
-   reserveMemory(totalMemorySize);
+   // we wait longer than during the normal reserveMemory 
as we have to GC all memory,
+   // allocated by task, to perform the verification
+   reserveMemory(totalMemorySize, MAX_SLEEPS_VERIFY_EMPTY);
} catch (MemoryReservationException e) {
return false;
}
@@ -74,8 +73,26 @@ class UnsafeMemoryBudget {
 *
 * Adjusted version of {@link java.nio.Bits#reserveMemory(long, 
int)} taken from Java 11.
 */
-   @SuppressWarnings({"OverlyComplexMethod", "JavadocReference", 
"NestedTryStatement"})
void reserveMemory(long size) throws MemoryReservationException {
+   reserveMemory(size, MAX_SLEEPS);
+   }
+
+   /**
+* Reserve memory of certain size if it is available.
+*
+* If the method cannot reserve immediately, it tries to process the 
phantom GC cleaners queue by
+* calling {@link JavaGcCleanerWrapper#tryRunPendingCleaners()}. If it 
does not help,
+* the method calls {@link System#gc} and tries again to reserve. If it 
still cannot reserve,
+* it tries to process the phantom GC cleaners queue. If there are no 
cleaners to process,
+* the method sleeps the {@code maxSleeps} number of times, starting 1 
ms and each time doubling
+* the sl

[flink] branch master updated (5dccc99 -> 3d056c8)

2020-07-28 Thread azagrebin
This is an automated email from the ASF dual-hosted git repository.

azagrebin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 5dccc99  [FLINK-18656][network,metrics] Fix startDelay metric for 
unaligned checkpoints
 add 3d056c8  [FLINK-18646] Verify memory manager empty in a separate 
thread with larger timeout

No new revisions were added by this update.

Summary of changes:
 .../flink/runtime/memory/UnsafeMemoryBudget.java   | 41 +++---
 .../runtime/taskexecutor/TaskManagerServices.java  |  9 +++--
 .../flink/runtime/taskexecutor/slot/TaskSlot.java  | 34 +++---
 .../taskexecutor/slot/TaskSlotTableImpl.java   | 13 +--
 .../flink/runtime/memory/MemoryManagerTest.java| 10 ++
 .../runtime/taskexecutor/TaskExecutorTest.java | 16 +++--
 .../runtime/taskexecutor/slot/TaskSlotTest.java|  3 +-
 .../runtime/taskexecutor/slot/TaskSlotUtils.java   |  4 ++-
 8 files changed, 97 insertions(+), 33 deletions(-)
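
The timeouts mentioned in the FLINK-18646 commit message fall out of the doubling sleep schedule in `UnsafeMemoryBudget`: sleeping 1, 2, 4, ... ms for `maxSleeps` rounds totals 2^maxSleeps - 1 ms, i.e. ~2 s for 11 rounds and ~2 min for 17. A simplified sketch of that retry loop (hypothetical signature, not the actual Flink code):

```python
import time

def reserve_with_backoff(try_reserve, max_sleeps=11, sleep=time.sleep):
    """Retry `try_reserve` with exponentially growing sleeps of 1, 2, 4, ... ms.

    Worst-case total sleep is (2**max_sleeps - 1) ms: ~2 s for max_sleeps=11
    (normal reservation) and ~2 min for max_sleeps=17 (verifyEmpty), matching
    the constants in the diff above.
    """
    delay_ms = 1
    for _ in range(max_sleeps):
        if try_reserve():
            return True
        sleep(delay_ms / 1000.0)  # in Flink, this is where GC/cleaners get a chance to run
        delay_ms *= 2
    return try_reserve()          # one last attempt before giving up
```

2**11 - 1 = 2047 ms and 2**17 - 1 = 131071 ms, which is where the "~2 s" and "~2 min" figures in the commit message come from.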



[flink-statefun] 08/10: [hotfix] [docs] Fix typo in packaging.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit 240139661e737e1a9f9ad6e29967c7beeb24be7d
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 11:02:22 2020 +0200

[hotfix] [docs] Fix typo in packaging.md
---
 docs/deployment-and-operations/packaging.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/deployment-and-operations/packaging.md 
b/docs/deployment-and-operations/packaging.md
index 6e34f1d..6c9e5ae 100644
--- a/docs/deployment-and-operations/packaging.md
+++ b/docs/deployment-and-operations/packaging.md
@@ -50,7 +50,7 @@ COPY module.yaml /opt/statefun/modules/remote/module.yaml
 {% if site.is_stable %}
 
The Flink community is currently waiting for the official Docker images 
to be published to Docker Hub.
-   In the meantime, Ververica has volunteered to make Stateful Function's 
images available via their public registry: 
+   In the meantime, Ververica has volunteered to make Stateful Functions' 
images available via their public registry: 
 

FROM 
ververica/flink-statefun:{{ site.version }}



[flink-statefun] 01/10: [hotfix] [docs] Fix typos in python_walkthrough.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit 7abe0a417d723de8257d4a32a96f9fc858606e84
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 10:15:03 2020 +0200

[hotfix] [docs] Fix typos in python_walkthrough.md

* Fix typos
* Make snippets consistent
* Add link to Python SDK in further work section
---
 docs/getting-started/python_walkthrough.md | 18 ++
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/docs/getting-started/python_walkthrough.md b/docs/getting-started/python_walkthrough.md
index cf8ef65..920010d 100644
--- a/docs/getting-started/python_walkthrough.md
+++ b/docs/getting-started/python_walkthrough.md
@@ -170,16 +170,16 @@ from statefun import StatefulFunctions
 functions = StatefulFunctions()
 
 @functions.bind("example/greeter")
-def greet(context, message: GreetRequest):
+def greet(context, greet_request: GreetRequest):
 response = GreetResponse()
-response.name = message.name
-response.greeting = "Hello {}".format(message.name)
+response.name = greet_request.name
+response.greeting = "Hello {}".format(greet_request.name)
 
-egress_message = kafka_egress_record(topic="greetings", key=message.name, value=response)
+egress_message = kafka_egress_record(topic="greetings", key=greet_request.name, value=response)
 context.pack_and_send_egress("example/greets", egress_message)
 {% endhighlight %} 
 
-For each message, a response is constructed and sent to a kafka topic call `greetings` partitioned by `name`.
+For each message, a response is constructed and sent to a Kafka topic called `greetings` partitioned by `name`.
 The `egress_message` is sent to a an `egress` named `example/greets`.
 This identifier points to a particular Kafka cluster and is configured on deployment below.
 
@@ -211,7 +211,7 @@ For each user, functions can now track how many times they have been seen.
 
 {% highlight python %}
 @functions.bind("example/greeter")
-def greet(context, greet_message: GreetRequest):
+def greet(context, greet_request: GreetRequest):
 state = context.state('seen_count').unpack(SeenCount)
 if not state:
 state = SeenCount()
@@ -226,7 +226,7 @@ def greet(context, greet_message: GreetRequest):
 context.pack_and_send_egress("example/greets", egress_message)
 {% endhighlight %}
 
-The state, `seen_count` is always scoped to the current name so it can track each user independently.
+The state `seen_count` is always scoped to the current name so it can track each user independently.
 
 ## Wiring It All Together
 
@@ -243,7 +243,7 @@ from statefun import RequestReplyHandler
 
 functions = StatefulFunctions()
 
-@functions.bind("walkthrough/greeter")
+@functions.bind("example/greeter")
 def greeter(context, message: GreetRequest):
 pass
 
@@ -362,6 +362,8 @@ docker-compose logs -f event-generator
 This Greeter never forgets a user.
 Try and modify the function so that it will reset the ``seen_count`` for any user that spends more than 60 seconds without interacting with the system.
 
+Check out the [Python SDK]({{ site.baseurl }}/sdk/python.html) page for more information on how to achieve this.
+
 ## Full Application 
 
 {% highlight python %}



[flink-statefun] 02/10: [hotfix] [docs] Add extension in project-setup.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit d1a1d3107bd03f076b58aaf9299c06099d540c9c
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 10:18:37 2020 +0200

[hotfix] [docs] Add extension in project-setup.md
---
 docs/getting-started/project-setup.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/getting-started/project-setup.md b/docs/getting-started/project-setup.md
index b670de1..34fc281 100644
--- a/docs/getting-started/project-setup.md
+++ b/docs/getting-started/project-setup.md
@@ -71,7 +71,7 @@ $ tree statefun-quickstart/
 The project contains four files:
 
 * ``pom.xml``: A pom file with the basic dependencies to start building a Stateful Functions application.
-* ``Module``: The entry point for the application.
+* ``Module.java``: The entry point for the application.
 * ``org.apache.flink.statefun.sdk.spi.StatefulFunctionModule``: A service entry for the runtime to find the module.
 * ``Dockerfile``: A Dockerfile to quickly build a Stateful Functions image ready to deploy.
 



[flink-statefun] 07/10: [hotfix] [docs] Fix typos in logical.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit a6ef6ebf3c2bdc3908f820a9d4e77e50aae34dd8
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 11:02:08 2020 +0200

[hotfix] [docs] Fix typos in logical.md
---
 docs/concepts/logical.md | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/concepts/logical.md b/docs/concepts/logical.md
index f7a462c..4ed47fe 100644
--- a/docs/concepts/logical.md
+++ b/docs/concepts/logical.md
@@ -34,8 +34,8 @@ Users are encouraged to model their applications as granularly as possible, base
 ## Function Address
 
 In a local environment, the address of an object is the same as a reference to it.
-But in a Stateful Function's application, function instances are virtual and their runtime location is not exposed to the user.
-Instead, an ``Address`` is used to reference a specific stateful function in the system..
+But in a Stateful Functions application, function instances are virtual and their runtime location is not exposed to the user.
+Instead, an ``Address`` is used to reference a specific stateful function in the system.
 
 
 
@@ -43,14 +43,14 @@ Instead, an ``Address`` is used to reference a specific stateful function in the
 
 An address is made of two components, a ``FunctionType`` and ``ID``.
 A function type is similar to a class in an object-oriented language; it declares what sort of function the address references.
-The id is a primary key, which scopes the function call to a specific instance of the function type.
+The ID is a primary key, which scopes the function call to a specific instance of the function type.
 
 When a function is being invoked, all actions - including reads and writes of persisted states - are scoped to the current address.
 
-For example, imagine there was a Stateful Function application to track the inventory of a warehouse.
+For example, imagine there was a Stateful Functions application to track the inventory of a warehouse.
 One possible implementation could include an ``Inventory`` function that tracks the number units in stock for a particular item; this would be the function type.
 There would then be one logical instance of this type for each SKU the warehouse manages.
-If it were clothing, there might be an instance for shirts and another for pants; "shirt" and "pant" would be two ids.
+If it were clothing, there might be an instance for shirts and another for pants; "shirt" and "pant" would be two IDs.
 Each instance may be interacted with and messaged independently.
 The application is free to create as many instances as there are types of items in inventory.
 



[flink-statefun] 03/10: [hotfix] [docs] Make differences sub sections in application-building-blocks.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit 65b83ddb504dc0ac6e56bb761f6430a7d8f5511f
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 10:13:21 2020 +0200

[hotfix] [docs] Make differences sub sections in application-building-blocks.md

These sections are treated as bullet points about stateful functions. Thus, it
feels more natural to have them as subsections.
---
 docs/concepts/application-building-blocks.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/concepts/application-building-blocks.md b/docs/concepts/application-building-blocks.md
index 96a8461..e2861df 100644
--- a/docs/concepts/application-building-blocks.md
+++ b/docs/concepts/application-building-blocks.md
@@ -55,7 +55,7 @@ Instead of building up a static dataflow DAG, these functions can communicate wi
 If you are familiar with actor programming, this does share certain similarities in its ability to dynamically message between components.
 However, there are a number of significant differences.
 
-## Persisted States
+### Persisted States
 
 The first is that all functions have locally embedded state, known as persisted states.
 
@@ -66,7 +66,7 @@ The first is that all functions have locally embedded state, known as persisted
 One of Apache Flink's core strengths is its ability to provide fault-tolerant local state.
 When inside a function, while it is performing some computation, you are always working with local state in local variables.
 
-## Fault Tolerance
+### Fault Tolerance
 
 For both state and messaging, Stateful Function's is still able to provide the exactly-once guarantees users expect from a modern data processessing framework.
 
@@ -78,7 +78,7 @@ In the case of failure, the entire state of the world (both persisted states and
 
 These guarantees are provided with no database required, instead Stateful Function's leverages Apache Flink's proven snapshotting mechanism.
 
-## Event Egress
+### Event Egress
 
 Finally, applications can output data to external systems via event egress's.
 



[flink-statefun] 10/10: [hotfix] [docs] Fix typo in modules.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit d22a4484584a3eac3b4437baee00f523308183b5
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 11:02:41 2020 +0200

[hotfix] [docs] Fix typo in modules.md
---
 docs/sdk/modules.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/sdk/modules.md b/docs/sdk/modules.md
index 5a33482..bea3aa6 100644
--- a/docs/sdk/modules.md
+++ b/docs/sdk/modules.md
@@ -24,7 +24,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Stateful Function applications are composed of one or more ``Modules``.
+Stateful Functions applications are composed of one or more modules.
 A module is a bundle of functions that are loaded by the runtime and available to be messaged.
 Functions from all loaded modules are multiplexed and free to message each other arbitrarily.
 
@@ -72,7 +72,7 @@ org.apache.flink.statefun.docs.BasicFunctionModule
 
 Remote modules are run as external processes from the Apache Flink® runtime; in the same container, as a sidecar, or other external location.
 
-This module type can support any number of language SDK's.
+This module type can support any number of language SDKs.
 Remote modules are registered with the system via ``YAML`` configuration files.
 
 ### Specification



[flink-statefun] 05/10: [hotfix] [docs] Fix typos in java_walkthrough.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit ea7c3f690d5f5f9dad3b9abc1a6d8f5c4c5c7c7f
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 10:31:42 2020 +0200

[hotfix] [docs] Fix typos in java_walkthrough.md

* Fix typos
* Add link to Java SDK in further work section
---
 docs/getting-started/java_walkthrough.md | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/docs/getting-started/java_walkthrough.md b/docs/getting-started/java_walkthrough.md
index cec130a..9f650ce 100644
--- a/docs/getting-started/java_walkthrough.md
+++ b/docs/getting-started/java_walkthrough.md
@@ -93,18 +93,19 @@ private static String greetText(String name, int seen) {
 return String.format("Happy to see you once again %s !", name);
 default:
 return String.format("Hello at the %d-th time %s", seen + 1, name);
+}
 }
 {% endhighlight %}
 
 ## Routing Messages
 
-To send a user a personalized greeting, the system needs to keep track of how many times it has seen each user so far.
+To send a personalized greeting to a user, the system needs to keep track of how many times it has seen each user so far.
 Speaking in general terms, the simplest solution would be to create one function for every user and independently track the number of times they have been seen. Using most frameworks, this would be prohibitively expensive.
 However, stateful functions are virtual and do not consume any CPU or memory when not actively being invoked.
 That means your application can create as many functions as necessary — in this case, users — without worrying about resource consumption.
 
 Whenever data is consumed from an external system (or [ingress]({{ site.baseurl }}/io-module/index.html#ingress)), it is routed to a specific function based on a given function type and identifier.
-The function type represents the Class of function to be invoked, such as the Greeter function, while the identifier (``GreetRequest#getWho``) scopes the call to a specific virtual instance based on some key.
+The function type represents the class of function to be invoked, such as the Greeter function, while the identifier (``GreetRequest#getWho``) scopes the call to a specific virtual instance based on some key.
 
 {% highlight java %}
 package org.apache.flink.statefun.examples.greeter;
@@ -200,3 +201,5 @@ docker-compose exec kafka-broker kafka-console-consumer.sh \
 
 This Greeter never forgets a user.
 Try and modify the function so that it will reset the ``count`` for any user that spends more than 60 seconds without interacting with the system.
+
+Check out the [Java SDK]({{ site.baseurl }}/sdk/java.html) page for more information on how to achieve this.



[flink-statefun] 09/10: [hotfix] [docs] Fix typo in java.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit 5d15efcdd92d078941d5d8504a8aabb641109bca
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 11:02:32 2020 +0200

[hotfix] [docs] Fix typo in java.md
---
 docs/sdk/java.md | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/docs/sdk/java.md b/docs/sdk/java.md
index 05e7192..b5903f5 100644
--- a/docs/sdk/java.md
+++ b/docs/sdk/java.md
@@ -27,7 +27,7 @@ under the License.
 Stateful functions are the building blocks of applications; they are atomic 
units of isolation, distribution, and persistence.
 As objects, they encapsulate the state of a single entity (e.g., a specific 
user, device, or session) and encode its behavior.
 Stateful functions can interact with each other, and external systems, through 
message passing.
-The Java SDK is supported as an [embedded_module]({{ site.baseurl }}/sdk/modules.html#embedded-module).
+The Java SDK is supported as an [embedded module]({{ site.baseurl }}/sdk/modules.html#embedded-module).
 
 To get started, add the Java SDK as a dependency to your application.
 
@@ -96,16 +96,16 @@ public class FnMatchGreeter extends StatefulMatchFunction {
.predicate(Employee.class, this::greetEmployee);
}
 
-   private void greetManager(Context context, Employee message) {
-   System.out.println("Hello manager " + message.getEmployeeId());
+   private void greetCustomer(Context context, Customer message) {
+   System.out.println("Hello customer " + message.getName());
}
 
private void greetEmployee(Context context, Employee message) {
System.out.println("Hello employee " + message.getEmployeeId());
}
 
-   private void greetCustomer(Context context, Customer message) {
-   System.out.println("Hello customer " + message.getName());
+   private void greetManager(Context context, Employee message) {
+   System.out.println("Hello manager " + message.getEmployeeId());
}
 }
 {% endhighlight %}
@@ -134,20 +134,20 @@ public class FnMatchGreeterWithCatchAll extends StatefulMatchFunction {
.otherwise(this::catchAll);
}
 
-   private void catchAll(Context context, Object message) {
-   System.out.println("Hello unexpected message");
+   private void greetCustomer(Context context, Customer message) {
+   System.out.println("Hello customer " + message.getName());
}
 
-   private void greetManager(Context context, Employee message) {
-   System.out.println("Hello manager");
+   private void greetEmployee(Context context, Employee message) {
+   System.out.println("Hello employee " + message.getEmployeeId());
}
 
-   private void greetEmployee(Context context, Employee message) {
-   System.out.println("Hello employee");
+   private void greetManager(Context context, Employee message) {
+   System.out.println("Hello manager " + message.getEmployeeId());
}
 
-   private void greetCustomer(Context context, Customer message) {
-   System.out.println("Hello customer");
+   private void catchAll(Context context, Object message) {
+   System.out.println("Hello unexpected message");
}
 }
 {% endhighlight %}
@@ -162,7 +162,7 @@ Finally, if a catch-all exists, it will be executed or an ``IllegalStateExceptio
 
 ## Function Types and Messaging
 
-In Java, function types are defined as a _stringly_ typed reference containing a namespace and name.
+In Java, function types are defined as logical pointers composed of a namespace and name.
 The type is bound to the implementing class in the [module]({{ site.baseurl }}/sdk/modules.html#embedded-module) definition.
 Below is an example function type for the hello world function.
 
@@ -228,7 +228,7 @@ public class FnDelayedMessage implements StatefulFunction {
 ## Completing Async Requests
 
 When interacting with external systems, such as a database or API, one needs to take care that communication delay with the external system does not dominate the application’s total work.
-Stateful Functions allows registering a java ``CompletableFuture`` that will resolve to a value at some point in the future.
+Stateful Functions allows registering a Java ``CompletableFuture`` that will resolve to a value at some point in the future.
 Future's are registered along with a metadata object that provides additional context about the caller.
 
 When the future completes, either successfully or exceptionally, the caller function type and id will be invoked with a ``AsyncOperationResult``.
@@ -303,7 +303,7 @@ The data is always scoped to a specific function type and identifier.
 Below is a stateful functio

[flink-statefun] 06/10: [hotfix] [docs] Fix typos in distributed_architecture.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit e8753886c50eac889ff86df34c67c68138c7caa4
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 11:01:51 2020 +0200

[hotfix] [docs] Fix typos in distributed_architecture.md
---
 docs/concepts/distributed_architecture.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/concepts/distributed_architecture.md b/docs/concepts/distributed_architecture.md
index 85b8496..8301e07 100755
--- a/docs/concepts/distributed_architecture.md
+++ b/docs/concepts/distributed_architecture.md
@@ -52,7 +52,7 @@ In addition to the Apache Flink processes, a full deployment requires [ZooKeeper
 
 ## Logical Co-location, Physical Separation
 
-A core principle of many Stream Processors is that application logic and the application state must be co-located. That approach is the basis for their out-of-the box consistency. Stateful Function takes a unique approach to that by *logically co-locating* state and compute, but allowing to *physically separate* them.
+A core principle of many Stream Processors is that application logic and the application state must be co-located. That approach is the basis for their out-of-the box consistency. Stateful Functions takes a unique approach to that by *logically co-locating* state and compute, but allowing to *physically separate* them.
 
   - *Logical co-location:* Messaging, state access/updates and function invocations are managed tightly together, in the same way as in Flink's DataStream API. State is sharded by key, and messages are routed to the state by key. There is a single writer per key at a time, also scheduling the function invocations.
 
@@ -67,7 +67,7 @@ The stateful functions themselves can be deployed in various ways that trade off
 
 *Remote Functions* use the above-mentioned principle of *physical separation* while maintaining *logical co-location*. The state/messaging tier (i.e., the Flink processes) and the function tier are deployed, managed, and scaled independently.
 
-Function invocations happen through an HTTP / gRPC protocol and go through a service that routes invocation requests to any available endpoint, for example a Kubernetes (load-balancing) service, the AWS request gateway for Lambda, etc. Because invocations are self-contained (contain message, state, access to timers, etc.) the target functions can treated like any stateless application.
+Function invocations happen through an HTTP / gRPC protocol and go through a service that routes invocation requests to any available endpoint, for example a Kubernetes (load-balancing) service, the AWS request gateway for Lambda, etc. Because invocations are self-contained (contain message, state, access to timers, etc.) the target functions can be treated like any stateless application.
 
 




[flink-statefun] 04/10: [hotfix] [docs] Fix typos in application-building-blocks.md

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git

commit e8e5b347e7d6ee52c15726de319419650c8dd1a3
Author: Ufuk Celebi 
AuthorDate: Tue Jun 30 10:25:15 2020 +0200

[hotfix] [docs] Fix typos in application-building-blocks.md

* Plural of egress seems to be egresses (https://en.wiktionary.org/wiki/egress)
---
 docs/concepts/application-building-blocks.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/concepts/application-building-blocks.md b/docs/concepts/application-building-blocks.md
index e2861df..81d7628 100644
--- a/docs/concepts/application-building-blocks.md
+++ b/docs/concepts/application-building-blocks.md
@@ -68,7 +68,7 @@ When inside a function, while it is performing some computation, you are always
 
 ### Fault Tolerance
 
-For both state and messaging, Stateful Function's is still able to provide the exactly-once guarantees users expect from a modern data processessing framework.
+For both state and messaging, Stateful Functions is able to provide the exactly-once guarantees users expect from a modern data processessing framework.
 
 
 
@@ -80,7 +80,7 @@ These guarantees are provided with no database required, instead Stateful Functi
 
 ### Event Egress
 
-Finally, applications can output data to external systems via event egress's.
+Finally, applications can output data to external systems via event egresses.
 
 
 



[flink-statefun] branch master updated (289c30e -> d22a448)

2020-07-28 Thread tzulitai
This is an automated email from the ASF dual-hosted git repository.

tzulitai pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink-statefun.git.


from 289c30e  [FLINK-17954] [core] Demux legacy remote function state on restore
 new 7abe0a4  [hotfix] [docs] Fix typos in python_walkthrough.md
 new d1a1d31  [hotfix] [docs] Add extension in project-setup.md
 new 65b83dd  [hotfix] [docs] Make differences sub sections in application-building-blocks.md
 new e8e5b34  [hotfix] [docs] Fix typos in application-building-blocks.md
 new ea7c3f6  [hotfix] [docs] Fix typos in java_walkthrough.md
 new e875388  [hotfix] [docs] Fix typos in distributed_architecture.md
 new a6ef6eb  [hotfix] [docs] Fix typos in logical.md
 new 2401396  [hotfix] [docs] Fix typo in packaging.md
 new 5d15efc  [hotfix] [docs] Fix typo in java.md
 new d22a448  [hotfix] [docs] Fix typo in modules.md

The 10 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/concepts/application-building-blocks.md | 10 
 docs/concepts/distributed_architecture.md    |  4 ++--
 docs/concepts/logical.md                     | 10 
 docs/deployment-and-operations/packaging.md  |  2 +-
 docs/getting-started/java_walkthrough.md     |  7 --
 docs/getting-started/project-setup.md        |  2 +-
 docs/getting-started/python_walkthrough.md   | 18 +++---
 docs/sdk/java.md                             | 36 ++--
 docs/sdk/modules.md                          |  4 ++--
 9 files changed, 49 insertions(+), 44 deletions(-)