[flink] branch master updated (592fba6 -> d1d7853)

2019-10-09 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 592fba6  [FLINK-13360][docs] Add documentation for HBase connector for Table API & SQL
 add d1d7853  [FLINK-13361][docs] Add documentation for JDBC connector for Table API & SQL

No new revisions were added by this update.

Summary of changes:
 docs/dev/table/connect.md | 137 ++
 1 file changed, 137 insertions(+)



[flink] branch release-1.9 updated: [FLINK-13361][docs] Add documentation for JDBC connector for Table API & SQL

2019-10-09 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new f09ff5e  [FLINK-13361][docs] Add documentation for JDBC connector for Table API & SQL
f09ff5e is described below

commit f09ff5eb6d1e01ea77e87c6b8ba9d5752d492444
Author: JingsongLi 
AuthorDate: Thu Oct 10 12:19:33 2019 +0800

[FLINK-13361][docs] Add documentation for JDBC connector for Table API & SQL

This closes #9802
---
 docs/dev/table/connect.md | 137 ++
 1 file changed, 137 insertions(+)

diff --git a/docs/dev/table/connect.md b/docs/dev/table/connect.md
index 860e7e2..f4f73e0 100644
--- a/docs/dev/table/connect.md
+++ b/docs/dev/table/connect.md
@@ -50,6 +50,7 @@ The following tables list all available connectors and formats. Their mutual com
 | Apache Kafka  | 0.11                | `flink-connector-kafka-0.11` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka-0.11{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka-0.11{{site.scala_version_suffix}}-{{site.version}}.jar) |
 | Apache Kafka  | 0.11+ (`universal`) | `flink-connector-kafka`      | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka{{site.scala_version_suffix}}-{{site.version}}.jar) |
 | HBase         | 1.4.3               | `flink-hbase`                | [Download](http://central.maven.org/maven2/org/apache/flink/flink-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
+| JDBC          |                     | `flink-jdbc`                 | [Download](http://central.maven.org/maven2/org/apache/flink/flink-jdbc{{site.scala_version_suffix}}/{{site.version}}/flink-jdbc{{site.scala_version_suffix}}-{{site.version}}.jar) |
 
 ### Formats
 
@@ -1160,6 +1161,142 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### JDBC Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The JDBC connector allows for reading from and writing to a JDBC database.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging UPSERT/DELETE messages with the external system using a [key defined by the query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append mode](#update-modes) for exchanging only INSERT messages with the external system.
+
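For append-only pipelines, such a sink can also be assembled programmatically; a minimal Java sketch, assuming the `flink-jdbc` `JDBCAppendTableSink` builder API (the `(id, name)` column layout and the INSERT statement are illustrative, not taken from this commit):

{% highlight java %}
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.io.jdbc.JDBCAppendTableSink;

// Append-mode JDBC sink; url, table and driver mirror the YAML example
// below, the (id, name) schema is an assumption for illustration.
JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
    .setDrivername("com.mysql.jdbc.Driver")
    .setDBUrl("jdbc:mysql://localhost:3306/flink-test")
    .setQuery("INSERT INTO jdbc_table_name (id, name) VALUES (?, ?)")
    .setParameterTypes(BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)
    .build();
{% endhighlight %}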
+To use the JDBC connector, you need to choose an actual driver. The currently supported drivers are listed below:
+
+**Supported Drivers:**
+
+| Name       | Group Id         | Artifact Id          | JAR |
+| :--------- | :--------------- | :------------------- | :-- |
+| MySQL      | mysql            | mysql-connector-java | [Download](http://central.maven.org/maven2/mysql/mysql-connector-java/) |
+| PostgreSQL | org.postgresql   | postgresql           | [Download](https://jdbc.postgresql.org/download.html) |
+| Derby      | org.apache.derby | derby                | [Download](http://db.apache.org/derby/derby_downloads.html) |
+
+
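Whichever driver is picked, its JAR has to be on the classpath of the job; a quick, hypothetical sanity check in Java:

{% highlight java %}
// Throws ClassNotFoundException if the driver JAR is missing from the
// classpath; the class name matches the 'driver' option shown below.
Class.forName("com.mysql.jdbc.Driver");
{% endhighlight %}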
+
+The connector can be defined as follows:
+
+
+
+{% highlight yaml %}
+connector:
+  type: jdbc
+  url: "jdbc:mysql://localhost:3306/flink-test" # required: JDBC DB url
+  table: "jdbc_table_name"        # required: jdbc table name
+  driver: "com.mysql.jdbc.Driver" # optional: the class name of the JDBC driver to use to connect to this URL.
+                                  # If not set, it will automatically be derived from the URL.
+
+  username: "name"                # optional: jdbc user name and password
+  password: "password"
+
+  read: # scan options, optional, used when reading from a table
+    partition: # These options must all be specified if any of them is specified. In addition,
+               # partition.num must be specified. They describe how to partition the table when
+               # reading in parallel from multiple tasks. partition.column must be a numeric,
+               # date, or timestamp column from the table in question. Note that lower-bound
+               # and upper-bound are only used to decide the partition stride, not to filter
+               # the rows in the table, so all rows in the table will be partitioned and
+               # returned. These options apply only to reading.
+      column: "column_name" # optional: name of the column used for partitioning the input
+      num: 50               # optional: the number of partitions
+      lower-bound: 500      # optional: the smallest value of the first partition
+      upper-bound: 1000     # optional: the largest value of the last partition
+    fetch-size: 100
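A hypothetical illustration of how the partition options above interact, using the numbers from the YAML example (nothing here is mandated by the connector):

{% highlight java %}
// With lower-bound = 500, upper-bound = 1000 and num = 50, each parallel
// read covers one stride of the partition column. Rows outside the bounds
// are still read; the bounds only shape the stride.
long lowerBound = 500L;
long upperBound = 1000L;
int numPartitions = 50;
long stride = (upperBound - lowerBound) / numPartitions; // 10 values per partition
{% endhighlight %}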

[flink] branch release-1.9 updated: [FLINK-13360][docs] Add documentation for HBase connector for Table API & SQL

2019-10-09 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 42027a4  [FLINK-13360][docs] Add documentation for HBase connector for Table API & SQL
42027a4 is described below

commit 42027a4d9572d329d64f684d7e393ace7b6bd799
Author: JingsongLi 
AuthorDate: Sun Sep 29 14:43:16 2019 +0800

[FLINK-13360][docs] Add documentation for HBase connector for Table API & SQL

This closes #9799
---
 docs/dev/table/connect.md | 85 +++
 1 file changed, 85 insertions(+)

diff --git a/docs/dev/table/connect.md b/docs/dev/table/connect.md
index 5378ae9..860e7e2 100644
--- a/docs/dev/table/connect.md
+++ b/docs/dev/table/connect.md
@@ -49,6 +49,7 @@ The following tables list all available connectors and formats. Their mutual com
 | Apache Kafka  | 0.10                | `flink-connector-kafka-0.10` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka-0.10{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka-0.10{{site.scala_version_suffix}}-{{site.version}}.jar) |
 | Apache Kafka  | 0.11                | `flink-connector-kafka-0.11` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka-0.11{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka-0.11{{site.scala_version_suffix}}-{{site.version}}.jar) |
 | Apache Kafka  | 0.11+ (`universal`) | `flink-connector-kafka`      | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-kafka{{site.scala_version_suffix}}-{{site.version}}.jar) |
+| HBase         | 1.4.3               | `flink-hbase`                | [Download](http://central.maven.org/maven2/org/apache/flink/flink-hbase{{site.scala_version_suffix}}/{{site.version}}/flink-hbase{{site.scala_version_suffix}}-{{site.version}}.jar) |
 
 ### Formats
 
@@ -1075,6 +1076,90 @@ CREATE TABLE MyUserTable (
 
 {% top %}
 
+### HBase Connector
+
+Source: Batch
+Sink: Batch
+Sink: Streaming Append Mode
+Sink: Streaming Upsert Mode
+Temporal Join: Sync Mode
+
+The HBase connector allows for reading from and writing to an HBase cluster.
+
+The connector can operate in [upsert mode](#update-modes) for exchanging UPSERT/DELETE messages with the external system using a [key defined by the query](./streaming/dynamic_tables.html#table-to-stream-conversion).
+
+For append-only queries, the connector can also operate in [append mode](#update-modes) for exchanging only INSERT messages with the external system.
+
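For programmatic use from Java, a rough sketch built on the `flink-hbase` `HBaseTableSource` (a sketch against the 1.9 API; the row key, family and qualifier names are assumptions):

{% highlight java %}
import org.apache.flink.addons.hbase.HBaseTableSource;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Point the source at the cluster from the YAML example below.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "localhost:2181");

HBaseTableSource source = new HBaseTableSource(conf, "hbase_table_name");
source.setRowKey("rowkey", String.class);          // row key name/type are illustrative
source.addColumn("cf1", "qualifier1", Long.class); // family/qualifier are illustrative
tableEnv.registerTableSource("hTable", source);    // tableEnv: an existing BatchTableEnvironment
{% endhighlight %}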
+The connector can be defined as follows:
+
+
+
+{% highlight yaml %}
+connector:
+  type: hbase
+  version: "1.4.3" # required: currently only "1.4.3" is supported
+
+  table-name: "hbase_table_name" # required: HBase table name
+
+  zookeeper:
+    quorum: "localhost:2181" # required: HBase Zookeeper quorum configuration
+    znode.parent: "/test"    # optional: the root dir in Zookeeper for the HBase cluster.
+                             # The default value is "/hbase".
+
+  write.buffer-flush:
+    max-size: "10mb" # optional: writing option, the maximum size in memory of buffered rows
+                     # per round trip. This can improve performance for writing data to HBase.
+                     # The default value is "2mb".
+    max-rows: 1000   # optional: writing option, the maximum number of rows to buffer per
+                     # round trip. This can improve performance for writing data to HBase.
+                     # No default value, i.e. by default flushing does not depend on the
+                     # number of buffered rows.
+    interval: "2s"   # optional: writing option, sets an interval after which buffered rows
+                     # are flushed. The default value is "0s", which means no asynchronous
+                     # flush thread will be scheduled.
+{% endhighlight %}
+
+
+
+{% highlight sql %}
+CREATE TABLE MyUserTable (
+  hbase_rowkey_name rowkey_type,
+  hbase_column_family_name1 ROW<...>,
+  hbase_column_family_name2 ROW<...>
+) WITH (
+  'connector.type' = 'hbase', -- required: specify that this table is of type hbase
+
+  'connector.version' = '1.4.3',                   -- required: valid connector versions are "1.4.3"
+
+  'connector.table-name' = 'hbase_table_name',     -- required: hbase table name
+
+  'connector.zookeeper.quorum' = 'localhost:2181', -- required: HBase Zookeeper quorum configuration
+  'connector.zookeeper.znode.parent' = '/test',    -- optional: the root dir in Zookeeper for the HBase cluster.

[flink] branch master updated (df44cce -> 592fba6)

2019-10-09 Thread jark
This is an automated email from the ASF dual-hosted git repository.

jark pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from df44cce  [FLINK-14306][python] Add the code-generated flink_fn_execution_pb2.py to the source code
 add 592fba6  [FLINK-13360][docs] Add documentation for HBase connector for Table API & SQL

No new revisions were added by this update.

Summary of changes:
 docs/dev/table/connect.md | 85 +++
 1 file changed, 85 insertions(+)



[flink-web] branch asf-site updated: Rebuild website

2019-10-09 Thread thw
This is an automated email from the ASF dual-hosted git repository.

thw pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d2ee4f0  Rebuild website
d2ee4f0 is described below

commit d2ee4f0cc63408b86b029019df30b260fb2ff967
Author: Thomas Weise 
AuthorDate: Wed Oct 9 11:50:41 2019 -0700

Rebuild website
---
 content/2019/05/03/pulsar-flink.html   | 3 +++
 content/2019/05/14/temporal-tables.html| 3 +++
 content/2019/05/19/state-ttl.html  | 3 +++
 content/2019/06/05/flink-network-stack.html| 3 +++
 content/2019/06/26/broadcast-state.html| 3 +++
 content/2019/07/23/flink-network-stack-2.html  | 3 +++
 content/blog/index.html| 3 +++
 content/blog/page2/index.html  | 3 +++
 content/blog/page3/index.html  | 3 +++
 content/blog/page4/index.html  | 3 +++
 content/blog/page5/index.html  | 3 +++
 content/blog/page6/index.html  | 3 +++
 content/blog/page7/index.html  | 3 +++
 content/blog/page8/index.html  | 3 +++
 content/blog/page9/index.html  | 3 +++
 content/blog/release_1.0.0-changelog_known_issues.html | 3 +++
 content/blog/release_1.1.0-changelog.html  | 3 +++
 content/blog/release_1.2.0-changelog.html  | 3 +++
 content/blog/release_1.3.0-changelog.html  | 3 +++
 content/community.html | 3 +++
 content/contributing/code-style-and-quality-common.html| 3 +++
 content/contributing/code-style-and-quality-components.html| 3 +++
 content/contributing/code-style-and-quality-formatting.html| 3 +++
 content/contributing/code-style-and-quality-java.html  | 3 +++
 content/contributing/code-style-and-quality-preamble.html  | 3 +++
 content/contributing/code-style-and-quality-pull-requests.html | 3 +++
 content/contributing/code-style-and-quality-scala.html | 3 +++
 content/contributing/contribute-code.html  | 3 +++
 content/contributing/contribute-documentation.html | 3 +++
 content/contributing/how-to-contribute.html| 3 +++
 content/contributing/improve-website.html  | 3 +++
 content/contributing/reviewing-prs.html| 3 +++
 content/documentation.html | 3 +++
 content/downloads.html | 3 +++
 content/ecosystem.html | 7 +++
 content/faq.html   | 3 +++
 content/feature/2019/09/13/state-processor-api.html| 3 +++
 content/features/2017/07/04/flink-rescalable-state.html| 3 +++
 content/features/2018/01/30/incremental-checkpointing.html | 3 +++
 .../features/2018/03/01/end-to-end-exactly-once-apache-flink.html  | 3 +++
 content/features/2019/03/11/prometheus-monitoring.html | 3 +++
 content/flink-applications.html| 3 +++
 content/flink-architecture.html| 3 +++
 content/flink-operations.html  | 3 +++
 content/gettinghelp.html   | 3 +++
 content/index.html | 3 +++
 content/material.html  | 3 +++
 content/news/2014/08/26/release-0.6.html   | 3 +++
 content/news/2014/09/26/release-0.6.1.html | 3 +++
 content/news/2014/10/03/upcoming_events.html   | 3 +++
 content/news/2014/11/04/release-0.7.0.html | 3 +++
 content/news/2014/11/18/hadoop-compatibility.html  | 3 +++
 content/news/2015/01/06/december-in-flink.html | 3 +++
 content/news/2015/01/21/release-0.8.html   | 3 +++
 content/news/2015/02/04/january-in-flink.html  | 3 +++
 content/news/2015/02/09/streaming-example.html | 3 +++
 content/news/2015/03/02/february-2015-in-flink.html| 3 +++
 .../news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html| 3 +++
 content/news/2015/04/07/march-in-flink.html| 3 +++
 content/news/2015/04/13/release-0.9.0-milestone1.html

[flink] branch master updated (0d112f5 -> df44cce)

2019-10-09 Thread hequn
This is an automated email from the ASF dual-hosted git repository.

hequn pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0d112f5  [FLINK-14300][runtime] Cleanup operator threads in case StreamTask fails to allocate operatorChain (#9857)
 add df44cce  [FLINK-14306][python] Add the code-generated flink_fn_execution_pb2.py to the source code

No new revisions were added by this update.

Summary of changes:
 .gitignore |   1 -
 flink-python/README.md |  13 +
 flink-python/pom.xml   |  20 -
 .../pyflink/fn_execution/flink_fn_execution_pb2.py | 509 +
 .../tests/test_flink_fn_execution_pb2_synced.py|  43 ++
 flink-python/pyflink/gen_protos.py |  70 ++-
 .../pyflink/proto/flink-fn-execution.proto |   2 +
 flink-python/pyflink/testing/test_case_utils.py|   3 -
 flink-python/setup.py  |  36 +-
 pom.xml|   1 -
 10 files changed, 623 insertions(+), 75 deletions(-)
 create mode 100644 flink-python/pyflink/fn_execution/flink_fn_execution_pb2.py
 create mode 100644 flink-python/pyflink/fn_execution/tests/test_flink_fn_execution_pb2_synced.py



[flink] branch master updated (71dfb4c -> c910d71)

2019-10-09 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 71dfb4c  [FLINK-14335][docs] Fix ExampleIntegrationTest example
 add c910d71  [hotfix] Update README

No new revisions were added by this update.

Summary of changes:
 README.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)



[flink-web] branch asf-site updated: Add FlinkK8sOperator to ecosystem page, link page from navbar

2019-10-09 Thread mxm
This is an automated email from the ASF dual-hosted git repository.

mxm pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 19d9140  Add FlinkK8sOperator to ecosystem page, link page from navbar
19d9140 is described below

commit 19d9140dfd86a3c21f13ca70d60db09589744f35
Author: Thomas Weise 
AuthorDate: Mon Oct 7 10:52:49 2019 -0700

Add FlinkK8sOperator to ecosystem page, link page from navbar
---
 _data/i18n.yml| 2 ++
 _includes/navbar.html | 3 +++
 ecosystem.md  | 4 
 3 files changed, 9 insertions(+)

diff --git a/_data/i18n.yml b/_data/i18n.yml
index c6365c6..9db8599 100644
--- a/_data/i18n.yml
+++ b/_data/i18n.yml
@@ -10,6 +10,7 @@ en:
 tutorials: Tutorials
 documentation: Documentation
 getting_help: Getting Help
+ecosystem: Ecosystem
 flink_blog: Flink Blog
community_project: Community & Project Info
 how_to_contribute: How to Contribute
@@ -32,6 +33,7 @@ zh:
 tutorials: 教程
 documentation: 文档
 getting_help: 获取帮助
+ecosystem: Ecosystem
 flink_blog: Flink 博客
community_project: 社区 & 项目信息
 how_to_contribute: 如何参与贡献
diff --git a/_includes/navbar.html b/_includes/navbar.html
index 148332a..71539bd 100755
--- a/_includes/navbar.html
+++ b/_includes/navbar.html
@@ -83,6 +83,9 @@
 
 {{ site.data.i18n[page.language].flink_blog }}
 
+            <li>
+              <a href="{{ site.baseurl }}/ecosystem.html">{{ site.data.i18n[page.language].ecosystem }}</a>
+            </li>
 
 
 
diff --git a/ecosystem.md b/ecosystem.md
index fa2a362..96ef61c 100644
--- a/ecosystem.md
+++ b/ecosystem.md
@@ -100,3 +100,7 @@ A small [WordCount example](https://github.com/mjsax/flink-external/tree/master/
 **Tink temporal graph library**
 
 [Tink](https://github.com/otherwise777/Temporal_Graph_library) is a temporal graph library built on top of Flink. It allows for temporal graph analytics like different interpretations of the shortest temporal path algorithm and metrics like temporal betweenness and temporal closeness. This library was the result of the [Thesis](http://www.win.tue.nl/~gfletche/ligtenberg2017.pdf) of Wouter Ligtenberg.
+
+**FlinkK8sOperator**
+
+[FlinkK8sOperator](https://github.com/lyft/flinkk8soperator) is a Kubernetes operator that manages Flink applications on Kubernetes. The operator acts as a control plane that manages the complete deployment lifecycle of the application.



[flink] branch release-1.9 updated: [FLINK-14335][docs] Fix ExampleIntegrationTest example

2019-10-09 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 46637f7  [FLINK-14335][docs] Fix ExampleIntegrationTest example
46637f7 is described below

commit 46637f790af74e1a9368975067dcd179d65315df
Author: Yangze Guo 
AuthorDate: Wed Oct 9 16:26:51 2019 +0800

[FLINK-14335][docs] Fix ExampleIntegrationTest example

- java version wasn't compiling due to a missing ';'
- java version was checking the order of the elements, which cannot be guaranteed
- examples were checking for the wrong results
---
 docs/dev/stream/testing.md| 12 ++--
 docs/dev/stream/testing.zh.md | 12 ++--
 2 files changed, 12 insertions(+), 12 deletions(-)
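
The corrected Java assertion in this diff checks containment rather than order; note that `List#containsAll` takes a `Collection`, so a compilable variant wraps the expected values (a sketch, with stand-in data for the docs' `CollectSink.values`):

{% highlight java %}
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.List;

// Stand-in for CollectSink.values from the docs example (hypothetical data).
List<Long> values = Arrays.asList(2L, 22L, 23L, 42L);

// Order-independent result check; the expected values are wrapped with
// Arrays.asList because List#containsAll expects a Collection, not varargs.
assertTrue(values.containsAll(Arrays.asList(2L, 22L, 23L)));
{% endhighlight %}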

diff --git a/docs/dev/stream/testing.md b/docs/dev/stream/testing.md
index 8992d50..2bd0096 100644
--- a/docs/dev/stream/testing.md
+++ b/docs/dev/stream/testing.md
@@ -44,7 +44,7 @@ public class IncrementMapFunction implements MapFunction<Long, Long> {
 
     @Override
     public Long map(Long record) throws Exception {
-        return record +1 ;
+        return record + 1;
     }
 }
 {% endhighlight %}
@@ -112,7 +112,7 @@ public class IncrementFlatMapFunctionTest {
         Collector<Long> collector = mock(Collector.class);
 
         // call the methods that you have implemented
-        incrementer.flatMap(2L, collector)
+        incrementer.flatMap(2L, collector);
 
         //verify collector was called with the right output
         Mockito.verify(collector, times(1)).collect(3L);
@@ -216,7 +216,7 @@ public class StatefulFlatMapTest {
         testHarness.setProcessingTime(100L);
 
         //retrieve list of emitted records for assertions
-        assertThat(testHarness.getOutput(), containsInExactlyThisOrder(3L))
+        assertThat(testHarness.getOutput(), containsInExactlyThisOrder(3L));
 
         //retrieve list of records emitted to a specific side output for assertions (ProcessFunction only)
         //assertThat(testHarness.getSideOutput(new OutputTag<>("invalidRecords")), hasSize(0))
@@ -358,7 +358,7 @@ public class IncrementMapFunction implements MapFunction<Long, Long> {
 
     @Override
     public Long map(Long record) throws Exception {
-        return record +1 ;
+        return record + 1;
     }
 }
 {% endhighlight %}
@@ -410,7 +410,7 @@ public class ExampleIntegrationTest {
         env.execute();
 
         // verify your results
-        assertEquals(Lists.newArrayList(2L, 42L, 44L), CollectSink.values);
+        assertTrue(CollectSink.values.containsAll(2L, 22L, 23L));
     }
 
     // create a testing sink
@@ -465,7 +465,7 @@ class StreamingJobIntegrationTest extends FlatSpec with Matchers with BeforeAndA
         env.execute()
 
         // verify your results
-        CollectSink.values should contain allOf (1,22,23)
+        CollectSink.values should contain allOf (2, 22, 23)
     }
 }
 // create a testing sink
diff --git a/docs/dev/stream/testing.zh.md b/docs/dev/stream/testing.zh.md
index 01c7808..2264ec4 100644
--- a/docs/dev/stream/testing.zh.md
+++ b/docs/dev/stream/testing.zh.md
@@ -44,7 +44,7 @@ public class IncrementMapFunction implements MapFunction<Long, Long> {
 
     @Override
     public Long map(Long record) throws Exception {
-        return record +1 ;
+        return record + 1;
     }
 }
 {% endhighlight %}
@@ -112,7 +112,7 @@ public class IncrementFlatMapFunctionTest {
         Collector<Long> collector = mock(Collector.class);
 
         // call the methods that you have implemented
-        incrementer.flatMap(2L, collector)
+        incrementer.flatMap(2L, collector);
 
         //verify collector was called with the right output
         Mockito.verify(collector, times(1)).collect(3L);
@@ -216,7 +216,7 @@ public class StatefulFlatMapTest {
         testHarness.setProcessingTime(100L);
 
         //retrieve list of emitted records for assertions
-        assertThat(testHarness.getOutput(), containsInExactlyThisOrder(3L))
+        assertThat(testHarness.getOutput(), containsInExactlyThisOrder(3L));
 
         //retrieve list of records emitted to a specific side output for assertions (ProcessFunction only)
         //assertThat(testHarness.getSideOutput(new OutputTag<>("invalidRecords")), hasSize(0))
@@ -358,7 +358,7 @@ public class IncrementMapFunction implements MapFunction<Long, Long> {
 
     @Override
     public Long map(Long record) throws Exception {
-        return record +1 ;
+        return record + 1;
     }
 }
 {% endhighlight %}
@@ -410,7 +410,7 @@ public class ExampleIntegrationTest {
         env.execute();
 
         // verify your results
-        assertEquals(Lists.newArrayList(2L, 42L, 44L), CollectSink.values);
+        assertTrue(CollectSink.values.containsAll(2L, 22L, 23L));
     }
 
     // create a testing sink
@@ -465,7 +465,7 @@ class StreamingJobIntegrationTest extends FlatSpec with Matchers with BeforeAndA
         env.execute()