[GitHub] [flink] asfgit closed pull request #9195: [FLINK-13289][table-planner-blink] Blink-planner should setKeyFields to upsert table sink

2019-07-25 Thread GitBox
asfgit closed pull request #9195: [FLINK-13289][table-planner-blink] 
Blink-planner should setKeyFields to upsert table sink
URL: https://github.com/apache/flink/pull/9195
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9224: [FLINK-13266] [connectors / kafka] Fix race condition between transaction commit and produc…

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #9224: [FLINK-13266] [connectors / kafka] 
Fix race condition between transaction commit and produc…
URL: https://github.com/apache/flink/pull/9224#issuecomment-514889442
 
 
   ## CI report:
   
   * 78b445e8a5fb88c8c7f7080f1e6d9fdf89a5a594 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120548015)
   * bc9ccf9cf50aa4f288ca1eb5226234b53fe091d0 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120806289)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #9232: [FLINK-13424][hive] HiveCatalog 
should add hive version in conf
URL: https://github.com/apache/flink/pull/9232#issuecomment-515295549
 
 
   ## CI report:
   
   * 199e67fb426db2cf4389112ba914089e5dee3f35 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120801913)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13427) HiveCatalog's createFunction not work when function name is upper

2019-07-25 Thread Rui Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893309#comment-16893309
 ] 

Rui Li commented on FLINK-13427:


This is a bug in the Hive metastore API. The name is not normalized when a function is 
created, but it is normalized to lower case when a function is fetched from HMS. 
Thus any function created with a name containing upper-case characters cannot 
be retrieved later on.
Since HiveCatalog is case-insensitive, I think we can normalize the function 
name on our side.
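
A minimal sketch of what that normalization could look like on our side (the helper name and the call sites are illustrative assumptions, not HiveCatalog's actual internals):
{code:java}
import java.util.Locale;

/**
 * Sketch only: HMS lower-cases function names when reading them back, so a
 * case-insensitive catalog can lower-case the name before every metastore call.
 */
final class FunctionNameNormalizer {

    private FunctionNameNormalizer() {
    }

    static String normalize(String functionName) {
        // Match the Hive metastore behavior, which normalizes to lower case on lookup.
        return functionName.toLowerCase(Locale.ROOT);
    }
}

// Hypothetical usage inside HiveCatalog#createFunction / #getFunction:
//   String name = FunctionNameNormalizer.normalize(functionPath.getObjectName());
//   client.createFunction(instantiateHiveFunction(name, function));
//   client.getFunction(functionPath.getDatabaseName(), name);
{code}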

> HiveCatalog's createFunction not work when function name is upper
> -
>
> Key: FLINK-13427
> URL: https://issues.apache.org/jira/browse/FLINK-13427
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
>  
> {code:java}
> hiveCatalog.createFunction(
>   new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"),
>   new CatalogFunctionImpl(TestSimpleUDF.class.getCanonicalName(), new HashMap<>()),
>   false);
> hiveCatalog.getFunction(new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"));
> {code}
> There is an exception now:
> {code:java}
> org.apache.flink.table.catalog.exceptions.FunctionNotExistException: Function 
> default.myUdf does not exist in Catalog test-catalog.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1030)
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase.testGenericTable(HiveCatalogITCase.java:146)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runTestMethod(FlinkStandaloneHiveRunner.java:170)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:155)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:93)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: NoSuchObjectException(message:Function default.myUdf does not 
> exist)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result.read(ThriftHiveMetastore.java)
> {code}
> It seems there are some bugs in HiveCatalog when upper-case names are used.
> Maybe we should normalizeName in createFunction...



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Closed] (FLINK-13413) Put flink-table jar under lib folder

2019-07-25 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-13413.
---
Resolution: Duplicate

This will be resolved in FLINK-13399.

> Put flink-table jar under lib folder
> 
>
> Key: FLINK-13413
> URL: https://issues.apache.org/jira/browse/FLINK-13413
> Project: Flink
>  Issue Type: Improvement
>  Components: Release System
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Priority: Major
>
> Now the flink-table jar is in the opt folder. Since we plan to make the flink-table API 
> a first-class citizen of Flink, we should put it under the lib folder; otherwise 
> it is not convenient to use tools that rely on the Table API, such as 
> the Flink Scala shell, the SQL Client, etc. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13413) Put flink-table jar under lib folder

2019-07-25 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893303#comment-16893303
 ] 

Jark Wu commented on FLINK-13413:
-

Hi [~Terry1897], I think this issue is a duplicate of FLINK-13399, and Timo 
has already started working on it. 

> Put flink-table jar under lib folder
> 
>
> Key: FLINK-13413
> URL: https://issues.apache.org/jira/browse/FLINK-13413
> Project: Flink
>  Issue Type: Improvement
>  Components: Release System
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Priority: Major
>
> Now the flink-table jar is in the opt folder. Since we plan to make the flink-table API 
> a first-class citizen of Flink, we should put it under the lib folder; otherwise 
> it is not convenient to use tools that rely on the Table API, such as 
> the Flink Scala shell, the SQL Client, etc. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13413) Put flink-table jar under lib folder

2019-07-25 Thread Terry Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893302#comment-16893302
 ] 

Terry Wang commented on FLINK-13413:


[~jark] I'd like to take this ticket, please feel free to assign it to me.

> Put flink-table jar under lib folder
> 
>
> Key: FLINK-13413
> URL: https://issues.apache.org/jira/browse/FLINK-13413
> Project: Flink
>  Issue Type: Improvement
>  Components: Release System
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Priority: Major
>
> Now the flink-table jar is in the opt folder. Since we plan to make the flink-table API 
> a first-class citizen of Flink, we should put it under the lib folder; otherwise 
> it is not convenient to use tools that rely on the Table API, such as 
> the Flink Scala shell, the SQL Client, etc. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] beyond1920 edited a comment on issue #9230: [FLINK-13430][build] Configure sending travis build notifications to bui...@flink.apache.org

2019-07-25 Thread GitBox
beyond1920 edited a comment on issue #9230: [FLINK-13430][build] Configure 
sending travis build notifications to bui...@flink.apache.org
URL: https://github.com/apache/flink/pull/9230#issuecomment-515297857
 
 
   @zentol @wuchong Sending notifications for contributor repositories as well 
may not be as useless as it looks. 
   I could receive build notifications from both my own Flink repository and the public 
Flink repository by 
   subscribing to the bui...@flink.apache.org mailing list and setting up my own email 
forwarding rules.
   Otherwise, I would need a custom travis config for my own repository. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] beyond1920 edited a comment on issue #9230: [FLINK-13430][build] Configure sending travis build notifications to bui...@flink.apache.org

2019-07-25 Thread GitBox
beyond1920 edited a comment on issue #9230: [FLINK-13430][build] Configure 
sending travis build notifications to bui...@flink.apache.org
URL: https://github.com/apache/flink/pull/9230#issuecomment-515297857
 
 
   @zentol @wuchong Sending notifications for contributor repositories as well 
may not be as useless as it looks. 
   I could receive build notifications from both my own Flink repository and the public 
Flink repository by 
   subscribing to the bui...@flink.apache.org mailing list and setting up my own email 
forwarding rules.
   Otherwise, I would need a custom travis config in my own repository. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] beyond1920 commented on issue #9230: [FLINK-13430][build] Configure sending travis build notifications to bui...@flink.apache.org

2019-07-25 Thread GitBox
beyond1920 commented on issue #9230: [FLINK-13430][build] Configure sending 
travis build notifications to bui...@flink.apache.org
URL: https://github.com/apache/flink/pull/9230#issuecomment-515297857
 
 
   @zentol @wuchong Sending notifications for contributor repositories as well 
may not be as useless as it looks. 
   I could receive build notifications from both my own Flink repository and the public 
Flink repository by 
   subscribing to the builds mailing list and setting up my own email forwarding rules.
   Otherwise, I would need a custom travis config in my own repository. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf

2019-07-25 Thread GitBox
flinkbot commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add 
hive version in conf
URL: https://github.com/apache/flink/pull/9232#issuecomment-515295549
 
 
   ## CI report:
   
   * 199e67fb426db2cf4389112ba914089e5dee3f35 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120801913)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf

2019-07-25 Thread GitBox
lirui-apache commented on issue #9232: [FLINK-13424][hive] HiveCatalog should 
add hive version in conf
URL: https://github.com/apache/flink/pull/9232#issuecomment-515293928
 
 
   cc @xuefuz @zjuwangg 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf

2019-07-25 Thread GitBox
flinkbot commented on issue #9232: [FLINK-13424][hive] HiveCatalog should add 
hive version in conf
URL: https://github.com/apache/flink/pull/9232#issuecomment-515294016
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache opened a new pull request #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf

2019-07-25 Thread GitBox
lirui-apache opened a new pull request #9232: [FLINK-13424][hive] HiveCatalog 
should add hive version in conf
URL: https://github.com/apache/flink/pull/9232
 
 
   
   
   ## What is the purpose of the change
   
   To avoid overriding the hive version users specify in the yaml file.
   
   
   ## Brief change log
   
 - Add hive version to hive conf in `HiveCatalog`
 - Hive table factory and source/sink will retrieve the hive version from the hive 
conf, and error out if the hive version is not present (see the sketch below).
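
   A rough sketch of the idea, assuming a hypothetical conf key name and helper methods (the actual key and the plumbing inside `HiveCatalog` may differ):

```java
import org.apache.hadoop.hive.conf.HiveConf;

/**
 * Sketch only: the catalog writes the Hive version it was created with into its
 * HiveConf, so the table factory and source/sink can read it back instead of guessing.
 * The key name "flink.hive.version" is an assumption for illustration.
 */
final class HiveVersionConfSketch {

    private static final String HIVE_VERSION_KEY = "flink.hive.version";

    // Called once when the catalog creates its HiveConf.
    static void addHiveVersion(HiveConf hiveConf, String hiveVersion) {
        hiveConf.set(HIVE_VERSION_KEY, hiveVersion);
    }

    // Called by the table factory / source / sink; fails fast when the version is missing.
    static String getHiveVersion(HiveConf hiveConf) {
        String version = hiveConf.get(HIVE_VERSION_KEY);
        if (version == null) {
            throw new IllegalStateException(
                    "Hive version is not set in HiveConf; HiveCatalog should have added it.");
        }
        return version;
    }
}
```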
   
   
   ## Verifying this change
   
   Existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13424) HiveCatalog should add hive version in conf

2019-07-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13424:
---
Labels: pull-request-available  (was: )

> HiveCatalog should add hive version in conf
> ---
>
> Key: FLINK-13424
> URL: https://issues.apache.org/jira/browse/FLINK-13424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> {{HiveTableSource}} and {{HiveTableSink}} retrieve the hive version from the conf. 
> Therefore {{HiveCatalog}} has to add it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on issue #9230: [FLINK-13430][build] Configure sending travis build notifications to bui...@flink.apache.org

2019-07-25 Thread GitBox
wuchong commented on issue #9230: [FLINK-13430][build] Configure sending travis 
build notifications to bui...@flink.apache.org
URL: https://github.com/apache/flink/pull/9230#issuecomment-515286056
 
 
   That's a good question, @zentol. How do we handle this for the Slack channel? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13252) Common CachedLookupFunction for All connector

2019-07-25 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13252:

Component/s: Table SQL / API

> Common CachedLookupFunction for All connector
> -
>
> Key: FLINK-13252
> URL: https://issues.apache.org/jira/browse/FLINK-13252
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common, Table SQL / API
>Reporter: Chance Li
>Assignee: Chance Li
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In short, it's a decorator pattern:
>  # A CachedLookupFunction extends TableFunction.
>  # When the cache feature is needed, the only thing to do is construct this 
> CachedLookupFunction with the real LookupFunction's instance, so it can be 
> used by any connector.
>  # CachedLookupFunction will emit the result directly if the data has been cached; 
> otherwise it invokes the real LookupFunction to fetch the data, caches it, and then 
> emits it.
>  # More cache strategies, such as All, will be added.
> We should add a new module called flink-connector-common.
> We can also provide a common async LookupFunction using this pattern instead 
> of duplicating the implementation in each connector.
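
A minimal sketch of the decorator idea described above (the class name, the plain in-memory map, and the lookup helper are illustrative assumptions; the actual PR wires the delegate through a collector proxy and a Guava cache with expiry and size limits):
{code:java}
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative decorator: caches the rows returned by the wrapped lookup function per key. */
public class CachedLookupFunctionSketch extends TableFunction<Row> {

    private final TableFunction<Row> delegate;        // the connector's real lookup function
    private transient Map<Row, List<Row>> cache;      // lookup key -> rows fetched earlier

    public CachedLookupFunctionSketch(TableFunction<Row> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void open(FunctionContext context) throws Exception {
        delegate.open(context);
        cache = new HashMap<>();
    }

    public void eval(Object... keys) {
        Row cacheKey = Row.of(keys);
        List<Row> cached = cache.get(cacheKey);
        if (cached != null) {
            cached.forEach(this::collect);            // cache hit: emit directly
            return;
        }
        List<Row> fetched = lookupFromDelegate(keys); // cache miss: ask the real function
        cache.put(cacheKey, fetched);
        fetched.forEach(this::collect);
    }

    @Override
    public void close() throws Exception {
        delegate.close();
    }

    // Capturing the delegate's output is the part the PR solves with a collector proxy;
    // this placeholder only marks where that interception would happen.
    private List<Row> lookupFromDelegate(Object... keys) {
        throw new UnsupportedOperationException("placeholder for the delegate invocation");
    }
}
{code}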



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on issue #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
wuchong commented on issue #9231: [FLINK-13252] Common CachedLookupFunction for 
All connector
URL: https://github.com/apache/flink/pull/9231#issuecomment-515284486
 
 
   cc @KurtYoung 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #9088: [FLINK-13012][hive] Handle default partition name of Hive table

2019-07-25 Thread GitBox
lirui-apache commented on issue #9088: [FLINK-13012][hive] Handle default 
partition name of Hive table
URL: https://github.com/apache/flink/pull/9088#issuecomment-515280143
 
 
   Thanks @xuefuz.
   @wuchong could you please help review and merge this PR? The Travis build has 
passed: https://travis-ci.com/flink-ci/flink/builds/120691298


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13335) Align the SQL CREATE TABLE DDL with FLIP-37

2019-07-25 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13335:
---
Priority: Blocker  (was: Critical)

> Align the SQL CREATE TABLE DDL with FLIP-37
> ---
>
> Key: FLINK-13335
> URL: https://issues.apache.org/jira/browse/FLINK-13335
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Danny Chan
>Priority: Blocker
> Fix For: 1.9.0
>
>
> At first glance it does not seem that the newly introduced DDL is compliant 
> with FLIP-37. We should ensure consistent behavior, especially for corner cases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zentol commented on issue #9230: [FLINK-13430][build] Configure sending travis build notifications to bui...@flink.apache.org

2019-07-25 Thread GitBox
zentol commented on issue #9230: [FLINK-13430][build] Configure sending travis 
build notifications to bui...@flink.apache.org
URL: https://github.com/apache/flink/pull/9230#issuecomment-515229761
 
 
   How can we prevent this from sending notifications for contributor 
repositories?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - 
Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#issuecomment-510464651
 
 
   ## CI report:
   
   * b2821a6ae97fd943f3a66b672e85fbd2374126c4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118909729)
   * 0699f7e5f2240a4a1bc44c15f08e6a1df47d3b01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054579)
   * 509e634257496dd2d8d42d512901f5eb46a82c50 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119406891)
   * 7579df06b6a0bf799e8a9c2bcb09984bf52c8e8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441302)
   * ccb9dc29d4755d0a6c4596e08743b38615eb276a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120480063)
   * 1b976f30a689d9bdbf65513f034b2954bfb91468 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120494302)
   * 3ccee75dd0d506b90a2019cde9045eee26a4f4d5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120749125)
   * e07648c718b4ea32a3f02f826ca6a337400572be : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769160)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   * d7bf53a30514664925357bd5817305a02553d0a3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120506936)
   * 02cca7fb6283b84a20ee019159ccb023ccffbd82 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769129)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   * d7bf53a30514664925357bd5817305a02553d0a3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120506936)
   * 02cca7fb6283b84a20ee019159ccb023ccffbd82 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769129)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - 
Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#issuecomment-510464651
 
 
   ## CI report:
   
   * b2821a6ae97fd943f3a66b672e85fbd2374126c4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118909729)
   * 0699f7e5f2240a4a1bc44c15f08e6a1df47d3b01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054579)
   * 509e634257496dd2d8d42d512901f5eb46a82c50 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119406891)
   * 7579df06b6a0bf799e8a9c2bcb09984bf52c8e8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441302)
   * ccb9dc29d4755d0a6c4596e08743b38615eb276a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120480063)
   * 1b976f30a689d9bdbf65513f034b2954bfb91468 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120494302)
   * 3ccee75dd0d506b90a2019cde9045eee26a4f4d5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120749125)
   * e07648c718b4ea32a3f02f826ca6a337400572be : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769160)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307465052
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecoratorTest.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.collect.Lists;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.mockito.Mockito.mock;
+
+/**
+ * CachedLookupTableSourceTest.
+ */
+public class CachedLookupFunctionDecoratorTest {
 
 Review comment:
   How about adding a test to verify that the cache expires?
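
   For reference, a generic sketch of how such an expiry test is often written, using Guava's `Ticker` so the test can advance time deterministically instead of sleeping (shaded import paths follow the imports already used in this PR; this is not the PR's actual test code):

```java
import org.apache.flink.shaded.guava18.com.google.common.base.Ticker;
import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;

import org.junit.Assert;
import org.junit.Test;

import java.util.concurrent.TimeUnit;

public class CacheExpiryTestSketch {

    /** A ticker whose clock only moves when the test says so. */
    private static final class FakeTicker extends Ticker {
        private long nanos;

        void advanceMillis(long millis) {
            nanos += TimeUnit.MILLISECONDS.toNanos(millis);
        }

        @Override
        public long read() {
            return nanos;
        }
    }

    @Test
    public void testEntryExpiresAfterWrite() {
        FakeTicker ticker = new FakeTicker();
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .ticker(ticker)
                .build();

        cache.put("key", "value");
        Assert.assertEquals("value", cache.getIfPresent("key"));

        ticker.advanceMillis(TimeUnit.MINUTES.toMillis(11)); // move past the expiry window
        Assert.assertNull(cache.getIfPresent("key"));
    }
}
```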


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307457693
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/decorator/LookupFunctionInvoker.java
 ##
 @@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.TableFunction;
+
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+
+/**
+ * LookupFunctionInvoker is mainly to handle the parameter type.
+ */
+class LookupFunctionInvoker implements InvocationHandler {
 
 Review comment:
   Why not make this class `public`, or private within 
`CachedLookupFunctionDecorator`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307459072
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/decorator/LookupFunctionInvoker.java
 ##
 @@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.TableFunction;
+
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+
+/**
+ * LookupFunctionInvoker is mainly to handle the parameter type.
+ */
+class LookupFunctionInvoker implements InvocationHandler {
+
+   private final TableFunction tableFunction;
+   private volatile Method realMethod = null;
 
 Review comment:
   Currently, `realMethod` can only be one method: `eval`. How about 
renaming it to `evalMethod`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307456719
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecorator.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;
+
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * CachedLookupableTableSource.
+ * LIMITATION: now the function eval of the lookupTableSource implementation 
only supports parameter as Object or Object...
+ * TODO: in the future, to extract the parameter type from the Method, but I 
think it's not much urgent.
 
 Review comment:
   IMO, it would be enough to leave the TODO in the implementation code, not 
in the javadoc of this class.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307450367
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecorator.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;
+
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * CachedLookupableTableSource.
+ * LIMITATION: now the function eval of the lookupTableSource implementation 
only supports parameter as Object or Object...
+ * TODO: in the future, to extract the parameter type from the Method, but I 
think it's not much urgent.
+ */
+public class CachedLookupFunctionDecorator extends TableFunction {
+   //default 1day.
+   private static final long EXPIRED_TIME_MS_DEFAULT = 24 * 60 * 60 * 1000;
+   private final TableFunction lookupTableSource;
+   private transient Cache> cache;
+   private transient LookupFunctionInvoker.Evaluation realEval;
+   private CollectorProxy collectorProxy;
+   private final long expireTimeMS;
+   private final long maximumSize;
+   private final boolean recordStat;
+   private final boolean isVariable;
 
 Review comment:
   How about changing this field name to `isVarArgs`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307463620
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecoratorTest.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.collect.Lists;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.mockito.Mockito.mock;
+
+/**
+ * CachedLookupTableSourceTest.
+ */
+public class CachedLookupFunctionDecoratorTest {
+   private final long maximnCacheSize = 1 * 1024 * 1024L;
+   List result = new ArrayList<>();
+   Collector testCollector = new Collector() {
+   @Override
+   public void collect(Row record) {
+   result.add(record);
+   }
+
+   @Override
+   public void close() {
+
+   }
+   };
+
+   @Test
+   public void testEvalVariableObjectKey() throws Exception {
+   result.clear();
+   CachedLookupFunctionDecorator cachedLookupTableSource = 
new CachedLookupFunctionDecorator<>(new TestLookupFucntion(),
+   maximnCacheSize);
+   cachedLookupTableSource.setCollector(testCollector);
+   cachedLookupTableSource.open(mock(FunctionContext.class));
+
+   Cache> cache = 
cachedLookupTableSource.getCache();
+   //cache still have no data.
+   Assert.assertEquals(0, cache.size());
+   cachedLookupTableSource.eval("1");
+   //load into cache and emit correctly.
+   Assert.assertEquals(1, cache.size());
+   Assert.assertEquals(5, result.size());
+   Assert.assertEquals(cache.getIfPresent(Row.of("1")), result);
+   List expected = Lists.newArrayList(Row.of("1", "0"),
+   Row.of("1", "1"),
+   Row.of("1", "2"),
+   Row.of("1", "3"),
+   Row.of("1", "4"));
+   Assert.assertEquals(expected, result);
+
+   // cache hit.
+   cachedLookupTableSource.eval("1");
+   Assert.assertEquals(1, cache.size());
+   Assert.assertEquals(10, result.size());
+   List expected2 = cache.getIfPresent(Row.of("1"));
+   expected2.addAll(cache.getIfPresent(Row.of("1")));
+   Assert.assertEquals(expected2, result);
+
+   cachedLookupTableSource.eval("2");
+   Assert.assertEquals(2, cache.size());
+   Assert.assertEquals(15, result.size());
+   expected2.addAll(cache.getIfPresent(Row.of("2")));
+   Assert.assertEquals(expected2, result);
+
+   cachedLookupTableSource.eval("3", "4");
+   Assert.assertEquals(3, cache.size());
+   Assert.assertEquals(20, result.size());
+   expected2.addAll(cache.getIfPresent(Row.of("3", "4")));
+   Assert.assertEquals(expected2, result);
+
+   expected = Lists.newArrayList(Row.of("3", "4", "0"),
+   Row.of("3", "4", "1"),
+   Row.of("3", "4", "2"),
+   Row.of("3", "4", "3"),
+   Row.of("3", "4", "4"));
+   List multKey = cache.getIfPresent(Row.of("3", "4"));
+   Assert.assertEquals(expected, multKey);
+   cachedLookupTableSource.close();
+
+   }
+
+   @Test
+   public void testWithMaxSize() throws Exception {
+   result.clear();
+   //set maxSize is 1, that means only can hold one key in 

[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307464222
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/sources/decorator/LookupFunctionInvokerTest.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * LookupFunctionInvokerTest.
+ */
+public class LookupFunctionInvokerTest {
+   @Test
+   public void testNormal() {
+   AtomicInteger vaviableObjectCount = new AtomicInteger(0);
+   TableFunction functionMulit = new TableFunction() {
 
 Review comment:
   typo, `functionMulti`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307463872
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/sources/decorator/LookupFunctionInvokerTest.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * LookupFunctionInvokerTest.
+ */
+public class LookupFunctionInvokerTest {
+   @Test
+   public void testNormal() {
+   AtomicInteger vaviableObjectCount = new AtomicInteger(0);
 
 Review comment:
   typo, `variableObjectCount`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307464504
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecoratorTest.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.collect.Lists;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.mockito.Mockito.mock;
+
+/**
+ * CachedLookupTableSourceTest.
+ */
+public class CachedLookupFunctionDecoratorTest {
+   private final long maximnCacheSize = 1 * 1024 * 1024L;
+   List result = new ArrayList<>();
+   Collector testCollector = new Collector() {
+   @Override
+   public void collect(Row record) {
+   result.add(record);
+   }
+
+   @Override
+   public void close() {
+
+   }
+   };
+
+   @Test
+   public void testEvalVariableObjectKey() throws Exception {
+   result.clear();
+   CachedLookupFunctionDecorator cachedLookupTableSource = 
new CachedLookupFunctionDecorator<>(new TestLookupFucntion(),
+   maximnCacheSize);
+   cachedLookupTableSource.setCollector(testCollector);
+   cachedLookupTableSource.open(mock(FunctionContext.class));
+
+   Cache> cache = 
cachedLookupTableSource.getCache();
+   //cache still have no data.
+   Assert.assertEquals(0, cache.size());
+   cachedLookupTableSource.eval("1");
+   //load into cache and emit correctly.
+   Assert.assertEquals(1, cache.size());
+   Assert.assertEquals(5, result.size());
+   Assert.assertEquals(cache.getIfPresent(Row.of("1")), result);
+   List expected = Lists.newArrayList(Row.of("1", "0"),
+   Row.of("1", "1"),
+   Row.of("1", "2"),
+   Row.of("1", "3"),
+   Row.of("1", "4"));
+   Assert.assertEquals(expected, result);
+
+   // cache hit.
+   cachedLookupTableSource.eval("1");
+   Assert.assertEquals(1, cache.size());
+   Assert.assertEquals(10, result.size());
+   List expected2 = cache.getIfPresent(Row.of("1"));
+   expected2.addAll(cache.getIfPresent(Row.of("1")));
+   Assert.assertEquals(expected2, result);
+
+   cachedLookupTableSource.eval("2");
+   Assert.assertEquals(2, cache.size());
+   Assert.assertEquals(15, result.size());
+   expected2.addAll(cache.getIfPresent(Row.of("2")));
+   Assert.assertEquals(expected2, result);
+
+   cachedLookupTableSource.eval("3", "4");
+   Assert.assertEquals(3, cache.size());
+   Assert.assertEquals(20, result.size());
+   expected2.addAll(cache.getIfPresent(Row.of("3", "4")));
+   Assert.assertEquals(expected2, result);
+
+   expected = Lists.newArrayList(Row.of("3", "4", "0"),
+   Row.of("3", "4", "1"),
+   Row.of("3", "4", "2"),
+   Row.of("3", "4", "3"),
+   Row.of("3", "4", "4"));
+   List multKey = cache.getIfPresent(Row.of("3", "4"));
+   Assert.assertEquals(expected, multKey);
+   cachedLookupTableSource.close();
+
+   }
+
+   @Test
+   public void testWithMaxSize() throws Exception {
+   result.clear();
+   //set maxSize is 1, that means only can hold one key in 

[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307460899
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecorator.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;
+
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * CachedLookupableTableSource.
+ * LIMITATION: now the function eval of the lookupTableSource implementation 
only supports parameter as Object or Object...
+ * TODO: in the future, to extract the parameter type from the Method, but I 
think it's not much urgent.
+ */
+public class CachedLookupFunctionDecorator extends TableFunction<Row> {
+   //default 1day.
+   private static final long EXPIRED_TIME_MS_DEFAULT = 24 * 60 * 60 * 1000;
+   private final TableFunction<Row> lookupTableSource;
+   private transient Cache<Row, List<Row>> cache;
+   private transient LookupFunctionInvoker.Evaluation realEval;
+   private CollectorProxy collectorProxy;
+   private final long expireTimeMS;
+   private final long maximumSize;
+   private final boolean recordStat;
 
 Review comment:
   How about renaming it to `recordStats`?
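   For reference, a small hedged sketch (assuming the shaded Guava 18 cache API already imported in this file) showing that the plural form lines up with the builder method `recordStats()` and the `stats()` accessor:

   ```java
   import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
   import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;
   import org.apache.flink.shaded.guava18.com.google.common.cache.CacheStats;

   class CacheStatsExample {
       static void printStats() {
           Cache<String, String> cache = CacheBuilder.newBuilder()
               .maximumSize(1_000)
               .recordStats()              // Guava's builder method is already plural
               .build();

           cache.put("k", "v");
           cache.getIfPresent("k");        // one hit
           cache.getIfPresent("missing");  // one miss

           CacheStats stats = cache.stats();
           System.out.println(stats.hitCount() + " hits, " + stats.missCount() + " misses");
       }
   }
   ```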


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307451707
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecorator.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;
+
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * CachedLookupableTableSource.
+ * LIMITATION: now the function eval of the lookupTableSource implementation 
only supports parameter as Object or Object...
+ * TODO: in the future, to extract the parameter type from the Method, but I 
think it's not much urgent.
+ */
+public class CachedLookupFunctionDecorator extends TableFunction<Row> {
+   //default 1day.
+   private static final long EXPIRED_TIME_MS_DEFAULT = 24 * 60 * 60 * 1000;
+   private final TableFunction<Row> lookupTableSource;
+   private transient Cache<Row, List<Row>> cache;
+   private transient LookupFunctionInvoker.Evaluation realEval;
+   private CollectorProxy collectorProxy;
+   private final long expireTimeMS;
+   private final long maximumSize;
+   private final boolean recordStat;
+   private final boolean isVariable;
+
+   public CachedLookupFunctionDecorator(TableFunction 
lookupTableSource, long maximumSize) {
+   this(lookupTableSource, maximumSize, EXPIRED_TIME_MS_DEFAULT);
+   }
+
+   public CachedLookupFunctionDecorator(TableFunction 
lookupTableSource, long maximumSize, long expireTimeMs) {
+   this(lookupTableSource, maximumSize, expireTimeMs, true);
+   }
+
+   public CachedLookupFunctionDecorator(
+   TableFunction lookupTableSource, long maximumSize, long 
expireTimeMs, boolean recordStat) {
+   this.lookupTableSource = lookupTableSource;
+   this.maximumSize = maximumSize;
+   this.expireTimeMS = expireTimeMs;
+   this.recordStat = recordStat;
+   this.isVariable = checkMethodVariable("eval", 
lookupTableSource.getClass());
+   }
+
+   @Override
+   public void open(FunctionContext context) throws Exception {
+   lookupTableSource.open(context);
+   collectorProxy = new CollectorProxy();
+   lookupTableSource.setCollector(collectorProxy);
+   LookupFunctionInvoker lookupFunctionInvoker = new 
LookupFunctionInvoker(lookupTableSource);
+   realEval = lookupFunctionInvoker.getProxy();
+   CacheBuilder cacheBuilder = 
CacheBuilder.newBuilder().expireAfterWrite(expireTimeMS,
+   TimeUnit.MILLISECONDS).maximumSize(maximumSize);
+   if (this.recordStat) {
+   cacheBuilder.recordStats();
+   }
+   this.cache = cacheBuilder.build();
+   }
+
+   @Override
+   public void close() throws Exception {
+   if (cache != null) {
+   cache.cleanUp();
+   cache = null;
+   }
+   lookupTableSource.close();
+   }
+
+   @Override
+   public TypeInformation<Row> getResultType() {
+   return lookupTableSource.getResultType();
+   }
+
+   @Override
+   public TypeInformation[] getParameterTypes(Class[] signature) {
+   return lookupTableSource.getParameterTypes(signature);
+   }
+
+   @VisibleForTesting
+   Cache<Row, List<Row>> getCache() {
+   return cache;
+   }
+
+   public void eval(Object... keys) {
+

[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307464111
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/sources/decorator/LookupFunctionInvokerTest.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * LookupFunctionInvokerTest.
+ */
+public class LookupFunctionInvokerTest {
+   @Test
+   public void testNormal() {
+   AtomicInteger vaviableObjectCount = new AtomicInteger(0);
+   TableFunction<Row> functionMulit = new TableFunction<Row>() {
+   public void eval(Object... keys) {
+   vaviableObjectCount.incrementAndGet();
+   System.out.println(Arrays.asList(keys));
 
 Review comment:
   I think this might be a debug message? If so, it could be removed.
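   If the output is still useful while debugging, a hedged alternative (assuming slf4j, which Flink already provides on the classpath; the class and method names below are only illustrative) would be to log at debug level instead of printing to stdout:

   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   import java.util.Arrays;

   class EvalLogging {
       private static final Logger LOG = LoggerFactory.getLogger(EvalLogging.class);

       void logKeys(Object... keys) {
           // Only build the message when debug logging is actually enabled.
           if (LOG.isDebugEnabled()) {
               LOG.debug("eval called with keys: {}", Arrays.asList(keys));
           }
       }
   }
   ```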


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307456029
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecorator.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;
+
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * CachedLookupableTableSource.
 
 Review comment:
   How about `A general lookup function decorator enhanced with the ability to 
cache recent query results.` or some other, more detailed description?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] Myasuka commented on a change in pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
Myasuka commented on a change in pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#discussion_r307464890
 
 

 ##
 File path: 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/sources/decorator/CachedLookupFunctionDecoratorTest.java
 ##
 @@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.sources.decorator;
+
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
+import org.apache.flink.shaded.guava18.com.google.common.collect.Lists;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.mockito.Mockito.mock;
+
+/**
+ * CachedLookupTableSourceTest.
+ */
+public class CachedLookupFunctionDecoratorTest {
+   private final long maximnCacheSize = 1 * 1024 * 1024L;
 
 Review comment:
   typo, `maximumCacheSize`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - 
Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#issuecomment-510464651
 
 
   ## CI report:
   
   * b2821a6ae97fd943f3a66b672e85fbd2374126c4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118909729)
   * 0699f7e5f2240a4a1bc44c15f08e6a1df47d3b01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054579)
   * 509e634257496dd2d8d42d512901f5eb46a82c50 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119406891)
   * 7579df06b6a0bf799e8a9c2bcb09984bf52c8e8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441302)
   * ccb9dc29d4755d0a6c4596e08743b38615eb276a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120480063)
   * 1b976f30a689d9bdbf65513f034b2954bfb91468 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120494302)
   * 3ccee75dd0d506b90a2019cde9045eee26a4f4d5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120749125)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman closed pull request #8957: [hotfix][docs] fix broken links

2019-07-25 Thread GitBox
sjwiesman closed pull request #8957: [hotfix][docs] fix broken links
URL: https://github.com/apache/flink/pull/8957
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13390) Clarify the exact meaning of state size when executing incremental checkpoint

2019-07-25 Thread Yun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893042#comment-16893042
 ] 

Yun Tang commented on FLINK-13390:
--

BTW, [~fhueske] please assign this ticket to me if that's fine with you. IMO, this 
issue should focus on clarifying the meaning via docs, logs and the web UI.

> Clarify the exact meaning of state size when executing incremental checkpoint
> -
>
> Key: FLINK-13390
> URL: https://issues.apache.org/jira/browse/FLINK-13390
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Yun Tang
>Priority: Major
> Fix For: 1.10.0
>
>
> This issue is inspired by [a user 
> mail|https://lists.apache.org/thread.html/56069ce869afbfca66179e89788c05d3b092e3fe363f3540dcdeb7a1@%3Cuser.flink.apache.org%3E]
>  which was confused about the meaning of state size.
> I think changing the description of state size and adding some notes to the 
> documentation could help. Moreover, changing the log message when a checkpoint 
> completes should also be taken into account.
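A purely illustrative sketch (not something proposed in the ticket; the method and parameter names are made up) of how a completion log line could spell out both numbers so that "state size" is no longer ambiguous for incremental checkpoints:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CheckpointSizeLogging {
    private static final Logger LOG = LoggerFactory.getLogger(CheckpointSizeLogging.class);

    // Hypothetical helper: report the incremental delta and the full state size separately.
    void logCompletedCheckpoint(long checkpointId, long incrementalBytes, long fullStateBytes) {
        LOG.info(
            "Completed checkpoint {} ({} bytes uploaded in this increment, {} bytes full state).",
            checkpointId, incrementalBytes, fullStateBytes);
    }
}
```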



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231#issuecomment-515120529
 
 
   ## CI report:
   
   * b7f4569b21af8e9241f8e0b2094290e7909c09a4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120741670)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-13422) git doc file's link not display correct

2019-07-25 Thread Yun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang closed FLINK-13422.

Resolution: Not A Problem

> git doc file's link not display correct
> ---
>
> Key: FLINK-13422
> URL: https://issues.apache.org/jira/browse/FLINK-13422
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: richt richt
>Priority: Major
> Attachments: image-2019-07-25-16-22-40-208.png
>
>
> eg.
> [https://github.com/apache/flink/blob/master/docs/dev/table/hive_integration.md]
>  
> !image-2019-07-25-16-22-40-208.png!
> It may be a hyperlink or some picture, but I cannot read it now.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - 
Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#issuecomment-510464651
 
 
   ## CI report:
   
   * b2821a6ae97fd943f3a66b672e85fbd2374126c4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118909729)
   * 0699f7e5f2240a4a1bc44c15f08e6a1df47d3b01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054579)
   * 509e634257496dd2d8d42d512901f5eb46a82c50 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119406891)
   * 7579df06b6a0bf799e8a9c2bcb09984bf52c8e8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441302)
   * ccb9dc29d4755d0a6c4596e08743b38615eb276a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120480063)
   * 1b976f30a689d9bdbf65513f034b2954bfb91468 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120494302)
   * 3ccee75dd0d506b90a2019cde9045eee26a4f4d5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120749125)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xuefuz commented on issue #9229: [FLINK-13279][table-sql-client] Fully qualify sink name in sql-client

2019-07-25 Thread GitBox
xuefuz commented on issue #9229:  [FLINK-13279][table-sql-client] Fully qualify 
sink name in sql-client 
URL: https://github.com/apache/flink/pull/9229#issuecomment-515144417
 
 
   Thanks to @dawidwys for the fix, which seems good to me. Here I'd like to 
reemphasize an observation for future consideration.
   
   In our original design of CatalogManager, we wanted it to be only aware of 
multiple catalogs with one being the current. CatalogManager doesn't need to 
treat any one of them specially. Each catalog may have its own capabilities and 
restrictions, but the upper layers don't need to care about them. This design would 
avoid such pervasive, special treatment of the built-in catalog as we see in 
this PR.
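   
   As a minimal, hedged sketch of that idea (not Flink's actual CatalogManager API; the class and method names are made up), name resolution would depend only on which catalog and database are current, with no special case for a built-in catalog:
   
   ```java
   import java.util.Arrays;
   import java.util.LinkedHashMap;
   import java.util.Map;

   // Sketch only: every registered catalog is treated uniformly, one of them merely happens
   // to be "current". Fully qualifying a sink name never special-cases a built-in catalog.
   final class CatalogManagerSketch {

       private final Map<String, Object> catalogs = new LinkedHashMap<>(); // catalog name -> handle
       private String currentCatalog;
       private String currentDatabase;

       void registerCatalog(String name, Object catalog) {
           catalogs.put(name, catalog);
       }

       void setCurrentCatalog(String catalog, String database) {
           if (!catalogs.containsKey(catalog)) {
               throw new IllegalArgumentException("Unknown catalog: " + catalog);
           }
           this.currentCatalog = catalog;
           this.currentDatabase = database;
       }

       // "t" -> [current, currentDb, t]; "db.t" -> [current, db, t]; "cat.db.t" stays as-is.
       String[] fullyQualify(String... path) {
           switch (path.length) {
               case 1: return new String[] {currentCatalog, currentDatabase, path[0]};
               case 2: return new String[] {currentCatalog, path[0], path[1]};
               case 3: return path;
               default: throw new IllegalArgumentException("Invalid path: " + Arrays.toString(path));
           }
       }
   }
   ```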
   
   While it's too late to revisit this, I'd suggest we open up the discussion 
again post 1.9.
   
   Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink-playgrounds] sjwiesman commented on issue #1: [FLINK-12749] [playgrounds] initial version of flink-cluster-playground

2019-07-25 Thread GitBox
sjwiesman commented on issue #1: [FLINK-12749] [playgrounds] initial version of 
flink-cluster-playground
URL: https://github.com/apache/flink-playgrounds/pull/1#issuecomment-515143875
 
 
   @fhueske are there release steps documented somewhere? This repo might not 
have a lot of visibility within the flink-dev community, and we should make 
sure release managers are aware of it. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9230: [FLINK-13430][build] Configure sending travis build notifications to bui...@flink.apache.org

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #9230: [FLINK-13430][build] Configure 
sending travis build notifications to bui...@flink.apache.org
URL: https://github.com/apache/flink/pull/9230#issuecomment-515092299
 
 
   ## CI report:
   
   * 465011b7a8f0ae4774f8c9c47ec03e8f58edaa2b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120729445)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on issue #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
sjwiesman commented on issue #8903: [FLINK-12747][docs] Getting Started - Table 
API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#issuecomment-515142560
 
 
   Sorry about that @NicoK, should be taken care of now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sjwiesman commented on a change in pull request #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
sjwiesman commented on a change in pull request #8903: [FLINK-12747][docs] 
Getting Started - Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#discussion_r307369349
 
 

 ##
 File path: 
flink-walkthroughs/flink-walkthrough-table-java/src/main/resources/archetype-resources/pom.xml
 ##
 @@ -0,0 +1,263 @@
+
+http://maven.apache.org/POM/4.0.0; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;>
+   4.0.0
+
+   ${groupId}
+   ${artifactId}
+   ${version}
+   jar
+
+   Flink Walkthrough Table Java
+   http://www.myorganization.org
+
+   
+   
UTF-8
+   @project.version@
+   1.8
+   2.11
+   ${java.version}
+   ${java.version}
+   
+
+   
+   
+   apache.snapshots
+   Apache Development Snapshot Repository
+   
https://repository.apache.org/content/repositories/snapshots/
+   
+   false
+   
+   
+   true
+   
+   
+   
+
+   
+   
+   org.apache.flink
+   
flink-walkthrough-common_${scala.binary.version}
+   ${flink.version}
+   
+
+   
+   
+   org.apache.flink
+   flink-java
+   ${flink.version}
+   provided
+   
+   
+   org.apache.flink
+   
flink-streaming-java_${scala.binary.version}
+   ${flink.version}
+   provided
+   
+
+   
+   
+   org.apache.flink
+   
flink-table-api-java-bridge_${scala.binary.version}
+   ${flink.version}
+   provided
+   
+   
+   org.apache.flink
+   
flink-table-planner_${scala.binary.version}
+   ${flink.version}
+   provided
 
 Review comment:
   I've added myself as a watcher on that ticket. I will follow and update this 
pr accordingly. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] StephanEwen commented on issue #9132: [FLINK-13245][network] Fix the bug of file resource leak while canceling partition request

2019-07-25 Thread GitBox
StephanEwen commented on issue #9132: [FLINK-13245][network] Fix the bug of 
file resource leak while canceling partition request
URL: https://github.com/apache/flink/pull/9132#issuecomment-515129785
 
 
   @zhijiangW @NicoK @azagrebin @zentol 
   
   I addressed the review comments, put some minor changes on top to make 
consumption notifications idempotent. That also allows for some slight 
simplification/cleanup. I will open a PR with this as soon as the tests have passed 
once on Travis.
   
   https://github.com/StephanEwen/flink/commits/network
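   
   A tiny, hedged sketch (not the actual code in that branch) of what making a consumption notification idempotent usually boils down to: the side effect fires at most once, no matter how often or from how many threads it is triggered:
   
   ```java
   import java.util.concurrent.atomic.AtomicBoolean;

   final class IdempotentNotifier {

       private final AtomicBoolean notified = new AtomicBoolean(false);
       private final Runnable onConsumed; // e.g. release the partition's file handle

       IdempotentNotifier(Runnable onConsumed) {
           this.onConsumed = onConsumed;
       }

       void notifyConsumed() {
           // Only the first caller wins the compareAndSet; repeated notifications are no-ops.
           if (notified.compareAndSet(false, true)) {
               onConsumed.run();
           }
       }
   }
   ```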


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
flinkbot commented on issue #9231: [FLINK-13252] Common CachedLookupFunction 
for All connector
URL: https://github.com/apache/flink/pull/9231#issuecomment-515120529
 
 
   ## CI report:
   
   * b7f4569b21af8e9241f8e0b2094290e7909c09a4 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120741670)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9229: [FLINK-13279][table-sql-client] Fully qualify sink name in sql-client

2019-07-25 Thread GitBox
flinkbot edited a comment on issue #9229:  [FLINK-13279][table-sql-client] 
Fully qualify sink name in sql-client 
URL: https://github.com/apache/flink/pull/9229#issuecomment-515087495
 
 
   ## CI report:
   
   * 63d15eb2051a523b13ca094d6d100764366462e8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120727452)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
flinkbot commented on issue #9231: [FLINK-13252] Common CachedLookupFunction 
for All connector
URL: https://github.com/apache/flink/pull/9231#issuecomment-515117256
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13252) Common CachedLookupFunction for All connector

2019-07-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13252:
---
Labels: pull-request-available  (was: )

> Common CachedLookupFunction for All connector
> -
>
> Key: FLINK-13252
> URL: https://issues.apache.org/jira/browse/FLINK-13252
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Chance Li
>Assignee: Chance Li
>Priority: Minor
>  Labels: pull-request-available
>
> In short, it's a decorator pattern:
>  # A CachedLookupFunction extends TableFunction.
>  # When the cache feature is needed, the only thing to do is to construct this 
> CachedLookupFunction with the real LookupFunction's instance, so it can be 
> used by any connector.
>  # CachedLookupFunction will emit the result directly if the data has been cached; 
> otherwise it invokes the real LookupFunction to fetch the data and emits it after 
> the data has been cached.
>  # It will have more cache strategies, such as All.
> We should add a new module called flink-connector-common.
> We can also provide a common Async LookupFunction using this pattern instead 
> of duplicating the implementation.
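
A minimal, hedged sketch of the decorator idea described above. It is not the PR's API: the connector-specific lookup is modeled as a plain `Function<Row, List<Row>>` to sidestep the reflective `eval` dispatch the real decorator needs, and the cache bounds are placeholders:

```java
import org.apache.flink.shaded.guava18.com.google.common.cache.Cache;
import org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

public class CachingLookupSketch extends TableFunction<Row> {

    private final Function<Row, List<Row>> fetcher; // stands in for the real LookupFunction
    private transient Cache<Row, List<Row>> cache;

    public CachingLookupSketch(Function<Row, List<Row>> fetcher) {
        this.fetcher = fetcher;
    }

    @Override
    public void open(FunctionContext context) throws Exception {
        cache = CacheBuilder.newBuilder()
            .maximumSize(10_000)                // placeholder bound
            .expireAfterWrite(1, TimeUnit.DAYS) // placeholder expiry
            .build();
    }

    public void eval(Object... keys) {
        Row key = Row.of(keys);
        List<Row> rows = cache.getIfPresent(key);
        if (rows == null) {
            rows = fetcher.apply(key);          // cache miss: ask the real lookup source
            cache.put(key, rows);
        }
        for (Row row : rows) {
            collect(row);                       // emit from cache or from the fresh result
        }
    }
}
```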



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] chancelq opened a new pull request #9231: [FLINK-13252] Common CachedLookupFunction for All connector

2019-07-25 Thread GitBox
chancelq opened a new pull request #9231: [FLINK-13252] Common 
CachedLookupFunction for All connector
URL: https://github.com/apache/flink/pull/9231
 
 
   
   
   ## What is the purpose of the change
   1. A common CachedLookupFunction for all connectors.
   2. Cache strategies are not supported yet; maybe in another PR.
   3. Only 2 classes, so no new module is added.
   
   Please let me know your opinion, and let's see whether to continue with follow-ups such as 
a Cache Strategy and a common AsyncLookupFunction.
   
   ## Brief change log
   Introduce CachedLookupFunctionDecorator to support caching for all connectors.
   
   ## Verifying this change
   This change added tests and can be verified as follows:
   - *Added UT that validates CachedLookupFunctionDecorator*
   - *Added IT that validates HBaseLookupFunction using 
CachedLookupFunctionDecorator to support caching.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] NicoK commented on a change in pull request #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
NicoK commented on a change in pull request #8903: [FLINK-12747][docs] Getting 
Started - Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#discussion_r307391204
 
 

 ##
 File path: 
flink-walkthroughs/flink-walkthrough-table-scala/src/main/resources/archetype-resources/pom.xml
 ##
 @@ -0,0 +1,333 @@
+
+http://maven.apache.org/POM/4.0.0;
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;>
+   4.0.0
+
+   ${groupId}
+   ${artifactId}
+   ${version}
+   jar
+
+   Flink Walkthrough Table
+   http://www.myorganization.org
+
+   
+   
+   apache.snapshots
+   Apache Development Snapshot Repository
+   
https://repository.apache.org/content/repositories/snapshots/
+   
+   false
+   
+   
+   true
+   
+   
+   
+
+   
+   
UTF-8
+   @project.version@
+   2.11
+   2.11.12
+   
+
+   
+   
+   org.apache.flink
+   
flink-walkthrough-common_${scala.binary.version}
+   ${flink.version}
+   
+
+   
+   
+   
+   org.apache.flink
+   
flink-scala_${scala.binary.version}
+   ${flink.version}
+   provided
+   
+   
+   org.apache.flink
+   
flink-streaming-scala_${scala.binary.version}
+   ${flink.version}
+   provided
+   
+
+   
+   
+   org.scala-lang
+   scala-library
+   ${scala.version}
+   provided
+   
+
+   
+   
+   org.apache.flink
+   
flink-table-api-java-bridge_${scala.binary.version}
+   ${flink.version}
+   provided
+   
+   
+   org.apache.flink
+   
flink-table-api-scala-bridge_${scala.binary.version}
+   ${flink.version}
+   provided
+   
+   
+   org.apache.flink
+   
flink-table-planner_${scala.binary.version}
+   ${flink.version}
+   provided
+   
+
+   
+   org.apache.flink
+   
flink-streaming-scala_${scala.binary.version}
+   ${flink.version}
+   provided
+   
+
+   
+
+   
+
+   
+   
+   
+   org.slf4j
+   slf4j-log4j12
+   1.7.7
+   runtime
+   
+   
+   log4j
+   log4j
+   1.2.17
+   runtime
+   
+   
+
+   
+   
+   
+   
+   
+   org.apache.maven.plugins
+   maven-shade-plugin
+   3.0.0
+   
+   
+   
+   package
+   
+   shade
+   
+   
+   
+   
+   
org.apache.flink:force-shading
+   
com.google.code.findbugs:jsr305
+   
org.slf4j:*
+   
log4j:*
+   
+   
+   
+   
+   
+   
*:*
+

[GitHub] [flink] NicoK commented on a change in pull request #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
NicoK commented on a change in pull request #8903: [FLINK-12747][docs] Getting 
Started - Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#discussion_r307382423
 
 

 ##
 File path: 
flink-walkthroughs/flink-walkthrough-common/src/main/java/org/apache/flink/walkthrough/common/source/TransactionSource.java
 ##
 @@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.walkthrough.common.source;
+
+import org.apache.flink.annotation.Public;
+import org.apache.flink.streaming.api.functions.source.FromIteratorFunction;
+import org.apache.flink.walkthrough.common.entity.Transaction;
+
+import java.io.Serializable;
+import java.util.Iterator;
+
+/**
+ * A stream of transactions.
+ */
+@Public
+public class TransactionSource extends FromIteratorFunction<Transaction> {
+
+   public TransactionSource() {
+   super(new 
RateLimitedIterator<>(TransactionIterator.unbounded()));
+   }
+
+   private static class RateLimitedIterator<T> implements Iterator<T>, 
Serializable {
 
 Review comment:
   seems this is still missing


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] NicoK commented on a change in pull request #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
NicoK commented on a change in pull request #8903: [FLINK-12747][docs] Getting 
Started - Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#discussion_r307384457
 
 

 ##
 File path: 
flink-walkthroughs/flink-walkthrough-common/src/main/java/org/apache/flink/walkthrough/common/table/SpendReportTableSink.java
 ##
 @@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.walkthrough.common.table;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.sinks.AppendStreamTableSink;
+import org.apache.flink.table.sinks.BatchTableSink;
+import org.apache.flink.table.sinks.TableSink;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.types.Row;
+import org.apache.flink.walkthrough.common.sink.LoggerOutputFormat;
+
+/**
+ * A simple table sink for writing to stdout.
+ */
+@PublicEvolving
+@SuppressWarnings("deprecation")
+public class SpendReportTableSink implements AppendStreamTableSink<Row>, 
BatchTableSink<Row> {
+
+   private final TableSchema schema;
+
+   public SpendReportTableSink() {
+   this.schema = TableSchema
+   .builder()
+   .field("accountId", Types.LONG)
+   .field("timestamp", Types.SQL_TIMESTAMP)
+   .field("amount", Types.DOUBLE)
+   .build();
+   }
+
+   @Override
+   public void emitDataSet(DataSet<Row> dataSet) {
+   dataSet
+   .map(SpendReportTableSink::format)
+   .output(new LoggerOutputFormat());
+   }
+
+   @Override
+   public void emitDataStream(DataStream<Row> dataStream) {
+   dataStream
+   .map(SpendReportTableSink::format)
+   .writeUsingOutputFormat(new LoggerOutputFormat());
+   }
+
+   @Override
+   public TableSchema getTableSchema() {
+   return schema;
+   }
+
+   @Override
+   public DataType getConsumedDataType() {
+   return getTableSchema().toRowDataType();
+   }
+
+   @Override
+   public String[] getFieldNames() {
+   return getTableSchema().getFieldNames();
+   }
+
+   @Override
+   public TypeInformation[] getFieldTypes() {
+   return getTableSchema().getFieldTypes();
+   }
+
+   @Override
+   public TableSink configure(String[] fieldNames, 
TypeInformation[] fieldTypes) {
+   return this;
+   }
+
+   private static String format(Row row) {
 
 Review comment:
   FYI: IntelliJ provided that for me as one option of fixing this warning


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] NicoK commented on a change in pull request #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-25 Thread GitBox
NicoK commented on a change in pull request #8903: [FLINK-12747][docs] Getting 
Started - Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#discussion_r307383048
 
 

 ##
 File path: 
flink-walkthroughs/flink-walkthrough-common/src/main/java/org/apache/flink/walkthrough/common/sink/LoggerOutputFormat.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.walkthrough.common.sink;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.io.OutputFormat;
+import org.apache.flink.configuration.Configuration;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A basic output format that logs all records at level INFO.
+ */
+@Internal
+public class LoggerOutputFormat implements OutputFormat<String> {
 
 Review comment:
   ```suggestion
   public class LoggerOutputFormat implements OutputFormat<String> {
   
private static final long serialVersionUID = 1L;
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-13222) Add documentation for AdaptedRestartPipelinedRegionStrategyNG

2019-07-25 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-13222.
---
Resolution: Fixed

Fixed via
1.10.0: ebe14806b46d89c852aaccca5cec1ed3e3ead136
1.9.0: e5e0d822cec984a9115e1b1c4191a50ffd87e86e

> Add documentation for AdaptedRestartPipelinedRegionStrategyNG
> -
>
> Key: FLINK-13222
> URL: https://issues.apache.org/jira/browse/FLINK-13222
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Gary Yao
>Assignee: Zhu Zhu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It should be documented that if {{jobmanager.execution.failover-strategy}} is 
> set to _region_, the new pipelined region failover strategy 
> ({{AdaptedRestartPipelinedRegionStrategyNG}}) will be used. 
> *Acceptance Criteria*
> * config values _region_ and _full_ are documented
> * to be decided: config values _region-legacy_ and _individual_ remain 
> undocumented



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] tillrohrmann closed pull request #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-25 Thread GitBox
tillrohrmann closed pull request #9113: [FLINK-13222] [runtime] Add 
documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink-playgrounds] fhueske commented on a change in pull request #1: [FLINK-12749] [playgrounds] initial version of flink-cluster-playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #1: [FLINK-12749] [playgrounds] 
initial version of flink-cluster-playground
URL: https://github.com/apache/flink-playgrounds/pull/1#discussion_r307384600
 
 

 ##
 File path: README.md
 ##
 @@ -0,0 +1,21 @@
+# Apache Flink Playgrounds
+
+Apache Flink is an open source stream processing framework with powerful 
stream- and batch-
+processing capabilities.
+
+Learn more about Flink at [http://flink.apache.org/](http://flink.apache.org/)
+
+## Playgrounds
+
+This repository contains the configuration files for two Apache Flink 
playgrounds.
+
+* The [Flink Cluster Playground](../master/flink-cluster-playground) consists 
of a Flink Session Cluster, a Kafka Cluster and a simple 
+Flink Job. It is explained in much detail as part of 
+[Apache Flink's "Getting Started" 
guide](https://ci.apache.org/projects/flink/flink-docs-stable/getting-started/docker-playgrounds/flink-cluster-playground.html).
 
+
+* The interactive SQL playground is still under development and will be added 
shortly.
+
+## About
+
+Apache Flink is an open source project of The Apache Software Foundation (ASF).
+The Apache Flink project originated from the 
[Stratosphere](http://stratosphere.eu) research project.
 
 Review comment:
   I think we can remove this line ;-)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink-playgrounds] fhueske commented on a change in pull request #1: [FLINK-12749] [playgrounds] initial version of flink-cluster-playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #1: [FLINK-12749] [playgrounds] 
initial version of flink-cluster-playground
URL: https://github.com/apache/flink-playgrounds/pull/1#discussion_r306901158
 
 

 ##
 File path: flink-cluster-playground/docker-compose.yaml
 ##
 @@ -0,0 +1,72 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+version: "2.1"
+services:
+  client:
+image: flink:1.9-scala_2.11
+command: "flink run -d -p 2 
/opt/flink/examples/streaming/ClickEventCount.jar --bootstrap.servers 
kafka:9092 --checkpointing --event-time"
+depends_on:
+  - jobmanager
+  - kafka
+volumes:
+  - ./conf:/opt/flink/conf
+environment:
+  - JOB_MANAGER_RPC_ADDRESS=jobmanager
+  clickevent-generator:
+image: flink:1.9-scala_2.11
+command: "java -classpath 
/opt/flink/examples/streaming/ClickEventCount.jar:/opt/flink/lib/* 
org.apache.flink.streaming.examples.windowing.clickeventcount.ClickEventGenerator
 --bootstrap.servers kafka:9092 --topic input"
+depends_on:
+  - kafka
+  jobmanager:
+image: flink:1.9-scala_2.11
+command: "jobmanager.sh start-foreground"
+ports:
+  - 8081:8081
+volumes:
+  - ./conf:/opt/flink/conf
+  - flink-checkpoint-directory:/tmp/flink-checkpoint-directory
+  - /tmp/flink-savepoints-directory:/tmp/flink-savepoints-directory
+environment:
+  - JOB_MANAGER_RPC_ADDRESS=jobmanager
+  taskmanager:
+image: flink:1.9-SNAPSHOT-scala_2.11
 
 Review comment:
   rm `SNAPSHOT`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] tillrohrmann commented on issue #9113: [FLINK-13222] [runtime] Add documentation for failover strategy option

2019-07-25 Thread GitBox
tillrohrmann commented on issue #9113: [FLINK-13222] [runtime] Add 
documentation for failover strategy option
URL: https://github.com/apache/flink/pull/9113#issuecomment-515110135
 
 
   Thanks a lot for addressing the comments @zhuzhurk. Merging this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307342896
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the ocumentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307322583
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
git@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307375004
 
 

 ##
 File path: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/ClickEventCount.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.examples.windowing.clickeventcount;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.streaming.api.TimeCharacteristic;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import 
org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
+import org.apache.flink.streaming.api.windowing.time.Time;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.functions.ClickEventStatisticsCollector;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.functions.CountingAggregator;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEvent;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventDeserializationSchema;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventStatistics;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventStatisticsSerializationSchema;
+
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * A simple streaming job reading {@link ClickEvent}s from Kafka, counting 
events per minute and
+ * writing the resulting {@link ClickEventStatistics} back to Kafka.
+ *
+ *  It can be run with or without checkpointing and with event time or 
processing time semantics.
+ * 
+ *
+ *
+ */
+public class ClickEventCount {
+
+   public static final String CHECKPOINTING_OPTION = "checkpointing";
+   public static final String EVENT_TIME_OPTION = "event-time";
+
+   public static final Time WINDOW_SIZE = Time.of(60, TimeUnit.SECONDS);
 
 Review comment:
   Reduce the time to 10 or 15 seconds to make the job more interactive?
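   For illustration, a minimal sketch of what that could look like (the 15-second value is just one of the options mentioned above, not a decided change; it assumes the imports already present in the quoted file):

```java
// Sketch only: a shorter window to make the playground job more interactive.
// The exact value (10s vs. 15s) is still open in this thread.
public static final Time WINDOW_SIZE = Time.of(15, TimeUnit.SECONDS);
```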


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r306893764
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -23,5 +23,658 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
 * This will be replaced by the TOC
 {:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, the supervision of Jobs as well as 
resource 
 
 Review comment:
   Link the first `Job` to 
http://localhost:4000/concepts/glossary.html#flink-job


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307373250
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
 
 Review comment:
   Might be a bit repetitive with the TOC, but I think it's important that the 
intro makes the readers curious.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307346693
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
git@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307320030
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
git@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307343960
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
git@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307355355
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
git@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307306170
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
 
 Review comment:
   Setup sounds a bit "static". Maybe a section title like "Starting the 
Playground" would feel a bit more interactive?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307325048
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
git@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r306898498
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
 
 Review comment:
   Extend to:
   > There are six different `page`s and we generate 1000 click events per page 
and minute. Hence, the output of the Flink job should show 1000 views per page 
and window.
   
   ?
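   For readers skimming the thread, a small self-contained sketch of the kind of per-page, per-window count described here; it is *not* the playground's ClickEventCount job, and the class name, in-memory source and `print()` sink are made up purely for illustration:

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowedPageCountSketch {

	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// Stand-in for the Kafka "input" topic: one (page, 1) record per click.
		DataStream<Tuple2<String, Integer>> clicks = env.fromElements(
				Tuple2.of("/index", 1), Tuple2.of("/help", 1), Tuple2.of("/index", 1));

		// Uses processing time to keep the sketch short; the real job supports event time.
		clicks
			.keyBy(0)                                                   // key by page
			.window(TumblingProcessingTimeWindows.of(Time.minutes(1)))  // one-minute windows
			.sum(1)                                                     // count clicks per page and window
			.print();                                                   // stand-in for the "output" topic

		env.execute("Windowed page count (sketch)");
	}
}
```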


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307341571
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
git@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307306440
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
git@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r306896929
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307372507
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
 
 Review comment:
   Mention more explicitly what the user will learn to make it more appealing?
   
   > In this playground you will see how to run Jobs on Flink and get a good 
impression of how to manage and operate them. More specifically, you will:
   > 
   > * Run a Flink Job
   > * Monitor the Job with Flink's Web UI
   > * Control the Job via Flink's REST interface and CLI client.
   > * Experience how the Job recovers from failures
   > * Upgrade and rescale the Job
   > * Query Job metrics via the REST interface


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r306894460
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
 
 Review comment:
   Maybe mention that the client container is not needed by the Flink cluster 
but is just included for ease of use in the playground?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r306902320
 
 

 ##
 File path: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/ClickEventCount.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.examples.windowing.clickeventcount;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.streaming.api.TimeCharacteristic;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import 
org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
+import org.apache.flink.streaming.api.windowing.time.Time;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.functions.ClickEventStatisticsCollector;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.functions.CountingAggregator;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEvent;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventDeserializationSchema;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventStatistics;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventStatisticsSerializationSchema;
+
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * A simple streaming job reading {@link ClickEvent}s from Kafka, counting 
events per minute and
+ * writing the resulting {@link ClickEventStatistics} back to Kafka.
+ *
+ *  It can be run with or without checkpointing and with event time or 
processing time semantics.
+ * 
+ *
+ *
+ */
+public class ClickEventCount {
+
+   public static final String CHECKPOINTING_OPTION = "checkpointing";
+   public static final String EVENT_TIME_OPTION = "event-time";
+
+   public static final Time WINDOW_SIZE = Time.of(60, TimeUnit.SECONDS);
+
+   public static void main(String[] args) throws Exception {
+   final ParameterTool params = ParameterTool.fromArgs(args);
+
+   final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
 
 Review comment:
   disable operator chaining to make the job easier to follow in the web 
frontend?
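
   A minimal, self-contained sketch of what that suggestion could look like; it is not the playground job itself, the hard-coded (page, count) elements stand in for the Kafka source and sink, and the only call that matters for the suggestion is `env.disableOperatorChaining()`:

   {% highlight java %}
   import org.apache.flink.api.java.tuple.Tuple2;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
   import org.apache.flink.streaming.api.windowing.time.Time;

   public class ChainingDisabledExample {

       public static void main(String[] args) throws Exception {
           final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

           // Every operator becomes its own vertex in the WebUI instead of being
           // fused into a single chained task, which makes the job graph easier to inspect.
           env.disableOperatorChaining();

           // Stand-in for the Kafka source of ClickEvents: (page, count) pairs.
           env.fromElements(
                   Tuple2.of("/index", 1),
                   Tuple2.of("/shop", 1),
                   Tuple2.of("/index", 1))
               .keyBy(0)                        // key by page
               .timeWindow(Time.minutes(1))     // one-minute tumbling windows, as in the playground job
               .sum(1)                          // count events per page and window
               .print();                        // stand-in for the Kafka sink

           env.execute("Chaining disabled example");
       }
   }
   {% endhighlight %}

   With chaining disabled, the source, the window aggregation and the sink each show up as separate vertices in the WebUI, at the cost of extra serialization between them.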


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307356146
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307368581
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307347062
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r306896090
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307306872
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307365176
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307313845
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
 
 Review comment:
   How about 
   > "The playground environment is set up in just a few steps. We will walk 
you through the necessary commands and show how to validate that everything is 
running correctly.
   > 
   > We assume that you have ..." 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307346024
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307352100
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r306847119
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
 
 Review comment:
   `docker-compose-based` or `Docker-based`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307350932
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r30735
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                  Name                                      Command               State                     Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307357720
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
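Besides the `docker-compose ps` check shown next, you could also ask the Flink Master directly which Jobs it knows about via its REST API. The small sketch below (hypothetical class name `JobStatusCheck`) assumes the REST/WebUI port 8081 is published on localhost, as it is in this playground's docker-compose setup.

{% highlight java %}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: lists the Jobs known to the Flink Master via its REST API.
public class JobStatusCheck {

    public static void main(String[] args) throws Exception {
        // Assumes the Flink Master's REST/WebUI port 8081 is mapped to localhost.
        URL url = new URL("http://localhost:8081/jobs");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        } finally {
            connection.disconnect();
        }

        // A healthy playground should report one job in RUNNING state, e.g.:
        // {"jobs":[{"id":"<job-id>","status":"RUNNING"}]}
        System.out.println(body);
    }
}
{% endhighlight %}
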
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307303027
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing resources.
+The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started, a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the ocumentation via the 
release picker.
 
 Review comment:
   **d**ocumentation


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307340861
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing resources.
+The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started, a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the ocumentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307321086
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing resources.
+The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started, a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the ocumentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-25 Thread GitBox
fhueske commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r307362783
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing resources.
+The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started, a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the ocumentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 
