[jira] [Commented] (FLINK-9230) WebFrontendITCase.testStopYarn is unstable

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446645#comment-16446645
 ] 

ASF GitHub Bot commented on FLINK-9230:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5886
  
cc @zentol 


> WebFrontendITCase.testStopYarn is unstable
> --
>
> Key: FLINK-9230
> URL: https://issues.apache.org/jira/browse/FLINK-9230
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
> Environment: The latest commit : 
> 7d0bfd51e967eb1eb2c2869d79cb7cd13b8366dd on master branch.
>Reporter: mingleizhang
>Assignee: Sihua Zhou
>Priority: Major
>
> https://api.travis-ci.org/v3/job/369380167/log.txt
> {code:java}
> Running org.apache.flink.runtime.webmonitor.WebFrontendITCase
> Running org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase
> Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.348 sec <<< 
> FAILURE! - in org.apache.flink.runtime.webmonitor.WebFrontendITCase
> testStopYarn(org.apache.flink.runtime.webmonitor.WebFrontendITCase)  Time 
> elapsed: 1.365 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<202 Accepted> but was:<404 Not Found>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.flink.runtime.webmonitor.WebFrontendITCase.testStopYarn(WebFrontendITCase.java:359)
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.805 sec - 
> in org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase
> Results :
> Failed tests: 
>   WebFrontendITCase.testStopYarn:359 expected:<202 Accepted> but was:<404 Not 
> Found>
> Tests run: 14, Failures: 1, Errors: 0, Skipped: 0
> 02:17:10.224 [INFO] 
> 
> 02:17:10.224 [INFO] Reactor Summary:
> 02:17:10.224 [INFO] 
> 02:17:10.224 [INFO] flink-core . 
> SUCCESS [ 58.759 s]
> 02:17:10.224 [INFO] flink-java . 
> SUCCESS [ 30.613 s]
> 02:17:10.224 [INFO] flink-runtime .. 
> SUCCESS [22:26 min]
> 02:17:10.224 [INFO] flink-optimizer  
> SUCCESS [  5.508 s]
> 02:17:10.224 [INFO] flink-clients .. 
> SUCCESS [ 12.354 s]
> 02:17:10.224 [INFO] flink-streaming-java ... 
> SUCCESS [02:53 min]
> 02:17:10.224 [INFO] flink-scala  
> SUCCESS [ 18.968 s]
> 02:17:10.225 [INFO] flink-test-utils ... 
> SUCCESS [  4.826 s]
> 02:17:10.225 [INFO] flink-statebackend-rocksdb . 
> SUCCESS [ 13.644 s]
> 02:17:10.225 [INFO] flink-runtime-web .. 
> FAILURE [ 15.095 s]
> 02:17:10.225 [INFO] flink-streaming-scala .. 
> SKIPPED
> 02:17:10.225 [INFO] flink-scala-shell .. 
> SKIPPED
> 02:17:10.225 [INFO] 
> 
> 02:17:10.225 [INFO] BUILD FAILURE
> 02:17:10.225 [INFO] 
> 
> 02:17:10.225 [INFO] Total time: 28:03 min
> 02:17:10.225 [INFO] Finished at: 2018-04-21T02:17:10+00:00
> 02:17:10.837 [INFO] Final Memory: 87M/711M
> 02:17:10.837 [INFO] 
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (FLINK-9230) WebFrontendITCase.testStopYarn is unstable

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446643#comment-16446643
 ] 

ASF GitHub Bot commented on FLINK-9230:
---

GitHub user sihuazhou opened a pull request:

https://github.com/apache/flink/pull/5886

[FLINK-9230][test] harden WebFrontendITCase#testStopYarn()

## What is the purpose of the change

This PR aims to harden `WebFrontendITCase#testStopYarn()`.

## Brief change log

  - *harden `WebFrontendITCase#testStopYarn()`*

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.
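The PR text doesn't spell out what the hardening is. A common way to stabilize a test like this is to poll until the expected status code arrives instead of asserting on the very first response (the intermittent 404 suggests the request raced with handler registration). A hypothetical sketch, with invented names, not the actual patch:

```java
import java.util.function.IntSupplier;

/** Hypothetical sketch (names invented, not the actual patch): poll the
 *  endpoint until it returns the expected status or a deadline passes,
 *  rather than asserting on the very first response. */
public class RetryUntil {

    public static boolean retryUntilStatus(IntSupplier request, int expectedStatus,
                                           long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (request.getAsInt() == expectedStatus) {
                return true;                       // e.g. got 202 Accepted
            }
            try {
                Thread.sleep(50);                  // back off before retrying
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;                              // deadline hit; let the test fail
    }

    public static void main(String[] args) {
        // Simulated endpoint: 404 twice, then 202, like a handler that is
        // registered slightly after the server starts accepting requests.
        int[] calls = {0};
        IntSupplier fakeRequest = () -> (calls[0]++ < 2) ? 404 : 202;
        System.out.println(retryUntilStatus(fakeRequest, 202, 1000)); // true
    }
}
```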

## Does this pull request potentially affect one of the following parts:

no

## Documentation

no

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sihuazhou/flink hardenTestStopYarn

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5886.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5886


commit 516dfd9dd7ac63b858c924ae57a88dde52ad1715
Author: sihuazhou 
Date:   2018-04-21T05:34:21Z

harden WebFrontendITCase#testStopYarn()








[jira] [Assigned] (FLINK-9230) WebFrontendITCase.testStopYarn is unstable

2018-04-20 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou reassigned FLINK-9230:
-

Assignee: Sihua Zhou



[jira] [Assigned] (FLINK-9219) Add support for OpenGIS features in Table & SQL API

2018-04-20 Thread Xingcan Cui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingcan Cui reassigned FLINK-9219:
--

Assignee: Xingcan Cui

> Add support for OpenGIS features in Table & SQL API
> ---
>
> Key: FLINK-9219
> URL: https://issues.apache.org/jira/browse/FLINK-9219
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Xingcan Cui
>Priority: Major
>
> CALCITE-1968 added core functionality for handling 
> spatial/geographical/geometry data. It should not be too hard to expose these 
> features also in Flink's Table & SQL API. We would need a new {{GEOMETRY}} 
> data type and connect the function APIs.
> Right now the following functions are supported by Calcite:
> {code}
> ST_AsText, ST_AsWKT, ST_Boundary, ST_Buffer, ST_Contains, 
> ST_ContainsProperly, ST_Crosses, ST_Disjoint, ST_Distance, ST_DWithin, 
> ST_Envelope, ST_EnvelopesIntersect, ST_Equals, ST_GeometryType, 
> ST_GeometryTypeCode, ST_GeomFromText, ST_Intersects, ST_Is3D, 
> ST_LineFromText, ST_MakeLine, ST_MakePoint, ST_MLineFromText, 
> ST_MPointFromText, ST_MPolyFromText, ST_Overlaps, ST_Point, ST_PointFromText, 
> ST_PolyFromText, ST_SetSRID, ST_Touches, ST_Transform, ST_Union, ST_Within, 
> ST_Z
> {code}





[jira] [Updated] (FLINK-9230) WebFrontendITCase.testStopYarn is unstable

2018-04-20 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9230:

Description: 
https://api.travis-ci.org/v3/job/369380167/log.txt

[jira] [Created] (FLINK-9230) WebFrontendITCase.testStopYarn is unstable

2018-04-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9230:
---

 Summary: WebFrontendITCase.testStopYarn is unstable
 Key: FLINK-9230
 URL: https://issues.apache.org/jira/browse/FLINK-9230
 Project: Flink
  Issue Type: Improvement
  Components: Tests
 Environment: The latest commit : 
7d0bfd51e967eb1eb2c2869d79cb7cd13b8366dd on master branch.
Reporter: mingleizhang





[jira] [Updated] (FLINK-9100) Shadow/Hide password from configuration that is logged to log file

2018-04-20 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou updated FLINK-9100:
--
Priority: Critical  (was: Major)

> Shadow/Hide password from configuration that is logged to log file
> --
>
> Key: FLINK-9100
> URL: https://issues.apache.org/jira/browse/FLINK-9100
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Szymon Szczypiński
>Assignee: Sihua Zhou
>Priority: Critical
>
> I was thinking of adding a small improvement to Flink: a feature that 
> hides the value of any configuration key containing the phrase "password". I 
> only want to do this when the value is logged to the log file.
> I want this for security reasons: if someone needs to monitor the log file, 
> the value of a password key would otherwise be visible in that monitoring.
> I want to change the classes "GlobalConfiguration" and "SecurityOptions".
> In class "GlobalConfiguration", change the line
> {{LOG.info("Loading configuration property: {}, {}", key, value);}}
> and add code that checks whether the key contains the phrase "password"; if 
> so, the value is replaced with, for example, "***".
> I want to apply this change only when a new key in class "SecurityOptions" 
> is set to true. This new key indicates that passwords should be 
> shadowed/hidden.
> What do you think about this improvement?
> This improvement is similar to FLINK-8793 for the REST component.
>   
>  
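The proposal above could be sketched roughly as follows. This is a hypothetical illustration (class and method names are invented, not Flink's actual implementation):

```java
/** Hypothetical sketch of the proposal (names invented): mask values whose
 *  key contains "password" before they are handed to the logger. */
public class ConfigLogMasker {

    public static String maskIfSensitive(String key, String value) {
        // Case-insensitive check, so "ssl.key-PASSWORD" is caught as well.
        if (key != null && key.toLowerCase().contains("password")) {
            return "***";            // never let the real value reach the log
        }
        return value;
    }

    public static void main(String[] args) {
        // Would be applied in GlobalConfiguration right before
        // LOG.info("Loading configuration property: {}, {}", key, value);
        System.out.println(maskIfSensitive("rest.port", "8081"));
        System.out.println(maskIfSensitive("security.ssl.key-password", "hunter2"));
    }
}
```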





[jira] [Updated] (FLINK-8601) Introduce ElasticBloomFilter for Approximate calculation and other situations of performance optimization

2018-04-20 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou updated FLINK-8601:
--
Summary: Introduce ElasticBloomFilter for Approximate calculation and other 
situations of performance optimization  (was: Introduce PartitionedBloomFilter 
for Approximate calculation and other situations of performance optimization)

> Introduce ElasticBloomFilter for Approximate calculation and other situations 
> of performance optimization
> -
>
> Key: FLINK-8601
> URL: https://issues.apache.org/jira/browse/FLINK-8601
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API, State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Sihua Zhou
>Assignee: Sihua Zhou
>Priority: Major
>
> h3. Background
> Bloom filters are useful in many situations, for example:
>  * 1. Approximate calculation: deduplication (e.g. UV calculation)
>  * 2. Performance optimization: e.g. [runtime filter 
> join|https://www.cloudera.com/documentation/enterprise/5-9-x/topics/impala_runtime_filtering.html].
>By using a BF, we can greatly reduce the number of queries for state 
> data in a stream join; the filtered-out queries would eventually fail to 
> find any results anyway, which is poor performance for RocksDB-based state 
> because it traverses {{sst}} files on disk.
> However, with the state support Flink currently provides, it is hard to use 
> a bloom filter, for the following reasons:
>  * 1. Serialization problem: Bloom filter state can be large (for example 
> 100 MB); if implemented on top of RocksDB state, the state data would need 
> to be serialized on every query and update, and performance would be very 
> poor.
>  * 2. Data skew: data in different key groups can be skewed, and the skew 
> cannot be accurately predicted before the program runs, so it is impossible 
> to determine how many resources the bloom filter should be allocated. One 
> option is to allocate the space needed for the most skewed case, but that 
> leads to a very serious waste of resources.
> h3. Requirement
> Therefore, I introduce the PartitionedBloomFilter for Flink, which needs to 
> meet at least the following requirements:
>  * 1. Support for changing parallelism
>  * 2. Serialize only when necessary, i.e. when performing a checkpoint
>  * 3. Handle data skew: users only specify the desired input size and fpp, 
> and the system allocates resources dynamically
>  * 4. No conflict with other state: users can still use KeyedState and 
> OperatorState while using this bloom filter
>  * 5. Support relaxed TTL (i.e. the data survives at least the specified 
> time)
> Design doc: [design 
> doc|https://docs.google.com/document/d/1s8w2dkNFDM9Fb2zoHwHY0hJRrqatAFta42T97nDXmqc/edit?usp=sharing]
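To illustrate why a Bloom filter helps the runtime-filter-join case described above, here is a minimal hand-rolled sketch. It is illustrative only, not the proposed ElasticBloomFilter (which additionally has to handle rescaling, skew, and checkpointing):

```java
import java.util.BitSet;

/** Minimal Bloom filter, for illustration only. A lookup that the filter
 *  rejects is definitely absent from state, so the RocksDB read (and its
 *  sst traversal on disk) can be skipped entirely. */
public class TinyBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int numHashes;

    public TinyBloomFilter(int size, int numHashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.numHashes = numHashes;
    }

    // Derive k hash functions from two base hashes (Kirsch-Mitzenmacher).
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < numHashes; i++) bits.set(index(key, i));
    }

    /** False means "definitely absent"; true means "possibly present". */
    public boolean mightContain(String key) {
        for (int i = 0; i < numHashes; i++) {
            if (!bits.get(index(key, i))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        TinyBloomFilter bf = new TinyBloomFilter(1 << 16, 3);
        bf.add("user-42");
        System.out.println(bf.mightContain("user-42")); // true: no false negatives
        // A key that was never added is almost always rejected, so the
        // expensive state lookup for it can usually be skipped.
    }
}
```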





[jira] [Commented] (FLINK-9100) Shadow/Hide password from configuration that is logged to log file

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446621#comment-16446621
 ] 

ASF GitHub Bot commented on FLINK-9100:
---

Github user sihuazhou commented on the issue:

https://github.com/apache/flink/pull/5854
  
Could anyone have a look at this? 






[jira] [Comment Edited] (FLINK-9211) Job submission via REST/dashboard does not work on Kubernetes

2018-04-20 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446057#comment-16446057
 ] 

Aljoscha Krettek edited comment on FLINK-9211 at 4/20/18 5:31 PM:
--

The problem is that some Kubernetes deployments don't allow a pod to connect to 
itself using a service that points at the pod itself.

In the Flink doc, we describe K8s deployments like this:
 - deploy JM
 - deploy JM service with name {{flink-jobmanager}}
 - deploy TMs

All components are configured to think that the hostname of the JM is 
{{flink-jobmanager}} and the TMs can actually connect to that via the service. 
However, the JM cannot connect to itself via that service. This is a problem 
because {{WebSubmissionExtension}} ({{JarRunHandler}}, to be specific) will 
instantiate a {{RestClusterClient}} that tries to submit to 
{{[http://flink-jobmanager:8081|http://flink-jobmanager:8081/]}}. This will 
fail.

A simple fix is to change {{WebSubmissionExtension}} to use this:
{code:java}
// we always use localhost as the endpoint because we assume that the dispatcher
// and web submission extension are running on the same machine. We're using
// localhost because in some setups (for example some Kubernetes setups) a machine
// might not be able to refer back to itself via the externally announced hostname
final SettableLeaderRetrievalService settableLeaderRetrievalService =
    new SettableLeaderRetrievalService(
        "http://localhost:8081",
        HighAvailabilityServices.DEFAULT_LEADER_ID);
{code}
However, this doesn't work if the port is changed in the config or if we use 
{{https}}. The code that constructs the proper url can be found here: 
[https://github.com/apache/flink/blob/4fa4e8cb364c35ea1807a051929b4604b9d31c2e/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestServerEndpoint.java#L207].
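A hedged sketch of what a config-aware variant might look like. This is a hypothetical helper (not code from the linked {{RestServerEndpoint}}): it builds the loopback URL from the configured REST port and SSL flag instead of hardcoding the address:

```java
/** Hypothetical helper (not code from RestServerEndpoint): build the
 *  loopback URL from the configured rest port and SSL setting instead
 *  of hardcoding "http://localhost:8081". */
public class LocalRestUrl {

    public static String localRestUrl(int restPort, boolean sslEnabled) {
        // Switch the scheme when SSL is enabled; always use loopback so the
        // JM never has to resolve its own externally announced hostname.
        String scheme = sslEnabled ? "https" : "http";
        return scheme + "://localhost:" + restPort;
    }

    public static void main(String[] args) {
        System.out.println(localRestUrl(8081, false)); // http://localhost:8081
        System.out.println(localRestUrl(9443, true));  // https://localhost:9443
    }
}
```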



> Job submission via REST/dashboard does not work on Kubernetes
> -
>
> Key: FLINK-9211
> URL: https://issues.apache.org/jira/browse/FLINK-9211
> Project: Flink
>  Issue Type: Bug
>  Components: Client, Web Client
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0
>
>
> When setting up a cluster on Kubernetes according to the documentation, it is 
> possible to upload jar files, but when trying to execute them you get an 
> exception like this:
> {code}
> org.apache.flink.runtime.rest.handler.RestHandlerException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
> at 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$2(JarRunHandler.java:113)
> at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
> at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> 

[jira] [Commented] (FLINK-9211) Job submission via REST/dashboard does not work on Kubernetes

2018-04-20 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446057#comment-16446057
 ] 

Aljoscha Krettek commented on FLINK-9211:
-

The problem is that some Kubernetes deployments don't allow a pod to connect to 
itself using a servicethat points at the pod itself.

In the Flink doc, we describe K8s deployments like this:
 - deploy JM
 - deploy JM service with name {{flink-jobmanager}}
 - deploy TMs

All components are configured to think that the hostname of the JM is 
{{flink-jobmanager}}, and the TMs can actually connect to it via the service. 
However, the JM cannot connect to itself via that service. This is a problem 
because the {{WebSubmissionExtension}} ({{JarRunHandler}} to be specific) will 
instantiate a {{RestClusterClient}} that tries to submit to 
{{[http://flink-jobmanager:8081|http://flink-jobmanager:8081/]}}. This will 
fail.

A simple fix is to change {{WebSubmissionExtension}} to use this:
{code:java}
// we always use localhost as the endpoint because we assume that the dispatcher
// and web submission extension are running on the same machine. We're using
// localhost because in some setups (for example some Kubernetes setups) a machine
// might not be able to refer back to itself via the externally announced hostname
final SettableLeaderRetrievalService settableLeaderRetrievalService =
    new SettableLeaderRetrievalService("http://localhost:8081",
        HighAvailabilityServices.DEFAULT_LEADER_ID);
{code}
However, this doesn't work if the port is changed in the config or if we use 
{{https}}. The code that constructs the proper url can be found here: 
[https://github.com/apache/flink/blob/4fa4e8cb364c35ea1807a051929b4604b9d31c2e/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestServerEndpoint.java#L207].
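As a hedged illustration of that URL construction, the sketch below derives the loopback endpoint from the configured SSL flag and port instead of hardcoding {{http://localhost:8081}}; the helper name {{localRestUrl}} is hypothetical and not part of Flink's API:

```java
// Hedged sketch: build the loopback REST URL from the configured SSL flag
// and port instead of hardcoding http://localhost:8081, which is what the
// linked RestServerEndpoint code does conceptually. The helper name
// localRestUrl is hypothetical, not Flink API.
public class RestUrlSketch {
    static String localRestUrl(boolean sslEnabled, int restPort) {
        final String protocol = sslEnabled ? "https://" : "http://";
        return protocol + "localhost:" + restPort;
    }

    public static void main(String[] args) {
        System.out.println(localRestUrl(false, 8081));
        System.out.println(localRestUrl(true, 8443));
    }
}
```

With a derivation like this, a changed REST port or an SSL-enabled setup would still yield a usable local endpoint.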

> Job submission via REST/dashboard does not work on Kubernetes
> -
>
> Key: FLINK-9211
> URL: https://issues.apache.org/jira/browse/FLINK-9211
> Project: Flink
>  Issue Type: Bug
>  Components: Client, Web Client
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0
>
>
> When setting up a cluster on Kubernetes according to the documentation, it is 
> possible to upload jar files, but when trying to execute them you get an 
> exception like this:
> {code}
> org.apache.flink.runtime.rest.handler.RestHandlerException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
> at 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$2(JarRunHandler.java:113)
> at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
> at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:196)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit 
> JobGraph.
> at 
> 

[jira] [Commented] (FLINK-3089) State API Should Support Data Expiration (State TTL)

2018-04-20 Thread Bowen Li (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446049#comment-16446049
 ] 

Bowen Li commented on FLINK-3089:
-

right, i was planning to send one over the weekend

> State API Should Support Data Expiration (State TTL)
> 
>
> Key: FLINK-3089
> URL: https://issues.apache.org/jira/browse/FLINK-3089
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API, State Backends, Checkpointing
>Reporter: Niels Basjes
>Assignee: Bowen Li
>Priority: Major
>
> In some use cases (web analytics) there is a need to have a state per visitor 
> on a website (i.e. keyBy(sessionid)).
> At some point the visitor simply leaves and no longer creates new events (so 
> a special 'end of session' event will not occur).
> The only way to determine that a visitor has left is by choosing a timeout, 
> like "after 30 minutes without events we consider the visitor 'gone'".
> Only after this (chosen) timeout has expired should we discard this state.
> In the Trigger part of Windows we can set a timer and close/discard this kind 
> of information, but that introduces the buffering effect of the window (which 
> in some scenarios is unwanted).
> What I would like is to be able to set a timeout on a specific state which I 
> can update afterwards.
> This makes it possible to create a map function that assigns the right value 
> and that discards the state automatically.
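The per-key timeout semantics described above can be sketched in plain Java, independent of Flink's state API; every event refreshes a last-seen timestamp and an expiration sweep discards state older than the timeout (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch of the per-visitor timeout described above: each
// event refreshes a "last seen" timestamp for its key, and a sweep discards
// any state whose key has been silent for longer than the timeout. This
// only illustrates the semantics; it is not Flink's state API.
public class SessionTimeoutSketch {
    static final long TIMEOUT_MS = 30 * 60 * 1000L; // visitor "gone" after 30 min

    final Map<String, Long> lastSeen = new HashMap<>();
    final Map<String, String> state = new HashMap<>();

    void onEvent(String sessionId, String value, long nowMs) {
        state.put(sessionId, value);    // update the per-visitor state
        lastSeen.put(sessionId, nowMs); // refresh its expiration clock
    }

    void expire(long nowMs) {
        lastSeen.entrySet().removeIf(e -> {
            boolean expired = nowMs - e.getValue() >= TIMEOUT_MS;
            if (expired) {
                state.remove(e.getKey()); // discard the state automatically
            }
            return expired;
        });
    }

    public static void main(String[] args) {
        SessionTimeoutSketch s = new SessionTimeoutSketch();
        s.onEvent("visitor-1", "page=/home", 0L);
        s.onEvent("visitor-2", "page=/cart", 20 * 60 * 1000L);
        s.expire(31 * 60 * 1000L); // visitor-1 timed out, visitor-2 has not
        System.out.println(s.state.containsKey("visitor-1"));
        System.out.println(s.state.containsKey("visitor-2"));
    }
}
```

A state-backend-level TTL would do this sweep internally, which is exactly what removes the need for the window/trigger workaround.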



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8836) Duplicating a KryoSerializer does not duplicate registered default serializers

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445981#comment-16445981
 ] 

ASF GitHub Bot commented on FLINK-8836:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5880


> Duplicating a KryoSerializer does not duplicate registered default serializers
> --
>
> Key: FLINK-8836
> URL: https://issues.apache.org/jira/browse/FLINK-8836
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The {{duplicate()}} method of the {{KryoSerializer}} is as following:
> {code:java}
> public KryoSerializer<T> duplicate() {
>     return new KryoSerializer<>(this);
> }
> protected KryoSerializer(KryoSerializer<T> toCopy) {
>     defaultSerializers = toCopy.defaultSerializers;
>     defaultSerializerClasses = toCopy.defaultSerializerClasses;
>     kryoRegistrations = toCopy.kryoRegistrations;
>     ...
> }
> {code}
> Put shortly, when duplicating a {{KryoSerializer}}, the 
> {{defaultSerializers}} serializer instances are directly provided to the new 
> {{KryoSerializer}} instance.
>  As a result, those default serializer instances are shared across two 
> different {{KryoSerializer}} instances, so the result is not a correct duplicate.
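A minimal self-contained sketch of why sharing mutable internals breaks {{duplicate()}}: the aliasing copy observes later mutations of the original, while a deep copy does not (this illustrates the bug's mechanism only and is not Flink code):

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained sketch of the bug's mechanism: a duplicate() that shares
// mutable internals is not independent of the original, while a deep copy
// is. This illustrates the aliasing problem only; it is not Flink code.
public class DuplicateSketch {
    final List<String> registrations;

    DuplicateSketch(List<String> registrations) {
        this.registrations = registrations;
    }

    DuplicateSketch duplicateShared() { // buggy: aliases the original's state
        return new DuplicateSketch(registrations);
    }

    DuplicateSketch duplicateDeep() {   // fixed: truly independent copy
        return new DuplicateSketch(new ArrayList<>(registrations));
    }

    public static void main(String[] args) {
        DuplicateSketch original =
                new DuplicateSketch(new ArrayList<>(List.of("serializerA")));
        DuplicateSketch shared = original.duplicateShared();
        DuplicateSketch deep = original.duplicateDeep();
        original.registrations.add("serializerB"); // mutate the original
        System.out.println(shared.registrations.size()); // mutation leaked in
        System.out.println(deep.registrations.size());   // unaffected
    }
}
```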



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5880: [FLINK-8836] Fix duplicate method in KryoSerialize...

2018-04-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5880


---


[jira] [Commented] (FLINK-8689) Add runtime support of distinct filter using MapView

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445938#comment-16445938
 ] 

ASF GitHub Bot commented on FLINK-8689:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/
  
Thanks, that'd be great!


> Add runtime support of distinct filter using MapView 
> -
>
> Key: FLINK-8689
> URL: https://issues.apache.org/jira/browse/FLINK-8689
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> This ticket should cover distinct aggregate function support in codegen for 
> *AggregateCall*, where the *isDistinct* field is set to true.
> This can be verified using the following SQL, which is not currently 
> producing correct results.
> {code:java}
> SELECT
>   a,
>   SUM(b) OVER (PARTITION BY a ORDER BY proctime ROWS BETWEEN 5 PRECEDING AND 
> CURRENT ROW)
> FROM
>   MyTable{code}
>  
>  
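As a hedged sketch of the distinct-filter mechanism (not the actual generated code), a MapView-like map can count occurrences per value so the wrapped aggregate sees each distinct value only once:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a distinct filter: a MapView-like map tracks how often each
// value was seen, and the wrapped SUM aggregate is only updated the first
// time a value appears. A real implementation would also decrement the
// count on retraction (not shown here).
public class DistinctFilterSketch {
    final Map<Integer, Integer> seen = new HashMap<>();
    long sum = 0; // the wrapped SUM aggregate

    void accumulate(int value) {
        // merge returns the post-update count; 1 means "first occurrence"
        if (seen.merge(value, 1, Integer::sum) == 1) {
            sum += value;
        }
    }

    public static void main(String[] args) {
        DistinctFilterSketch agg = new DistinctFilterSketch();
        for (int v : new int[] {5, 3, 5, 5, 3}) {
            agg.accumulate(v);
        }
        System.out.println(agg.sum); // SUM(DISTINCT b) over {5, 3, 5, 5, 3}
    }
}
```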



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5555: [FLINK-8689][table]Add runtime support of distinct filter...

2018-04-20 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/
  
Thanks, that'd be great!


---


[jira] [Updated] (FLINK-9229) Fix literal handling in code generation

2018-04-20 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-9229:

Affects Version/s: 1.4.2

> Fix literal handling in code generation
> ---
>
> Key: FLINK-9229
> URL: https://issues.apache.org/jira/browse/FLINK-9229
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.4.2
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>
> Information about which expressions are constant helps during code 
> generation, especially when moving often-reused parts of code into the member 
> area of a generated function. Right now this behavior is not consistent: even 
> methods in {{generateFieldAccess}} generate literals, but they are not 
> constant. This could lead to unintended behavior.
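The constant-hoisting idea can be sketched as follows; this is a hypothetical illustration of the distinction, not Flink's code generator:

```java
// Hypothetical sketch of the distinction the ticket is about: a code
// generator may hoist an expression into the member area only when it is
// a true constant; otherwise it must stay in the per-record path. The
// generate() helper and the emitted names (c0, r0) are illustrative only.
public class ConstantHoistSketch {
    static String generate(String expr, boolean isConstantLiteral) {
        if (isConstantLiteral) {
            // hoisted: evaluated once, reused for every record
            return "private final Object c0 = " + expr + ";";
        }
        // not constant (e.g. a field access): re-evaluated per record
        return "Object r0 = " + expr + ";";
    }

    public static void main(String[] args) {
        System.out.println(generate("\"UTC\"", true));
        System.out.println(generate("row.getField(3)", false));
    }
}
```

Marking a non-constant expression (such as a field access) as a literal would wrongly allow it to be hoisted, which is the inconsistency the ticket describes.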



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9229) Fix literal handling in code generation

2018-04-20 Thread Timo Walther (JIRA)
Timo Walther created FLINK-9229:
---

 Summary: Fix literal handling in code generation
 Key: FLINK-9229
 URL: https://issues.apache.org/jira/browse/FLINK-9229
 Project: Flink
  Issue Type: Bug
  Components: Table API  SQL
Reporter: Timo Walther
Assignee: Timo Walther


Information about which expressions are constant helps during code generation, 
especially when moving often-reused parts of code into the member area of a 
generated function. Right now this behavior is not consistent: even methods in 
{{generateFieldAccess}} generate literals, but they are not constant. This 
could lead to unintended behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8689) Add runtime support of distinct filter using MapView

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445864#comment-16445864
 ] 

ASF GitHub Bot commented on FLINK-8689:
---

Github user walterddr commented on the issue:

https://github.com/apache/flink/pull/
  
Thanks @fhueske, I had the exact same feeling. Just attaching `MapState` 
towards the back of the `Row` might be a current working solution for now, but 
will probably be nasty to maintain in the future. 

I am planning to go ahead and create an optimization ticket further down 
once I had FLINK-8690 completed. What do you think?


> Add runtime support of distinct filter using MapView 
> -
>
> Key: FLINK-8689
> URL: https://issues.apache.org/jira/browse/FLINK-8689
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> This ticket should cover distinct aggregate function support in codegen for 
> *AggregateCall*, where the *isDistinct* field is set to true.
> This can be verified using the following SQL, which is not currently 
> producing correct results.
> {code:java}
> SELECT
>   a,
>   SUM(b) OVER (PARTITION BY a ORDER BY proctime ROWS BETWEEN 5 PRECEDING AND 
> CURRENT ROW)
> FROM
>   MyTable{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5555: [FLINK-8689][table]Add runtime support of distinct filter...

2018-04-20 Thread walterddr
Github user walterddr commented on the issue:

https://github.com/apache/flink/pull/
  
Thanks @fhueske, I had the exact same feeling. Just attaching `MapState` 
towards the back of the `Row` might be a current working solution for now, but 
will probably be nasty to maintain in the future. 

I am planning to go ahead and create an optimization ticket further down 
once I had FLINK-8690 completed. What do you think?


---


[jira] [Updated] (FLINK-8970) Add more automated end-to-end tests

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-8970:

Fix Version/s: 1.6.0

> Add more automated end-to-end tests
> ---
>
> Key: FLINK-8970
> URL: https://issues.apache.org/jira/browse/FLINK-8970
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.6.0
>
>
> In order to improve Flink's test coverage and make releasing easier, we 
> should add more automated end-to-end tests which test Flink more like a user 
> would interact with the system. Additionally, these end-to-end tests should 
> test the integration of various other systems with Flink.
> With FLINK-6539, we added a new module {{flink-end-to-end-tests}} which 
> contains the set of currently available end-to-end tests.
> With FLINK-8911, a script was added to trigger these tests.
>  
> This issue is an umbrella issue collecting all different end-to-end tests 
> which we want to add to the Flink repository.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9100) Shadow/Hide password from configuration that is logged to log file

2018-04-20 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou reassigned FLINK-9100:
-

Assignee: Sihua Zhou

> Shadow/Hide password from configuration that is logged to log file
> --
>
> Key: FLINK-9100
> URL: https://issues.apache.org/jira/browse/FLINK-9100
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Szymon Szczypiński
>Assignee: Sihua Zhou
>Priority: Major
>
> I was thinking of adding a small improvement to Flink: a feature that hides 
> the value of any key containing the phrase "password", but only when the 
> value is logged to the log file.
> I want this for security reasons: if someone needs to monitor the log file, 
> the value of a password key would otherwise be visible in that monitoring.
> I want to change the classes "GlobalConfiguration" and "SecurityOptions". In 
> "GlobalConfiguration", the line
> {{LOG.info("Loading configuration property: {}, {}", key, value);}}
> would get a check: if the key contains the phrase "password", the value is 
> replaced by, for example, "***".
> The value would only be changed when a new key in "SecurityOptions" is set to 
> true; this new key indicates that passwords should be shadowed/hidden.
> What do you think about this improvement?
> This improvement is similar to FLINK-8793 for the REST component.
>   
>  
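The proposed masking can be sketched like this (the helper name and the "***" placeholder are illustrative, not the actual Flink change):

```java
// Sketch of the proposed masking (the helper name and the "***" placeholder
// are illustrative, not the actual Flink change): when logging a
// configuration entry, hide the value of any key containing "password".
public class MaskPasswordSketch {
    static String loggedValue(String key, String value) {
        if (key != null && key.toLowerCase().contains("password")) {
            return "***"; // never write secrets to the log file
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(loggedValue("s3.access-key.password", "hunter2"));
        System.out.println(loggedValue("taskmanager.heap.mb", "1024"));
    }
}
```

The opt-in flag from "SecurityOptions" would simply gate whether this substitution is applied before the {{LOG.info}} call.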



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8795) Scala shell broken for Flip6

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-8795:

Priority: Blocker  (was: Major)

> Scala shell broken for Flip6
> 
>
> Key: FLINK-8795
> URL: https://issues.apache.org/jira/browse/FLINK-8795
> Project: Flink
>  Issue Type: Bug
>Reporter: kant kodali
>Priority: Blocker
> Fix For: 1.6.0
>
>
> I am trying to run the simple code below after building everything from 
> Flink's github master branch for various reasons. I get the exception below, 
> and I wonder: what runs on port 9065, and how do I fix this exception?
> I followed the instructions from the Flink master branch so I did the 
> following.
> {code:java}
> git clone https://github.com/apache/flink.git 
> cd flink
> mvn clean package -DskipTests 
> cd build-target
>  ./bin/start-scala-shell.sh local{code}
> {{And Here is the code I ran}}
> {code:java}
> val dataStream = senv.fromElements(1, 2, 3, 4)
> dataStream.countWindowAll(2).sum(0).print()
> senv.execute("My streaming program"){code}
> {{And I finally get this exception}}
> {code:java}
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph. at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$18(RestClusterClient.java:306)
>  at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>  at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>  at 
> org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$222(RestClient.java:196)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  at java.lang.Thread.run(Thread.java:745) Caused by: 
> java.util.concurrent.CompletionException: java.net.ConnectException: 
> Connection refused: localhost/127.0.0.1:9065 at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
>  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
>  at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943) 
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
>  ... 16 more Caused by: java.net.ConnectException: Connection refused: 
> localhost/127.0.0.1:9065 at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
> Method) at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at 
> org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8795) Scala shell broken for Flip6

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-8795:

Fix Version/s: (was: 1.5.0)
   1.6.0

> Scala shell broken for Flip6
> 
>
> Key: FLINK-8795
> URL: https://issues.apache.org/jira/browse/FLINK-8795
> Project: Flink
>  Issue Type: Bug
>Reporter: kant kodali
>Priority: Blocker
> Fix For: 1.6.0
>
>
> I am trying to run the simple code below after building everything from 
> Flink's github master branch for various reasons. I get the exception below, 
> and I wonder: what runs on port 9065, and how do I fix this exception?
> I followed the instructions from the Flink master branch so I did the 
> following.
> {code:java}
> git clone https://github.com/apache/flink.git 
> cd flink
> mvn clean package -DskipTests 
> cd build-target
>  ./bin/start-scala-shell.sh local{code}
> {{And Here is the code I ran}}
> {code:java}
> val dataStream = senv.fromElements(1, 2, 3, 4)
> dataStream.countWindowAll(2).sum(0).print()
> senv.execute("My streaming program"){code}
> {{And I finally get this exception}}
> {code:java}
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph. at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$18(RestClusterClient.java:306)
>  at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>  at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>  at 
> org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$222(RestClient.java:196)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  at java.lang.Thread.run(Thread.java:745) Caused by: 
> java.util.concurrent.CompletionException: java.net.ConnectException: 
> Connection refused: localhost/127.0.0.1:9065 at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
>  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
>  at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943) 
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
>  ... 16 more Caused by: java.net.ConnectException: Connection refused: 
> localhost/127.0.0.1:9065 at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
> Method) at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at 
> org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8970) Add more automated end-to-end tests

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-8970:

Priority: Blocker  (was: Critical)

> Add more automated end-to-end tests
> ---
>
> Key: FLINK-8970
> URL: https://issues.apache.org/jira/browse/FLINK-8970
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.6.0
>
>
> In order to improve Flink's test coverage and make releasing easier, we 
> should add more automated end-to-end tests which test Flink more like a user 
> would interact with the system. Additionally, these end-to-end tests should 
> test the integration of various other systems with Flink.
> With FLINK-6539, we added a new module {{flink-end-to-end-tests}} which 
> contains the set of currently available end-to-end tests.
> With FLINK-8911, a script was added to trigger these tests.
>  
> This issue is an umbrella issue collecting all different end-to-end tests 
> which we want to add to the Flink repository.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-5372) Fix RocksDBAsyncSnapshotTest.testCancelFullyAsyncCheckpoints()

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-5372:
---

Assignee: Stefan Richter

> Fix RocksDBAsyncSnapshotTest.testCancelFullyAsyncCheckpoints()
> --
>
> Key: FLINK-5372
> URL: https://issues.apache.org/jira/browse/FLINK-5372
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Stefan Richter
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.5.0, 1.4.3
>
>
> The test is currently {{@Ignored}}. We have to change 
> {{AsyncCheckpointOperator}} to make sure that we can run fully 
> asynchronously. Then, the test will still fail because the canceling 
> behaviour was changed in the meantime.
> {code}
> public static class AsyncCheckpointOperator
> extends AbstractStreamOperator<String>
> implements OneInputStreamOperator<String, String> {
> @Override
> public void open() throws Exception {
> super.open();
> // also get the state in open, this way we are sure that it was created
> // before we trigger the test checkpoint
> ValueState<String> state = getPartitionedState(
> VoidNamespace.INSTANCE,
> VoidNamespaceSerializer.INSTANCE,
> new ValueStateDescriptor<>("count",
> StringSerializer.INSTANCE, "hello"));
> }
> @Override
> public void processElement(StreamRecord<String> element) throws Exception {
> // we also don't care
> ValueState<String> state = getPartitionedState(
> VoidNamespace.INSTANCE,
> VoidNamespaceSerializer.INSTANCE,
> new ValueStateDescriptor<>("count",
> StringSerializer.INSTANCE, "hello"));
> state.update(element.getValue());
> }
> @Override
> public void snapshotState(StateSnapshotContext context) throws Exception {
> // do nothing so that we don't block
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9101) HAQueryableStateRocksDBBackendITCase failed on travis

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-9101:

Fix Version/s: (was: 1.5.0)

> HAQueryableStateRocksDBBackendITCase failed on travis
> -
>
> Key: FLINK-9101
> URL: https://issues.apache.org/jira/browse/FLINK-9101
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: test-stability
>
> The test deadlocks on travis.
> https://travis-ci.org/zentol/flink/jobs/358894950



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-8900) YARN FinalStatus always shows as KILLED with Flip-6

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-8900:
---

Assignee: Gary Yao  (was: Till Rohrmann)

> YARN FinalStatus always shows as KILLED with Flip-6
> ---
>
> Key: FLINK-8900
> URL: https://issues.apache.org/jira/browse/FLINK-8900
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Nico Kruber
>Assignee: Gary Yao
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Whenever I run a simple word count like this one on YARN with Flip-6 
> enabled,
> {code}
> ./bin/flink run -m yarn-cluster -yjm 768 -ytm 3072 -ys 2 -p 20 -c 
> org.apache.flink.streaming.examples.wordcount.WordCount 
> ./examples/streaming/WordCount.jar --input /usr/share/doc/rsync-3.0.6/COPYING
> {code}
> it will show up as {{KILLED}} in the {{State}} and {{FinalStatus}} columns 
> even though the program ran successfully like this one (irrespective of 
> FLINK-8899 occurring or not):
> {code}
> 2018-03-08 16:48:39,049 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- Job Streaming 
> WordCount (11a794d2f5dc2955d8015625ec300c20) switched from state RUNNING to 
> FINISHED.
> 2018-03-08 16:48:39,050 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Stopping 
> checkpoint coordinator for job 11a794d2f5dc2955d8015625ec300c20
> 2018-03-08 16:48:39,050 INFO  
> org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore  - 
> Shutting down
> 2018-03-08 16:48:39,078 INFO  
> org.apache.flink.runtime.dispatcher.StandaloneDispatcher  - Job 
> 11a794d2f5dc2955d8015625ec300c20 reached globally terminal state FINISHED.
> 2018-03-08 16:48:39,151 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager e58efd886429e8f080815ea74ddfa734 at the SlotManager.
> 2018-03-08 16:48:39,221 INFO  org.apache.flink.runtime.jobmaster.JobMaster
>   - Stopping the JobMaster for job Streaming 
> WordCount(11a794d2f5dc2955d8015625ec300c20).
> 2018-03-08 16:48:39,270 INFO  org.apache.flink.runtime.jobmaster.JobMaster
>   - Close ResourceManager connection 
> 43f725adaee14987d3ff99380701f52f: JobManager is shutting down..
> 2018-03-08 16:48:39,270 INFO  org.apache.flink.yarn.YarnResourceManager   
>   - Disconnect job manager 
> 0...@akka.tcp://fl...@ip-172-31-7-0.eu-west-1.compute.internal:34281/user/jobmanager_0
>  for job 11a794d2f5dc2955d8015625ec300c20 from the resource manager.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool  - Suspending 
> SlotPool.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.slotpool.SlotPool  - Stopping 
> SlotPool.
> 2018-03-08 16:48:39,349 INFO  
> org.apache.flink.runtime.jobmaster.JobManagerRunner   - 
> JobManagerRunner already shutdown.
> 2018-03-08 16:48:39,775 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager 4e1fb6c8f95685e24b6a4cb4b71ffb92 at the SlotManager.
> 2018-03-08 16:48:39,846 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager b5bce0bdfa7fbb0f4a0905cc3ee1c233 at the SlotManager.
> 2018-03-08 16:48:39,876 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - RECEIVED 
> SIGNAL 15: SIGTERM. Shutting down as requested.
> 2018-03-08 16:48:39,910 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager a35b0690fdc6ec38bbcbe18a965000fd at the SlotManager.
> 2018-03-08 16:48:39,942 INFO  
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager  - Register 
> TaskManager 5175cabe428bea19230ac056ff2a17bb at the SlotManager.
> 2018-03-08 16:48:39,974 INFO  org.apache.flink.runtime.blob.BlobServer
>   - Stopped BLOB server at 0.0.0.0:46511
> 2018-03-08 16:48:39,975 INFO  
> org.apache.flink.runtime.blob.TransientBlobCache  - Shutting down 
> BLOB cache
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9080) Flink Scheduler goes OOM, suspecting a memory leak

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-9080:
---

Assignee: Stefan Richter

> Flink Scheduler goes OOM, suspecting a memory leak
> --
>
> Key: FLINK-9080
> URL: https://issues.apache.org/jira/browse/FLINK-9080
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.4.0
>Reporter: Rohit Singh
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.5.0
>
> Attachments: Top Level packages.JPG, Top level classes.JPG, 
> classesloaded vs unloaded.png
>
>
> Running Flink 1.4.0 on Mesos, with the scheduler running along with the job 
> manager in a single container, whereas the task managers run in separate 
> containers.
> A couple of jobs were running continuously, and the Flink scheduler was 
> working properly along with the task managers. Due to a change in the data, 
> one of the jobs started failing continuously. In the meantime, the Flink 
> scheduler's memory usage surged and it eventually died of an OOM error.
>  
> Memory dump analysis was done, with the following findings (see the attached 
> images): !Top Level packages.JPG!!Top level classes.JPG!
>  * The majority of the top packages retaining heap pointed towards 
> Flinkuserclassloader, glassfish (Jersey library), and Finalizer classes (top 
> level packages image).
>  * The top-level classes belonged to Flinkuserclassloader (top level classes 
> image).
>  * Far fewer classes were unloaded than loaded, in spite of adding the JVM 
> options -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled (see the 
> attached classes-loaded-vs-unloaded graph); the scheduler was restarted 3 
> times.
>  * There were custom classes as well that were duplicated during subsequent 
> class uploads.
> All images of the heap dump are attached. Can you suggest some pointers on 
> how to overcome this issue?
>  
>  





[jira] [Updated] (FLINK-9194) Finished jobs are not archived to HistoryServer

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-9194:

Fix Version/s: (was: 1.5.0)
   1.6.0

> Finished jobs are not archived to HistoryServer
> ---
>
> Key: FLINK-9194
> URL: https://issues.apache.org/jira/browse/FLINK-9194
> Project: Flink
>  Issue Type: Bug
>  Components: History Server, JobManager
>Affects Versions: 1.5.0
> Environment: Flink 2af481a
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.6.0
>
>
> In flip6 mode, jobs are not archived to the HistoryServer. 





[jira] [Commented] (FLINK-9148) when deploying flink on kubernetes, the taskmanager report "java.net.UnknownHostException: flink-jobmanager: Temporary failure in name resolution"

2018-04-20 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445803#comment-16445803
 ] 

Aljoscha Krettek commented on FLINK-9148:
-

I moved this to 1.6.0 because we didn't hear any updates. Please move this back 
if there's new input.

> when deploying flink on kubernetes, the taskmanager report 
> "java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
> resolution"
> --
>
> Key: FLINK-9148
> URL: https://issues.apache.org/jira/browse/FLINK-9148
> Project: Flink
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.4.2
> Environment: kubernetes 1.9
> docker 1.4
> see 
> :https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html
>Reporter: You Chu
>Priority: Blocker
> Fix For: 1.6.0
>
>
> refer to 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html:
> I deployed Flink 1.4 on Kubernetes 1.9. The jobmanager container is OK, but 
> the taskmanager containers failed with the error:
> java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
> resolution
>  at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
>  at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
>  at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
>  at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
>  at java.net.InetAddress.getAllByName(InetAddress.java:1192)
>  at java.net.InetAddress.getAllByName(InetAddress.java:1126)
>  at java.net.InetAddress.getByName(InetAddress.java:1076)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:172)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:137)
>  at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:79)
>  at 
> org.apache.flink.runtime.taskmanager.TaskManager$.selectNetworkInterfaceAndRunTaskManager(TaskManager.scala:1681)
>  at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1592)
>  at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1590)
>  at java.security.AccessController.doPrivileged(Native Method)
>  
> I used the jobmanager-deployment.yaml and taskmanager-deployment.yaml from 
> the documentation. I know that in the Flink docker image, the environment 
> variable {{JOB_MANAGER_RPC_ADDRESS=flink-jobmanager}} is used to resolve the 
> jobmanager address; however, the flink task container can't resolve the 
> hostname flink-jobmanager.
> Can anyone help me to fix it? Do I need to set up DNS for the resolution?
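The hostname flink-jobmanager is normally resolved through a Kubernetes Service of that name, which the linked deployment guide defines alongside the two Deployments. A minimal sketch of such a Service follows, assuming the JobManager pods carry the labels `app: flink` and `component: jobmanager` (the label names and port numbers are taken from Flink's documented defaults; adjust them to match your actual deployment):

```yaml
# Hypothetical Service manifest; the selector labels must match the ones used
# in jobmanager-deployment.yaml. Port numbers follow Flink's defaults.
apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager   # this name becomes the DNS hostname in the cluster
spec:
  ports:
  - name: rpc
    port: 6123             # JobManager RPC port (jobmanager.rpc.port default)
  - name: blob
    port: 6124             # blob server
  - name: ui
    port: 8081             # web UI
  selector:
    app: flink
    component: jobmanager
```

With such a Service applied, TaskManager pods in the same namespace can resolve flink-jobmanager through the cluster's DNS (e.g. kube-dns), so no extra DNS setup should be needed.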





[jira] [Closed] (FLINK-9037) Test flake Kafka09ITCase#testCancelingEmptyTopic

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9037.
---
Resolution: Cannot Reproduce

Haven't seen this in a while, please reopen if there's new data.

> Test flake Kafka09ITCase#testCancelingEmptyTopic
> 
>
> Key: FLINK-9037
> URL: https://issues.apache.org/jira/browse/FLINK-9037
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stephan Ewen
>Priority: Blocker
>
> {code}
> Test 
> testCancelingEmptyTopic(org.apache.flink.streaming.connectors.kafka.Kafka09ITCase)
>  failed with:
> org.junit.runners.model.TestTimedOutException: test timed out after 6 
> milliseconds
> {code}
> Full log: https://api.travis-ci.org/v3/job/356044885/log.txt





[jira] [Closed] (FLINK-9101) HAQueryableStateRocksDBBackendITCase failed on travis

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9101.
---
Resolution: Cannot Reproduce

Haven't seen this for a while, please reopen if there's new data.

> HAQueryableStateRocksDBBackendITCase failed on travis
> -
>
> Key: FLINK-9101
> URL: https://issues.apache.org/jira/browse/FLINK-9101
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: test-stability
>
> The test deadlocks on travis.
> https://travis-ci.org/zentol/flink/jobs/358894950





[jira] [Assigned] (FLINK-9159) Sanity check default timeout values

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-9159:
---

Assignee: Fabian Hueske  (was: vinoyang)

> Sanity check default timeout values
> ---
>
> Key: FLINK-9159
> URL: https://issues.apache.org/jira/browse/FLINK-9159
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Fabian Hueske
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Check that the default timeout values for resource release are sanely chosen.





[jira] [Updated] (FLINK-9037) Test flake Kafka09ITCase#testCancelingEmptyTopic

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-9037:

Fix Version/s: (was: 1.5.0)

> Test flake Kafka09ITCase#testCancelingEmptyTopic
> 
>
> Key: FLINK-9037
> URL: https://issues.apache.org/jira/browse/FLINK-9037
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stephan Ewen
>Priority: Blocker
>
> {code}
> Test 
> testCancelingEmptyTopic(org.apache.flink.streaming.connectors.kafka.Kafka09ITCase)
>  failed with:
> org.junit.runners.model.TestTimedOutException: test timed out after 6 
> milliseconds
> {code}
> Full log: https://api.travis-ci.org/v3/job/356044885/log.txt





[jira] [Commented] (FLINK-9194) Finished jobs are not archived to HistoryServer

2018-04-20 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445807#comment-16445807
 ] 

Aljoscha Krettek commented on FLINK-9194:
-

Moved this to 1.6.0 to unblock the release, please discuss if this is urgent 
for you.

> Finished jobs are not archived to HistoryServer
> ---
>
> Key: FLINK-9194
> URL: https://issues.apache.org/jira/browse/FLINK-9194
> Project: Flink
>  Issue Type: Bug
>  Components: History Server, JobManager
>Affects Versions: 1.5.0
> Environment: Flink 2af481a
>Reporter: Gary Yao
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.6.0
>
>
> In flip6 mode, jobs are not archived to the HistoryServer. 





[jira] [Updated] (FLINK-9148) when deploying flink on kubernetes, the taskmanager report "java.net.UnknownHostException: flink-jobmanager: Temporary failure in name resolution"

2018-04-20 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-9148:

Fix Version/s: (was: 1.5.0)
   1.6.0

> when deploying flink on kubernetes, the taskmanager report 
> "java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
> resolution"
> --
>
> Key: FLINK-9148
> URL: https://issues.apache.org/jira/browse/FLINK-9148
> Project: Flink
>  Issue Type: Bug
>  Components: Docker
>Affects Versions: 1.4.2
> Environment: kubernetes 1.9
> docker 1.4
> see 
> :https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html
>Reporter: You Chu
>Priority: Blocker
> Fix For: 1.6.0
>
>
> refer to 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html:
> I deployed Flink 1.4 on Kubernetes 1.9. The jobmanager container is OK, but 
> the taskmanager containers failed with the error:
> java.net.UnknownHostException: flink-jobmanager: Temporary failure in name 
> resolution
>  at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
>  at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
>  at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
>  at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
>  at java.net.InetAddress.getAllByName(InetAddress.java:1192)
>  at java.net.InetAddress.getAllByName(InetAddress.java:1126)
>  at java.net.InetAddress.getByName(InetAddress.java:1076)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:172)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.getRpcUrl(AkkaRpcServiceUtils.java:137)
>  at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:79)
>  at 
> org.apache.flink.runtime.taskmanager.TaskManager$.selectNetworkInterfaceAndRunTaskManager(TaskManager.scala:1681)
>  at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1592)
>  at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anon$2.call(TaskManager.scala:1590)
>  at java.security.AccessController.doPrivileged(Native Method)
>  
> I used the jobmanager-deployment.yaml and taskmanager-deployment.yaml from 
> the documentation. I know that in the Flink docker image, the environment 
> variable {{JOB_MANAGER_RPC_ADDRESS=flink-jobmanager}} is used to resolve the 
> jobmanager address; however, the flink task container can't resolve the 
> hostname flink-jobmanager.
> Can anyone help me to fix it? Do I need to set up DNS for the resolution?





[GitHub] flink issue #5880: [FLINK-8836] Fix duplicate method in KryoSerializer to pe...

2018-04-20 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/5880
  
Thanks for the comments, will fix the names and then merge.


---


[jira] [Commented] (FLINK-8836) Duplicating a KryoSerializer does not duplicate registered default serializers

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445763#comment-16445763
 ] 

ASF GitHub Bot commented on FLINK-8836:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/5880
  
Thanks for the comments, will fix the names and then merge.


> Duplicating a KryoSerializer does not duplicate registered default serializers
> --
>
> Key: FLINK-8836
> URL: https://issues.apache.org/jira/browse/FLINK-8836
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The {{duplicate()}} method of the {{KryoSerializer}} is as following:
> {code:java}
> public KryoSerializer duplicate() {
>     return new KryoSerializer<>(this);
> }
> protected KryoSerializer(KryoSerializer toCopy) {
>     defaultSerializers = toCopy.defaultSerializers;
>     defaultSerializerClasses = toCopy.defaultSerializerClasses;
>     kryoRegistrations = toCopy.kryoRegistrations;
>     ...
> }
> {code}
> Shortly put, when duplicating a {{KryoSerializer}}, the 
> {{defaultSerializers}} serializer instances are directly provided to the new 
> {{KryoSerializer}} instance.
>  This means that those default serializers are shared across two different 
> {{KryoSerializer}} instances, so the result is not a correct duplicate.
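The sharing problem can be reproduced outside of Flink. Below is a minimal, self-contained sketch (the classes `StatefulSerializer` and `ShallowCopySerializer` are hypothetical stand-ins, not Flink's actual API) showing why handing the same map of stateful serializer instances to a "duplicate" does not produce a real duplicate:

```java
import java.util.HashMap;
import java.util.Map;

// A stand-in for a serializer with non-thread-safe internal state.
class StatefulSerializer {
    int callCount = 0;
    void serialize() { callCount++; }
}

class ShallowCopySerializer {
    final Map<String, StatefulSerializer> defaultSerializers;

    ShallowCopySerializer(Map<String, StatefulSerializer> m) {
        this.defaultSerializers = m;
    }

    // Mirrors the buggy pattern: the new instance reuses the same map
    // (and thus the same serializer objects) as the original.
    ShallowCopySerializer duplicate() {
        return new ShallowCopySerializer(this.defaultSerializers);
    }
}

public class DuplicateDemo {
    public static void main(String[] args) {
        Map<String, StatefulSerializer> defaults = new HashMap<>();
        defaults.put("foo", new StatefulSerializer());

        ShallowCopySerializer original = new ShallowCopySerializer(defaults);
        ShallowCopySerializer copy = original.duplicate();

        // Mutating state through the "duplicate"...
        copy.defaultSerializers.get("foo").serialize();

        // ...is observed by the original, because both share the instances.
        System.out.println(original.defaultSerializers.get("foo").callCount); // 1
    }
}
```

A correct `duplicate()` would deep-copy (or individually duplicate) each serializer instance instead of reusing the shared map.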





[jira] [Commented] (FLINK-8836) Duplicating a KryoSerializer does not duplicate registered default serializers

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445761#comment-16445761
 ] 

ASF GitHub Bot commented on FLINK-8836:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5880
  
Yes, +1 after that is fixed.


> Duplicating a KryoSerializer does not duplicate registered default serializers
> --
>
> Key: FLINK-8836
> URL: https://issues.apache.org/jira/browse/FLINK-8836
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The {{duplicate()}} method of the {{KryoSerializer}} is as following:
> {code:java}
> public KryoSerializer duplicate() {
>     return new KryoSerializer<>(this);
> }
> protected KryoSerializer(KryoSerializer toCopy) {
>     defaultSerializers = toCopy.defaultSerializers;
>     defaultSerializerClasses = toCopy.defaultSerializerClasses;
>     kryoRegistrations = toCopy.kryoRegistrations;
>     ...
> }
> {code}
> Shortly put, when duplicating a {{KryoSerializer}}, the 
> {{defaultSerializers}} serializer instances are directly provided to the new 
> {{KryoSerializer}} instance.
>  This means that those default serializers are shared across two different 
> {{KryoSerializer}} instances, so the result is not a correct duplicate.





[GitHub] flink issue #5880: [FLINK-8836] Fix duplicate method in KryoSerializer to pe...

2018-04-20 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5880
  
Yes, +1 after that is fixed.


---


[jira] [Created] (FLINK-9228) log details about task fail/task manager is shutting down

2018-04-20 Thread makeyang (JIRA)
makeyang created FLINK-9228:
---

 Summary: log details about task fail/task manager is shutting down
 Key: FLINK-9228
 URL: https://issues.apache.org/jira/browse/FLINK-9228
 Project: Flink
  Issue Type: Improvement
  Components: Logging
Affects Versions: 1.4.2
Reporter: makeyang
Assignee: makeyang
 Fix For: 1.4.3, 1.5.1


condition:

flink version:1.4.2

jdk version:1.8.0.20

linux version:3.10.0

problem description:

one of my task managers dropped out of the cluster; I checked its log and 
found the following: 
2018-04-19 22:34:47,441 INFO  org.apache.flink.runtime.taskmanager.Task         
            
- Attempting to fail task externally Process (115/120) 
(19d0b0ce1ef3b8023b37bdfda643ef44). 
2018-04-19 22:34:47,441 INFO  org.apache.flink.runtime.taskmanager.Task         
            
- Process (115/120) (19d0b0ce1ef3b8023b37bdfda643ef44) switched from RUNNING 
to FAILED. 
java.lang.Exception: TaskManager is shutting down. 
        at 
org.apache.flink.runtime.taskmanager.TaskManager.postStop(TaskManager.scala:220)
 
        at akka.actor.Actor$class.aroundPostStop(Actor.scala:515) 
        at 
org.apache.flink.runtime.taskmanager.TaskManager.aroundPostStop(TaskManager.scala:121)
 
        at 
akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
 
        at 
akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172) 
        at akka.actor.ActorCell.terminate(ActorCell.scala:374) 
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:467) 
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483) 
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282) 
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:260) 
        at akka.dispatch.Mailbox.run(Mailbox.scala:224) 
        at akka.dispatch.Mailbox.exec(Mailbox.scala:234) 
        at 
scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
        at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 
        at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
        at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

 

suggestion:
 # short term suggestion:
 ## log the reasons why the task failed: did it receive some event from the 
job manager, fail to connect to the job manager, or hit an operator exception? 
The more clarity the better.
 ## log the reasons why the task manager is shutting down: did it receive some 
event from the job manager, fail to connect to the job manager, or hit an 
operator exception that could not be recovered from?
 # long term suggestion:
 ## define the state machine of a Flink node clearly. If nothing happens, the 
node should stay in the state it was in; that is, if it is processing events 
and nothing happens, it should still be processing events. In other words, if 
its state changes from processing events to canceled, then some event happened.
 ## define clearly the events that can cause a node's state to change, such as 
user cancel, operator exception, heartbeat timeout, etc.
 ## log each state change, and the event that caused it, clearly in the logs
 ## show event details (time, node, event, state change, etc.) in the web UI
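The long-term suggestion can be sketched as follows (all names are hypothetical, not Flink's actual classes): an explicit state machine whose transitions are driven by well-defined events, with each transition and its cause logged together.

```java
import java.util.logging.Logger;

// Hypothetical illustration: make task state transitions explicit and always
// log the event that caused them.
enum TaskState { RUNNING, CANCELING, FAILED }

enum TaskEvent { USER_CANCEL, OPERATOR_EXCEPTION, HEARTBEAT_TIMEOUT }

class TaskStateMachine {
    private static final Logger LOG = Logger.getLogger("task");
    private TaskState state = TaskState.RUNNING;

    TaskState getState() { return state; }

    void onEvent(TaskEvent event) {
        TaskState old = state;
        switch (event) {
            case USER_CANCEL:
                state = TaskState.CANCELING;
                break;
            case OPERATOR_EXCEPTION:
            case HEARTBEAT_TIMEOUT:
                state = TaskState.FAILED;
                break;
        }
        // The log line carries both the cause and the transition, which is
        // exactly the information missing from the log excerpt above.
        LOG.info("Task switched from " + old + " to " + state
                + " because of event " + event);
    }
}

public class StateMachineDemo {
    public static void main(String[] args) {
        TaskStateMachine machine = new TaskStateMachine();
        machine.onEvent(TaskEvent.OPERATOR_EXCEPTION);
        System.out.println(machine.getState()); // FAILED
    }
}
```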





[jira] [Commented] (FLINK-8715) RocksDB does not propagate reconfiguration of serializer to the states

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445733#comment-16445733
 ] 

ASF GitHub Bot commented on FLINK-8715:
---

GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/5885

[FLINK-8715] Remove usage of StateDescriptor in state handles

## What is the purpose of the change

This PR is WIP, and is still lacking test coverage.
It is opened now to collect some feedback for a proposed solution for 
FLINK-8715.

Previously, reconfigured state serializers on restore were not properly 
forwarded to the state handles. In the past, the `StateDescriptor` served as 
the holder for the reconfigured serializer.
However, since 88ffad27, `StateDescriptor#getSerializer()` started handing 
out duplicates of the serializer, which caused reconfigured serializers to be a 
completely different copy than what the state handles were using.

This fix corrects this by explicitly forwarding the serializer to the 
instantiated state handles after the state is registered at the state backend. 
It also eliminates the use of `StateDescriptor`s internally in the state 
handles, so that the behaviour is independent of the 
`StateDescriptor#getSerializer()` method's implementation.

The alternative to this approach is to have an internal `setSerializer` 
method on the `StateDescriptor`, which should be used after state serializers 
are reconfigured on registration.
Then, that assures that handed out serializers by the descriptor are always 
reconfigured, as soon as the descriptor is registered at the backend.

## Brief change log

- Remove `StateDescriptor`s from heap / RocksDB state handle classes
- Forward the state serializer and any other necessary information provided by 
the state descriptor (e.g. default value, user functions, nested serializers, 
etc.) when instantiating state handles.

## Verifying this change

This fix still lacks test coverage.
It has been opened to collect feedback for the approach.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (**yes** / no / don't know)
  - The runtime per-record code paths (performance sensitive): (**yes** / 
no / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-8715

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5885.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5885


commit c092dd6518d9e6f47f4cfc797c18bedc8a89cc05
Author: Tzu-Li (Gordon) Tai 
Date:   2018-04-20T13:15:42Z

[FLINK-8715] Remove usage of StateDescriptor in state handles




> RocksDB does not propagate reconfiguration of serializer to the states
> --
>
> Key: FLINK-8715
> URL: https://issues.apache.org/jira/browse/FLINK-8715
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.2
>Reporter: Arvid Heise
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.5.0
>
>
> Any changes to the serializer done in #ensureCompatibility are lost during 
> the state creation.
> In particular, 
> [https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBValueState.java#L68]
>  always uses a fresh copy of the StateDescriptor.
> An easy fix is to pass the reconfigured serializer as an additional parameter 
> in 
> [https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java#L1681]
>  , which can be retrieved through the side-output of getColumnFamily
> {code:java}
> kvStateInformation.get(stateDesc.getName()).f1.getStateSerializer()
> {code}
> I encountered it in 1.3.2 but the code in the master seems unchanged (hence 

[GitHub] flink pull request #5885: [FLINK-8715] Remove usage of StateDescriptor in st...

2018-04-20 Thread tzulitai
GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/5885

[FLINK-8715] Remove usage of StateDescriptor in state handles

## What is the purpose of the change

This PR is WIP, and is still lacking test coverage.
It is opened now to collect some feedback for a proposed solution for 
FLINK-8715.

Previously, reconfigured state serializers on restore were not properly 
forwarded to the state handles. In the past, the `StateDescriptor` served as 
the holder for the reconfigured serializer.
However, since 88ffad27, `StateDescriptor#getSerializer()` started handing 
out duplicates of the serializer, which caused reconfigured serializers to be a 
completely different copy than what the state handles were using.

This fix corrects this by explicitly forwarding the serializer to the 
instantiated state handles after the state is registered at the state backend. 
It also eliminates the use of `StateDescriptor`s internally in the state 
handles, so that the behaviour is independent of the 
`StateDescriptor#getSerializer()` method's implementation.

The alternative to this approach is to have an internal `setSerializer` 
method on the `StateDescriptor`, which should be used after state serializers 
are reconfigured on registration.
Then, that assures that handed out serializers by the descriptor are always 
reconfigured, as soon as the descriptor is registered at the backend.

## Brief change log

- Remove `StateDescriptor`s from heap / RocksDB state handle classes
- Forward the state serializer and any other necessary information provided by 
the state descriptor (e.g. default value, user functions, nested serializers, 
etc.) when instantiating state handles.

## Verifying this change

This fix still lacks test coverage.
It has been opened to collect feedback for the approach.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (**yes** / no / don't know)
  - The runtime per-record code paths (performance sensitive): (**yes** / 
no / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink FLINK-8715

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5885.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5885


commit c092dd6518d9e6f47f4cfc797c18bedc8a89cc05
Author: Tzu-Li (Gordon) Tai 
Date:   2018-04-20T13:15:42Z

[FLINK-8715] Remove usage of StateDescriptor in state handles




---


[jira] [Commented] (FLINK-8690) Update logical rule set to generate FlinkLogicalAggregate explicitly allow distinct agg on DataStream

2018-04-20 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445682#comment-16445682
 ] 

Fabian Hueske commented on FLINK-8690:
--

Hi [~walterddr] and [~hequn8128],

Thanks for the good discussion!
I think there are two ways to solve this issue:

1. Separate logical optimization rule sets for batch and streaming as you did 
in your branch.
2. Move the {{AggregateExpandDistinctAggregatesRule.JOIN}} rule to the existing 
batch normalization rules, generalize {{FlinkLogicalAggregateConverter}} to 
accept {{DISTINCT}} aggregates, and guard for {{DISTINCT}} aggregations in the 
batch physical translation. 

Both approaches will work, but I think the second approach is a bit cleaner 
because it doesn't rely on cost pruning.

What do you think?


> Update logical rule set to generate FlinkLogicalAggregate explicitly allow 
> distinct agg on DataStream
> -
>
> Key: FLINK-8690
> URL: https://issues.apache.org/jira/browse/FLINK-8690
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> Currently, *FlinkLogicalAggregate / FlinkLogicalWindowAggregate* does not 
> allow distinct aggregate.
> We are proposing to reuse distinct aggregate codegen work designed for 
> *FlinkLogicalOverAggregate*, to support unbounded distinct aggregation on 
> datastream as well.





[jira] [Commented] (FLINK-8689) Add runtime support of distinct filter using MapView

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445656#comment-16445656
 ] 

ASF GitHub Bot commented on FLINK-8689:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/
  
Thanks for the pointer to FLINK-8690. I'll have a look and comment there.

Regarding the question of whether to directly embed `MapState` in the 
generated code and remove the `DistinctAccumulator` or not, I think we should 
keep it as it is for now.

We would have to add a field to the `Row` that stores the accumulators to 
be able to store the `HashMap` which are used in the non-statebacked case. 
Also, we would need to change the return type of functions that return partial 
aggregates to include the distinct maps. In the current approach, we have 
everything nicely encapsulated in each accumulator. We can optimize the 
solution later in a separate issue.
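For illustration, here is a minimal, self-contained sketch (a hypothetical class, not the actual `DistinctAccumulator`) of the encapsulation described above: each accumulator owns its own map of seen values, so the distinct state travels with the partial aggregate without changing the `Row` layout or the aggregate functions' return types.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a distinct-aware accumulator: the map of seen values
// is encapsulated inside the accumulator itself.
class DistinctSumAccumulator {
    // value -> number of times it has been added (the count would be needed
    // to support retraction)
    private final Map<Long, Integer> seen = new HashMap<>();
    private long sum = 0;

    void accumulate(long value) {
        int count = seen.merge(value, 1, Integer::sum);
        if (count == 1) {          // first time we see this value
            sum += value;          // only distinct values contribute
        }
    }

    long getValue() { return sum; }
}

public class DistinctDemo {
    public static void main(String[] args) {
        DistinctSumAccumulator acc = new DistinctSumAccumulator();
        acc.accumulate(3);
        acc.accumulate(3);  // duplicate, ignored by the sum
        acc.accumulate(5);
        System.out.println(acc.getValue()); // 8
    }
}
```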


> Add runtime support of distinct filter using MapView 
> -
>
> Key: FLINK-8689
> URL: https://issues.apache.org/jira/browse/FLINK-8689
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> This ticket should cover adding distinct aggregate function support to the 
> codegen for *AggregateCall*, where the *isDistinct* field is set to true.
> This can be verified using the following SQL, which does not currently 
> produce correct results.
> {code:java}
> SELECT
>   a,
>   SUM(b) OVER (PARTITION BY a ORDER BY proctime ROWS BETWEEN 5 PRECEDING AND 
> CURRENT ROW)
> FROM
>   MyTable{code}
>  
>  





[GitHub] flink issue #5555: [FLINK-8689][table]Add runtime support of distinct filter...

2018-04-20 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/
  
Thanks for the pointer to FLINK-8690. I'll have a look and comment there.

Regarding the question of whether to directly embed `MapState` in the 
generated code and remove the `DistinctAccumulator` or not, I think we should 
keep it as it is for now.

We would have to add a field to the `Row` that stores the accumulators to 
be able to store the `HashMap` which are used in the non-statebacked case. 
Also, would need to change the return type of functions that return partial 
aggregates to include the distinct maps. In the current approach, we have 
everything nicely encapsulated in each accumulator. We can optimize the 
solution later in a separate issue.


---


[GitHub] flink issue #5327: [FLINK-8428] [table] Implement stream-stream non-window l...

2018-04-20 Thread hequn8128
Github user hequn8128 commented on the issue:

https://github.com/apache/flink/pull/5327
  
@twalthr Hi, great to see your review and valuable suggestions. I will 
update my PR late next week (maybe next weekend). Thanks very much.


---


[jira] [Commented] (FLINK-8428) Implement stream-stream non-window left outer join

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445652#comment-16445652
 ] 

ASF GitHub Bot commented on FLINK-8428:
---

Github user hequn8128 commented on the issue:

https://github.com/apache/flink/pull/5327
  
@twalthr Hi, great to see your review and valuable suggestions. I will 
update my PR late next week (maybe next weekend). Thanks very much.


> Implement stream-stream non-window left outer join
> --
>
> Key: FLINK-8428
> URL: https://issues.apache.org/jira/browse/FLINK-8428
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> Implement stream-stream non-window left outer join for sql/table-api. A 
> simple design doc can be found 
> [here|https://docs.google.com/document/d/1u7bJHeEBP_hFhi8Jm4oT3FqQDOm2pJDqCtq1U1WMHDo/edit?usp=sharing]





[jira] [Commented] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-04-20 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445640#comment-16445640
 ] 

mingleizhang commented on FLINK-9185:
-

Okay, I can assign this issue to you. You can open a PR for it.

> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.
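The unboxing hazard quoted above can be reproduced in isolation. In the sketch below, a plain `java.util.function.BiFunction` stands in for the actual `approveFun` (an assumption on my side, not Flink's real type), to show both the crash and the null-safe check:

```java
import java.util.function.BiFunction;

public class NullUnboxing {
    public static void main(String[] args) {
        // A predicate that may return null instead of TRUE or FALSE.
        BiFunction<String, String, Boolean> approveFun = (a, b) -> null;

        // Unsafe: auto-unboxing the null Boolean throws a NullPointerException.
        boolean threw = false;
        try {
            if (approveFun.apply("reference", "alternative")) {
                System.out.println("approved");
            }
        } catch (NullPointerException e) {
            threw = true;
        }

        // Safe: Boolean.TRUE.equals(...) treats a null result as "not approved".
        boolean approved = Boolean.TRUE.equals(approveFun.apply("reference", "alternative"));

        System.out.println(threw);    // true
        System.out.println(approved); // false
    }
}
```

Wrapping the call in `Boolean.TRUE.equals(...)` is one way to make the null case explicit without an extra `if`.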





[jira] [Commented] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-04-20 Thread Stephen Jason (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445635#comment-16445635
 ] 

Stephen Jason commented on FLINK-9185:
--

Could you assign this issue to me, [~mingleizhang]?

> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.





[jira] [Commented] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445566#comment-16445566
 ] 

ASF GitHub Bot commented on FLINK-9227:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5884


> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release or to make sure that the tests pass.





[GitHub] flink pull request #5884: [FLINK-9227] [test] Add Bucketing e2e test script ...

2018-04-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5884


---


[jira] [Closed] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-9227.
---
Resolution: Fixed

Fixed in 1.6.0: 321039f783cd5ba0b2a936e6fac7cca3c542ef3b
Fixed in 1.5.0: f47c201f80bec5c95eadbbb885638af678d68cc2

> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release or to make sure that the tests pass.





[jira] [Commented] (FLINK-8286) Investigate Flink-Yarn-Kerberos integration for flip-6

2018-04-20 Thread Shuyi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445560#comment-16445560
 ] 

Shuyi Chen commented on FLINK-8286:
---

Hi [~till.rohrmann] and [~aljoscha], the context is that there was a regression 
in the Flink Kerberos YARN integration in 1.4, which is addressed in 
[FLINK-8275|https://issues.apache.org/jira/browse/FLINK-8275]. This task was 
created at that time to make sure that there is no regression in flip-6 as well. 
I'll take a look in the next few days.

Also, can you point me to some existing integration tests for flip6 deployment 
that actually run a streaming/batch job on a mini-YARN cluster? Thanks.

> Investigate Flink-Yarn-Kerberos integration for flip-6
> --
>
> Key: FLINK-8286
> URL: https://issues.apache.org/jira/browse/FLINK-8286
> Project: Flink
>  Issue Type: Task
>  Components: Security
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> We've found some issues with the Flink-Yarn-Kerberos integration in the 
> current deployment model, we will need to investigate and test it for flip-6 
> when it's ready.





[jira] [Commented] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445559#comment-16445559
 ] 

ASF GitHub Bot commented on FLINK-9227:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/5884
  
Thank you @zhangminglei. Will merge...


> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release or to make sure that the tests pass.





[GitHub] flink issue #5884: [FLINK-9227] [test] Add Bucketing e2e test script to run-...

2018-04-20 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/5884
  
Thank you @zhangminglei. Will merge...


---


[jira] [Commented] (FLINK-6926) Add support for MD5, SHA1 and SHA2

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445553#comment-16445553
 ] 

ASF GitHub Bot commented on FLINK-6926:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/5324
  
Thank you @genged. I will have a final pass over the code and merge this.


> Add support for MD5, SHA1 and SHA2
> --
>
> Key: FLINK-6926
> URL: https://issues.apache.org/jira/browse/FLINK-6926
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Michael Gendelman
>Priority: Major
> Fix For: 1.5.0
>
>
> MD5(str): Calculates an MD5 128-bit checksum for the string. The value is 
> returned as a string of 32 hexadecimal digits, or NULL if the argument was 
> NULL. The return value can, for example, be used as a hash key. See the notes 
> at the beginning of this section about storing hash values efficiently.
> The return value is a nonbinary string in the connection character set.
> * Example:
>  MD5('testing') -> 'ae2b1fca515949e5d54fb22b8ed95575'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_sha1]
> SHA1(str), SHA(str): Calculates an SHA-1 160-bit checksum for the string, as 
> described in RFC 3174 (Secure Hash Algorithm). The value is returned as a 
> string of 40 hexadecimal digits, or NULL if the argument was NULL. One of the 
> possible uses for this function is as a hash key. See the notes at the 
> beginning of this section about storing hash values efficiently. You can also 
> use SHA1() as a cryptographic function for storing passwords. SHA() is 
> synonymous with SHA1().
> The return value is a nonbinary string in the connection character set.
> * Example:
>   SHA1('abc') -> 'a9993e364706816aba3e25717850c26c9cd0d89d'
> SHA2(str, hash_length): Calculates the SHA-2 family of hash functions (SHA-224, 
> SHA-256, SHA-384, and SHA-512). The first argument is the cleartext string to 
> be hashed. The second argument indicates the desired bit length of the 
> result, which must have a value of 224, 256, 384, 512, or 0 (which is 
> equivalent to 256). If either argument is NULL or the hash length is not one 
> of the permitted values, the return value is NULL. Otherwise, the function 
> result is a hash value containing the desired number of bits. See the notes 
> at the beginning of this section about storing hash values efficiently.
> The return value is a nonbinary string in the connection character set.
> * Example:
> SHA2('abc', 224) -> '23097d223405d8228642a477bda255b32aadbce4bda0b3f7e36c9da7'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_sha2]
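For reference, the example values quoted above can be reproduced with the JDK's `MessageDigest`. This is only a sketch of the hashing semantics (NULL handling omitted), not the actual Table API codegen:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashFunctions {
    // Hex-encode a digest of the input, mirroring the MySQL-style semantics
    // described above: a nonbinary string of hexadecimal digits.
    static String hash(String algorithm, String input) {
        try {
            byte[] digest = MessageDigest.getInstance(algorithm)
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(hash("MD5", "testing"));
        // ae2b1fca515949e5d54fb22b8ed95575
        System.out.println(hash("SHA-1", "abc"));
        // a9993e364706816aba3e25717850c26c9cd0d89d
        System.out.println(hash("SHA-224", "abc"));
        // 23097d223405d8228642a477bda255b32aadbce4bda0b3f7e36c9da7
    }
}
```

The JDK standard algorithm names ("MD5", "SHA-1", "SHA-224", "SHA-256", "SHA-384", "SHA-512") cover all variants listed in the issue.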





[GitHub] flink issue #5324: [FLINK-6926] [table] Add support for SHA-224, SHA-384, SH...

2018-04-20 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/5324
  
Thank you @genged. I will have a final pass over the code and merge this.


---


[jira] [Commented] (FLINK-8836) Duplicating a KryoSerializer does not duplicate registered default serializers

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445550#comment-16445550
 ] 

ASF GitHub Bot commented on FLINK-8836:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/5880
  
@sihuazhou I think it can and should also go into 1.4.

@aljoscha is that a +1 once I have fixed the method names?


> Duplicating a KryoSerializer does not duplicate registered default serializers
> --
>
> Key: FLINK-8836
> URL: https://issues.apache.org/jira/browse/FLINK-8836
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The {{duplicate()}} method of the {{KryoSerializer}} is as following:
> {code:java}
> public KryoSerializer duplicate() {
>     return new KryoSerializer<>(this);
> }
> protected KryoSerializer(KryoSerializer toCopy) {
>     defaultSerializers = toCopy.defaultSerializers;
>     defaultSerializerClasses = toCopy.defaultSerializerClasses;
>     kryoRegistrations = toCopy.kryoRegistrations;
>     ...
> }
> {code}
> Shortly put, when duplicating a {{KryoSerializer}}, the 
> {{defaultSerializers}} serializer instances are directly provided to the new 
> {{KryoSerializer}} instance.
>  As a result, those default serializers are shared across two different 
> {{KryoSerializer}} instances, and the copy is therefore not a correct duplicate.
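The sharing problem can be illustrated with a minimal sketch. The classes below are hypothetical stand-ins, not Flink's actual `KryoSerializer` internals:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of why a duplicate() that hands over its sub-serializer
// instances is not a true duplicate.
public class DuplicateSharing {

    // Stands in for a stateful default serializer; per-instance mutable state
    // makes sharing across duplicates (and threads) unsafe.
    static class StatefulSerializer {
        int scratchBuffer;
    }

    static class Holder {
        final List<StatefulSerializer> defaultSerializers;

        Holder(List<StatefulSerializer> serializers) {
            this.defaultSerializers = serializers;
        }

        // Broken duplicate: the new Holder shares the same serializer instances.
        Holder shallowDuplicate() {
            return new Holder(defaultSerializers);
        }

        // Correct duplicate: every sub-serializer is duplicated as well.
        Holder deepDuplicate() {
            List<StatefulSerializer> copies = new ArrayList<>();
            for (StatefulSerializer ignored : defaultSerializers) {
                copies.add(new StatefulSerializer());
            }
            return new Holder(copies);
        }
    }

    public static void main(String[] args) {
        List<StatefulSerializer> serializers = new ArrayList<>();
        serializers.add(new StatefulSerializer());
        Holder original = new Holder(serializers);

        // The shallow copy aliases the original's serializer...
        System.out.println(original.shallowDuplicate().defaultSerializers.get(0)
                == original.defaultSerializers.get(0)); // true
        // ...while the deep copy holds an independent instance.
        System.out.println(original.deepDuplicate().defaultSerializers.get(0)
                == original.defaultSerializers.get(0)); // false
    }
}
```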





[GitHub] flink issue #5880: [FLINK-8836] Fix duplicate method in KryoSerializer to pe...

2018-04-20 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/5880
  
@sihuazhou I think it can and should also go into 1.4.

@aljoscha is that a +1 once I have fixed the method names?


---


[jira] [Assigned] (FLINK-9202) AvroSerializer should not be serializing the target Avro type class

2018-04-20 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-9202:
---

Assignee: (was: Timo Walther)

> AvroSerializer should not be serializing the target Avro type class
> ---
>
> Key: FLINK-9202
> URL: https://issues.apache.org/jira/browse/FLINK-9202
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Critical
>
> The {{AvroSerializer}} contains this field which is written when the 
> serializer is written into savepoints:
> [https://github.com/apache/flink/blob/be7c89596a3b9cd8805a90aaf32336ec2759a1f7/flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSerializer.java#L78]
> This causes Avro schema evolution to not work properly, because Avro 
> generated classes have non-fixed serialVersionUIDs. Once a new Avro class is 
> generated with a new schema, that class can not be loaded on restore due to 
> incompatible UIDs, and thus the serializer can not be successfully 
> deserialized.
> A possible solution would be to only write the classname, and dynamically 
> load the class into a transient field.
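The proposed "write only the classname, load lazily" idea could look roughly like the sketch below. All names here are hypothetical and only illustrate the mechanism, not Flink's actual `AvroSerializer` code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Persist only the class name and re-resolve the Class in a transient field,
// so a changed serialVersionUID of the generated Avro class cannot break
// deserialization of the serializer itself.
public class LazyClassRef implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String typeName;   // this is what ends up in the savepoint
    private transient Class<?> type; // re-resolved after deserialization

    public LazyClassRef(Class<?> type) {
        this.type = type;
        this.typeName = type.getName();
    }

    public Class<?> getType() throws ClassNotFoundException {
        if (type == null) {
            type = Class.forName(typeName); // dynamic load on first access
        }
        return type;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(new LazyClassRef(String.class));
        out.flush();

        LazyClassRef restored = (LazyClassRef) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
        System.out.println(restored.getType() == String.class); // true
    }
}
```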





[jira] [Commented] (FLINK-9202) AvroSerializer should not be serializing the target Avro type class

2018-04-20 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445538#comment-16445538
 ] 

Timo Walther commented on FLINK-9202:
-

This issue is not easy to fix, as it touches the problem of properly restoring 
serializers from serialization config snapshots. I think we won't change this 
for Flink 1.5 but rather as part of Flink 1.6, where schema/serializer evolution 
is a big item on the feature list.

I already prepared some test data and code that can be reused: 
https://github.com/twalthr/flink/tree/FLINK-9202

> AvroSerializer should not be serializing the target Avro type class
> ---
>
> Key: FLINK-9202
> URL: https://issues.apache.org/jira/browse/FLINK-9202
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Timo Walther
>Priority: Critical
>
> The {{AvroSerializer}} contains this field which is written when the 
> serializer is written into savepoints:
> [https://github.com/apache/flink/blob/be7c89596a3b9cd8805a90aaf32336ec2759a1f7/flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSerializer.java#L78]
> This causes Avro schema evolution to not work properly, because Avro 
> generated classes have non-fixed serialVersionUIDs. Once a new Avro class is 
> generated with a new schema, that class can not be loaded on restore due to 
> incompatible UIDs, and thus the serializer can not be successfully 
> deserialized.
> A possible solution would be to only write the classname, and dynamically 
> load the class into a transient field.





[jira] [Commented] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445521#comment-16445521
 ] 

ASF GitHub Bot commented on FLINK-9227:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5884
  
@twalthr Could you take a look? Thank you.


> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release or to make sure that the tests pass.





[GitHub] flink issue #5884: [FLINK-9227] [test] Add Bucketing e2e test script to run-...

2018-04-20 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5884
  
@twalthr Could you take a look? Thank you.


---


[jira] [Commented] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445498#comment-16445498
 ] 

ASF GitHub Bot commented on FLINK-9227:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/5884

[FLINK-9227] [test] Add Bucketing e2e test script to run-nightly-test…


## What is the purpose of the change
Add Bucketing e2e test script to run-nightly-tests.sh

## Brief change log
Add bucketingsink e2e test script to run-nightly-tests.sh

## Verifying this change
Run run-nightly-tests.sh manually.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-9227

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5884.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5884


commit a441d288dfc9cdfdc073240436cb3bddb418459c
Author: zhangminglei 
Date:   2018-04-20T08:42:41Z

[FLINK-9227] [test] Add Bucketing e2e test script to run-nightly-tests.sh




> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release or to make sure that the tests pass.





[GitHub] flink pull request #5884: [FLINK-9227] [test] Add Bucketing e2e test script ...

2018-04-20 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/5884

[FLINK-9227] [test] Add Bucketing e2e test script to run-nightly-test…


## What is the purpose of the change
Add Bucketing e2e test script to run-nightly-tests.sh

## Brief change log
Add bucketingsink e2e test script to run-nightly-tests.sh

## Verifying this change
Run run-nightly-tests.sh manually.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-9227

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5884.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5884


commit a441d288dfc9cdfdc073240436cb3bddb418459c
Author: zhangminglei 
Date:   2018-04-20T08:42:41Z

[FLINK-9227] [test] Add Bucketing e2e test script to run-nightly-tests.sh




---


[jira] [Updated] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9227:

Component/s: Tests

> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release or to make sure that the tests pass.





[jira] [Updated] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9227:

Fix Version/s: 1.5.0

> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release or to make sure that the tests pass.





[jira] [Created] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9227:
---

 Summary: Add Bucketing e2e test script to run-nightly-tests.sh
 Key: FLINK-9227
 URL: https://issues.apache.org/jira/browse/FLINK-9227
 Project: Flink
  Issue Type: Improvement
Reporter: mingleizhang
Assignee: mingleizhang


The {{test_streaming_bucketing.sh}} script is not yet added to 
{{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
release or to make sure that the tests pass.







[jira] [Created] (FLINK-9226) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9226:
---

 Summary: Add Bucketing e2e test script to run-nightly-tests.sh
 Key: FLINK-9226
 URL: https://issues.apache.org/jira/browse/FLINK-9226
 Project: Flink
  Issue Type: Improvement
Reporter: mingleizhang
Assignee: mingleizhang


The {{test_streaming_bucketing.sh}} script is not yet added to 
{{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
release or to make sure that the tests pass.






[jira] [Commented] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445487#comment-16445487
 ] 

mingleizhang commented on FLINK-9224:
-

Thanks [~twalthr]. :) 

> Add Bucketing e2e test script to run-pre-commit-tests.sh
> 
>
> Key: FLINK-9224
> URL: https://issues.apache.org/jira/browse/FLINK-9224
> Project: Flink
>  Issue Type: Improvement
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-pre-commit-tests.sh}}; the latter is executed by Travis to verify 
> whether the e2e tests pass. So, we should add it and have Travis execute it 
> for every git commit.





[jira] [Commented] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445474#comment-16445474
 ] 

Timo Walther commented on FLINK-9224:
-

[~mingleizhang] yes, good point. It should be added there. Feel free to open a 
PR for it :)

> Add Bucketing e2e test script to run-pre-commit-tests.sh
> 
>
> Key: FLINK-9224
> URL: https://issues.apache.org/jira/browse/FLINK-9224
> Project: Flink
>  Issue Type: Improvement
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-pre-commit-tests.sh}}; the latter is executed by Travis to verify 
> whether the e2e tests pass. So, we should add it and have Travis execute it 
> for every git commit.





[jira] [Commented] (FLINK-9168) Pulsar Sink Connector

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445410#comment-16445410
 ] 

ASF GitHub Bot commented on FLINK-9168:
---

Github user sijie commented on the issue:

https://github.com/apache/flink/pull/5845
  
@tzulitai thank you very much for your help. I just sent an email to 
dev@flink. Looking forward to feedback from the Flink community and to 
collaboration between the Flink and Pulsar communities.


> Pulsar Sink Connector
> -
>
> Key: FLINK-9168
> URL: https://issues.apache.org/jira/browse/FLINK-9168
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Affects Versions: 1.6.0
>Reporter: Zongyang Xiao
>Priority: Minor
> Fix For: 1.6.0
>
>
> Flink does not provide a sink connector for Pulsar.





[GitHub] flink issue #5845: [FLINK-9168][flink-connectors]Pulsar Sink connector

2018-04-20 Thread sijie
Github user sijie commented on the issue:

https://github.com/apache/flink/pull/5845
  
@tzulitai thank you very much for your help. I just sent an email to 
dev@flink. Looking forward to feedback from the Flink community and to 
collaboration between the Flink and Pulsar communities.


---


[jira] [Commented] (FLINK-3089) State API Should Support Data Expiration (State TTL)

2018-04-20 Thread Sihua Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445405#comment-16445405
 ] 

Sihua Zhou commented on FLINK-3089:
---

Hi [~phoenixjiangnan], thanks for the doc. I think TTL is a highly anticipated 
feature that many users are hungry for (we can infer this from the number of 
watchers). Instead of only having the discussion here, maybe it would be better 
to start a DISCUSS thread on the dev mailing list (this issue is so old that 
some people who care about it may not be watching it anymore).

> State API Should Support Data Expiration (State TTL)
> 
>
> Key: FLINK-3089
> URL: https://issues.apache.org/jira/browse/FLINK-3089
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API, State Backends, Checkpointing
>Reporter: Niels Basjes
>Assignee: Bowen Li
>Priority: Major
>
> In some use cases (web analytics) there is a need to have state per visitor 
> on a website (i.e. keyBy(sessionid) ).
> At some point the visitor simply leaves and no longer creates new events (so 
> a special 'end of session' event will not occur).
> The only way to determine that a visitor has left is by choosing a timeout, 
> like "After 30 minutes no events we consider the visitor 'gone'".
> Only after this (chosen) timeout has expired should we discard this state.
> In the Trigger part of Windows we can set a timer and close/discard this kind 
> of information. But that introduces the buffering effect of the window (which 
> in some scenarios is unwanted).
> What I would like is to be able to set a timeout on a specific state which I 
> can update afterwards.
> This makes it possible to create a map function that assigns the right value 
> and that discards the state automatically.
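The desired semantics can be sketched in plain Java. This is not a proposed Flink API, just an illustration of per-key state whose timeout is refreshed on every update and whose entries expire after inactivity:

```java
import java.util.HashMap;
import java.util.Map;

// Per-key state with a time-to-live: each update extends the deadline, and a
// read after the deadline discards the entry, mimicking a "visitor gone" timeout.
public class TtlState<K, V> {
    private static class Entry<V> {
        V value;
        long deadline;
    }

    private final Map<K, Entry<V>> state = new HashMap<>();
    private final long ttlMillis;

    public TtlState(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void update(K key, V value, long nowMillis) {
        Entry<V> e = state.computeIfAbsent(key, k -> new Entry<>());
        e.value = value;
        e.deadline = nowMillis + ttlMillis; // each update extends the timeout
    }

    public V get(K key, long nowMillis) {
        Entry<V> e = state.get(key);
        if (e == null) {
            return null;
        }
        if (nowMillis >= e.deadline) { // session considered 'gone'
            state.remove(key);
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        TtlState<String, Integer> visits = new TtlState<>(30 * 60 * 1000L); // 30 min
        visits.update("session-1", 1, 0L);
        System.out.println(visits.get("session-1", 10 * 60 * 1000L)); // 1
        System.out.println(visits.get("session-1", 41 * 60 * 1000L)); // null
    }
}
```

In Flink itself the expiry would be driven by timers or a state backend feature rather than by passing the clock explicitly; the explicit `nowMillis` parameter here just keeps the sketch deterministic.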





[GitHub] flink pull request #5883: [FLINK-9225][core] Minor code comments fix in Runt...

2018-04-20 Thread binlijin
GitHub user binlijin reopened a pull request:

https://github.com/apache/flink/pull/5883

[FLINK-9225][core] Minor code comments fix in RuntimeContext

*Thank you very much for contributing to Apache Flink - we are happy that 
you want to help us improve Flink. To help the community review your 
contribution in the best possible way, please go through the checklist below, 
which will get the contribution into a shape in which it can be best reviewed.*

*Please understand that we do not do this to make contributions to Flink a 
hassle. In order to uphold a high standard of quality for code contributions, 
while at the same time managing a large number of contributions, we need 
contributors to prepare the contributions well, and give reviewers enough 
contextual information for the review. Please also understand that 
contributions that do not follow this guide will take longer to review and thus 
typically be picked up with lower priority by the community.*

## Contribution Checklist

  - Make sure that the pull request corresponds to a [JIRA 
issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are 
made for typos in JavaDoc or documentation files, which need no JIRA issue.
  
  - Name the pull request in the form "[FLINK-XXXX] [component] Title of 
the pull request", where *FLINK-XXXX* should be replaced by the actual issue 
number. Skip *component* if you are unsure about which is the best component.
  Typo fixes that have no associated JIRA issue should be named following 
this pattern: `[hotfix] [docs] Fix typo in event time introduction` or 
`[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.

  - Fill out the template below to describe the changes contributed by the 
pull request. That will give reviewers the context they need to do the review.
  
  - Make sure that the change passes the automated tests, i.e., `mvn clean 
verify` passes. You can set up Travis CI to do that following [this 
guide](http://flink.apache.org/contribute-code.html#best-practices).

  - Each pull request should address only one issue, not mix up code from 
multiple issues.
  
  - Each commit in the pull request has a meaningful commit message 
(including the JIRA id)

  - Once all items of the checklist are addressed, remove the above text 
and this checklist, leaving only the filled out template below.


**(The sections below can be removed for hotfixes of typos)**

## What is the purpose of the change

*(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*


## Brief change log

*(for example:)*
  - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
  - *Deployments RPC transmits only the blob storage reference*
  - *TaskManagers retrieve the TaskInfo from the blob cache*


## Verifying this change

*(Please pick either of the following options)*

This change is a trivial rework / code cleanup without any test coverage.

*(or)*

This change is already covered by existing tests, such as *(please describe 
tests)*.

*(or)*

This change added tests and can be verified as follows:

*(example:)*
  - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
  - *Extended integration test for recovery after master (JobManager) 
failure*
  - *Added test that validates that TaskInfo is transferred only once 
across recoveries*
  - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
  - The serializers: (yes / no / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  - The S3 file system connector: (yes / no / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / no)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/binlijin/flink FLINK-9225

Alternatively you can review and apply these changes as the patch at:


[jira] [Commented] (FLINK-9225) Minor code comments fix in RuntimeContext

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445404#comment-16445404
 ] 

ASF GitHub Bot commented on FLINK-9225:
---

GitHub user binlijin reopened a pull request:

https://github.com/apache/flink/pull/5883

[FLINK-9225][core] Minor code comments fix in RuntimeContext


[GitHub] flink pull request #5883: [FLINK-9225][core] Minor code comments fix in Runt...

2018-04-20 Thread binlijin
Github user binlijin closed the pull request at:

https://github.com/apache/flink/pull/5883


---


[jira] [Commented] (FLINK-9225) Minor code comments fix in RuntimeContext

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445399#comment-16445399
 ] 

ASF GitHub Bot commented on FLINK-9225:
---

Github user binlijin closed the pull request at:

https://github.com/apache/flink/pull/5883


> Minor code comments fix in RuntimeContext
> -
>
> Key: FLINK-9225
> URL: https://issues.apache.org/jira/browse/FLINK-9225
> Project: Flink
>  Issue Type: Bug
>Reporter: binlijin
>Priority: Trivial
> Attachments: FLINK-9225.patch
>
>
> * {@code
> * DataStream<MyType> stream = ...;
> * KeyedStream<MyType> keyedStream = stream.keyBy("id");
> *
> * keyedStream.map(new RichMapFunction<MyType, Tuple2<MyType, Long>>() {
> *
> *     private ValueState<Long> count;
> *
> *     public void open(Configuration cfg) {
> *         state = getRuntimeContext().getState(
> *                 new ValueStateDescriptor<Long>("count", LongSerializer.INSTANCE, 0L));
> *     }
> *
> *     public Tuple2<MyType, Long> map(MyType value) {
> *         long count = state.value() + 1;
> *         state.update(value);
> *         return new Tuple2<>(value, count);
> *     }
> * });
> * }
>  
> "private ValueState<Long> count;" should be "private ValueState<Long> state;"
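The corrected naming can be illustrated with a minimal, Flink-free sketch; `LongState`, the `String` element type, and the map-entry result are stand-ins for this demo, not Flink API. Note that the Javadoc's `state.update(value)` also looks like a slip — the running count, not the input, is what should be stored:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map.Entry;

public class StateNamingFix {
    // Minimal stand-in for Flink's ValueState<Long>; NOT the real Flink API.
    static class LongState {
        private long value = 0L;
        long value() { return value; }
        void update(long v) { value = v; }
    }

    // The field is named 'state', matching its use in map() below --
    // the 'count' vs 'state' mismatch is exactly what FLINK-9225 corrects.
    private final LongState state = new LongState();

    Entry<String, Long> map(String value) {
        long count = state.value() + 1;
        state.update(count); // store the new count, not the input value
        return new SimpleEntry<>(value, count);
    }

    public static void main(String[] args) {
        StateNamingFix fn = new StateNamingFix();
        fn.map("a");
        System.out.println(fn.map("b")); // b=2
    }
}
```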





[jira] [Commented] (FLINK-9225) Minor code comments fix in RuntimeContext

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445396#comment-16445396
 ] 

ASF GitHub Bot commented on FLINK-9225:
---

GitHub user binlijin opened a pull request:

https://github.com/apache/flink/pull/5883

[FLINK-9225][core] Minor code comments fix in RuntimeContext


[GitHub] flink pull request #5883: [FLINK-9225][core] Minor code comments fix in Runt...

2018-04-20 Thread binlijin
GitHub user binlijin opened a pull request:

https://github.com/apache/flink/pull/5883

[FLINK-9225][core] Minor code comments fix in RuntimeContext



[jira] [Commented] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445393#comment-16445393
 ] 

mingleizhang commented on FLINK-9224:
-

[~twalthr] Should it be put into {{run-nightly-tests.sh}} instead?

> Add Bucketing e2e test script to run-pre-commit-tests.sh
> 
>
> Key: FLINK-9224
> URL: https://issues.apache.org/jira/browse/FLINK-9224
> Project: Flink
>  Issue Type: Improvement
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> The {{test_streaming_bucketing.sh}} script is not yet included in 
> {{run-pre-commit-tests.sh}}, which Travis executes to verify the e2e tests. 
> We should add it so that Travis runs it for every commit.





[jira] [Updated] (FLINK-9225) Minor code comments fix in RuntimeContext

2018-04-20 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated FLINK-9225:

Attachment: FLINK-9225.patch






[jira] [Updated] (FLINK-9225) Minor code comments fix in RuntimeContext

2018-04-20 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated FLINK-9225:

Attachment: (was: FLINK-9225.patch)






[jira] [Closed] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-9224.
---
Resolution: Won't Fix

[~mingleizhang] the pre-commit tests are meant for testing the basic 
functionality. They should contain only short running tests. The bucketing sink 
test kills task managers and might take up to 2 minutes. It does not qualify as 
a pre-commit test.






[jira] [Updated] (FLINK-9225) Minor code comments fix in RuntimeContext

2018-04-20 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated FLINK-9225:

Attachment: FLINK-9225.patch






[jira] [Created] (FLINK-9225) Minor code comments fix in RuntimeContext

2018-04-20 Thread binlijin (JIRA)
binlijin created FLINK-9225:
---

 Summary: Minor code comments fix in RuntimeContext
 Key: FLINK-9225
 URL: https://issues.apache.org/jira/browse/FLINK-9225
 Project: Flink
  Issue Type: Bug
Reporter: binlijin


* {@code
* DataStream<MyType> stream = ...;
* KeyedStream<MyType> keyedStream = stream.keyBy("id");
*
* keyedStream.map(new RichMapFunction<MyType, Tuple2<MyType, Long>>() {
*
*     private ValueState<Long> count;
*
*     public void open(Configuration cfg) {
*         state = getRuntimeContext().getState(
*                 new ValueStateDescriptor<Long>("count", LongSerializer.INSTANCE, 0L));
*     }
*
*     public Tuple2<MyType, Long> map(MyType value) {
*         long count = state.value() + 1;
*         state.update(value);
*         return new Tuple2<>(value, count);
*     }
* });
* }

"private ValueState<Long> count;" should be "private ValueState<Long> state;"





[jira] [Commented] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445389#comment-16445389
 ] 

mingleizhang commented on FLINK-9224:
-

Hi, [~twalthr] Could you take a look on this jira ? Thank you.






[jira] [Created] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9224:
---

 Summary: Add Bucketing e2e test script to run-pre-commit-tests.sh
 Key: FLINK-9224
 URL: https://issues.apache.org/jira/browse/FLINK-9224
 Project: Flink
  Issue Type: Improvement
Reporter: mingleizhang
Assignee: mingleizhang


The {{test_streaming_bucketing.sh}} script is not yet included in 
{{run-pre-commit-tests.sh}}, which Travis executes to verify the e2e tests. We 
should add it so that Travis runs it for every commit.





[jira] [Commented] (FLINK-3089) State API Should Support Data Expiration (State TTL)

2018-04-20 Thread Bowen Li (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445386#comment-16445386
 ] 

Bowen Li commented on FLINK-3089:
-

We came up with a completely new design. Take a look 
[here|https://docs.google.com/document/d/1PPwHMDHGWMOO8YGv08BNi8Fqe_h7SjeALRzmW-ZxSfY/edit?usp=sharing], 
feedback is welcome.

> State API Should Support Data Expiration (State TTL)
> 
>
> Key: FLINK-3089
> URL: https://issues.apache.org/jira/browse/FLINK-3089
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API, State Backends, Checkpointing
>Reporter: Niels Basjes
>Assignee: Bowen Li
>Priority: Major
>
> In some usecases (webanalytics) there is a need to have a state per visitor 
> on a website (i.e. keyBy(sessionid) ).
> At some point the visitor simply leaves and no longer creates new events (so 
> a special 'end of session' event will not occur).
> The only way to determine that a visitor has left is by choosing a timeout, 
> like "After 30 minutes no events we consider the visitor 'gone'".
> Only after this (chosen) timeout has expired should we discard this state.
> In the Trigger part of Windows we can set a timer and close/discard this kind 
> of information. But that introduces the buffering effect of the window (which 
> in some scenarios is unwanted).
> What I would like is to be able to set a timeout on a specific state which I 
> can update afterwards.
> This makes it possible to create a map function that assigns the right value 
> and that discards the state automatically.
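The per-state timeout described above can be sketched without Flink as a last-access map whose entries expire a chosen interval after their most recent update; every class and method name here is illustrative, not Flink API:

```java
import java.util.HashMap;
import java.util.Map;

public class TtlStateSketch {
    // Minimal TTL-state sketch: each key's value expires ttlMillis after its
    // last update, mimicking "visitor gone after 30 minutes of no events".
    static class TtlState<K, V> {
        private final long ttlMillis;
        private final Map<K, V> values = new HashMap<>();
        private final Map<K, Long> lastUpdate = new HashMap<>();

        TtlState(long ttlMillis) { this.ttlMillis = ttlMillis; }

        void update(K key, V value, long nowMillis) {
            values.put(key, value);
            lastUpdate.put(key, nowMillis); // every update refreshes the timeout
        }

        // Returns null if never set or expired; expired entries are removed lazily.
        V get(K key, long nowMillis) {
            Long ts = lastUpdate.get(key);
            if (ts == null || nowMillis - ts > ttlMillis) {
                values.remove(key);
                lastUpdate.remove(key);
                return null;
            }
            return values.get(key);
        }
    }

    public static void main(String[] args) {
        TtlState<String, Integer> visits = new TtlState<>(30 * 60 * 1000L); // 30 min
        visits.update("visitor-1", 5, 0L);
        System.out.println(visits.get("visitor-1", 60_000L));         // 5 (still live)
        System.out.println(visits.get("visitor-1", 31 * 60 * 1000L)); // null (expired)
    }
}
```

Unlike the window/trigger approach, nothing is buffered: reads see the current value until the timeout lapses, and each update pushes the expiry forward.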





[jira] [Updated] (FLINK-5758) Port-range for the web interface via YARN

2018-04-20 Thread Gary Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-5758:

Affects Version/s: 1.5.0

> Port-range for the web interface via YARN
> -
>
> Key: FLINK-5758
> URL: https://issues.apache.org/jira/browse/FLINK-5758
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Affects Versions: 1.2.0, 1.1.4, 1.3.0, 1.5.0
>Reporter: Kanstantsin Kamkou
>Assignee: Yelei Feng
>Priority: Major
>  Labels: network
>
> In case of YARN, the {{ConfigConstants.JOB_MANAGER_WEB_PORT_KEY}}   [is 
> changed to 
> 0|https://github.com/apache/flink/blob/release-1.2.0/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java#L526].
>  Please allow port ranges in this case. DevOps need that.
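For reference, later Flink versions address this class of request through the `rest.bind-port` option, which accepts a single port, a comma-separated list, or a range (shown as a flink-conf.yaml fragment; exact availability depends on the Flink version):

```yaml
# flink-conf.yaml -- let the web/REST endpoint bind a free port from a range
rest.bind-port: 50100-50200
```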





[jira] [Commented] (FLINK-9223) bufferConsumers should be closed in SpillableSubpartitionTest#testConsumeSpilledPartitionSpilledBeforeAdd

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445369#comment-16445369
 ] 

ASF GitHub Bot commented on FLINK-9223:
---

GitHub user yanghua opened a pull request:

https://github.com/apache/flink/pull/5882

[FLINK-9223] bufferConsumers should be closed in 
SpillableSubpartitionTest#testConsumeSpilledPartitionSpilledBeforeAdd

## What is the purpose of the change

*This pull request closes the buffer consumers in 
SpillableSubpartitionTest#testConsumeSpilledPartitionSpilledBeforeAdd after 
the operation on them has completed*


## Brief change log

  - *Call the close method after the operation on bufferConsumers is done*

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / **not documented**)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yanghua/flink FLINK-9223

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5882.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5882


commit 6c4608857b5450493c24c1b77df0e6665dfc2725
Author: yanghua 
Date:   2018-04-20T06:11:45Z

[FLINK-9223] bufferConsumers should be closed in 
SpillableSubpartitionTest#testConsumeSpilledPartitionSpilledBeforeAdd




> bufferConsumers should be closed in 
> SpillableSubpartitionTest#testConsumeSpilledPartitionSpilledBeforeAdd
> -
>
> Key: FLINK-9223
> URL: https://issues.apache.org/jira/browse/FLINK-9223
> Project: Flink
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Minor
>
> {code}
> BufferConsumer[] bufferConsumers = Arrays.stream(bufferBuilders).map(
>   BufferBuilder::createBufferConsumer
> ).toArray(BufferConsumer[]::new);
> {code}
> After the operation on bufferConsumers is done, the BufferConsumers in the array 
> should be closed.





[jira] [Commented] (FLINK-9223) bufferConsumers should be closed in SpillableSubpartitionTest#testConsumeSpilledPartitionSpilledBeforeAdd

2018-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445370#comment-16445370
 ] 

ASF GitHub Bot commented on FLINK-9223:
---

Github user yanghua commented on the issue:

https://github.com/apache/flink/pull/5882
  
cc @tillrohrmann 





