[jira] [Commented] (FLINK-8322) support getting number of existing timers in TimerService

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324977#comment-16324977
 ] 

ASF GitHub Bot commented on FLINK-8322:
---

Github user bowenli86 closed the pull request at:

https://github.com/apache/flink/pull/5236


> support getting number of existing timers in TimerService
> -
>
> Key: FLINK-8322
> URL: https://issues.apache.org/jira/browse/FLINK-8322
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0
>
>
> There are pretty common use cases where users want to use timers like 
> scheduled threads - e.g. register a timer that wakes up x hours later and 
> does something (usually reaping old data), but only if no timer already 
> exists; in other words, at most one timer should exist per key at any time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5236: [FLINK-8322] support getting number of existing ti...

2018-01-12 Thread bowenli86
Github user bowenli86 closed the pull request at:

https://github.com/apache/flink/pull/5236


---


[jira] [Resolved] (FLINK-7049) TestingApplicationMaster keeps running after integration tests finish

2018-01-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved FLINK-7049.
---
Resolution: Cannot Reproduce

> TestingApplicationMaster keeps running after integration tests finish
> -
>
> Key: FLINK-7049
> URL: https://issues.apache.org/jira/browse/FLINK-7049
> Project: Flink
>  Issue Type: Test
>  Components: Tests, YARN
>Reporter: Ted Yu
>Priority: Minor
> Attachments: testingApplicationMaster.stack
>
>
> After integration tests finish, TestingApplicationMaster is still running.
> Toward the end of 
> flink-yarn-tests/target/flink-yarn-tests-ha/flink-yarn-tests-ha-logDir-nm-1_0/application_1498768839874_0001/container_1498768839874_0001_03_01/jobmanager.log
>  :
> {code}
> 2017-06-29 22:09:49,681 INFO  org.apache.zookeeper.ClientCnxn 
>   - Opening socket connection to server 127.0.0.1/127.0.0.1:46165
> 2017-06-29 22:09:49,681 ERROR 
> org.apache.flink.shaded.org.apache.curator.ConnectionState- 
> Authentication failed
> 2017-06-29 22:09:49,682 WARN  org.apache.zookeeper.ClientCnxn 
>   - Session 0x0 for server null, unexpected error, closing socket 
> connection and attempting reconnect
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
> 2017-06-29 22:09:50,782 WARN  org.apache.zookeeper.ClientCnxn 
>   - SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/jaas-3597644653611245612.conf'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it.
> 2017-06-29 22:09:50,782 INFO  org.apache.zookeeper.ClientCnxn 
>   - Opening socket connection to server 127.0.0.1/127.0.0.1:46165
> 2017-06-29 22:09:50,782 ERROR 
> org.apache.flink.shaded.org.apache.curator.ConnectionState- 
> Authentication failed
> 2017-06-29 22:09:50,783 WARN  org.apache.zookeeper.ClientCnxn 
>   - Session 0x0 for server null, unexpected error, closing socket 
> connection and attempting reconnect
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
>   at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
> {code}





[jira] [Created] (FLINK-8430) Implement stream-stream non-window full outer join

2018-01-12 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-8430:
--

 Summary: Implement stream-stream non-window full outer join
 Key: FLINK-8430
 URL: https://issues.apache.org/jira/browse/FLINK-8430
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: Hequn Cheng
Assignee: Hequn Cheng








[jira] [Created] (FLINK-8429) Implement stream-stream non-window right outer join

2018-01-12 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-8429:
--

 Summary: Implement stream-stream non-window right outer join
 Key: FLINK-8429
 URL: https://issues.apache.org/jira/browse/FLINK-8429
 Project: Flink
  Issue Type: Sub-task
Reporter: Hequn Cheng
Assignee: Hequn Cheng








[jira] [Created] (FLINK-8428) Implement stream-stream non-window left outer join

2018-01-12 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-8428:
--

 Summary: Implement stream-stream non-window left outer join
 Key: FLINK-8428
 URL: https://issues.apache.org/jira/browse/FLINK-8428
 Project: Flink
  Issue Type: Sub-task
Reporter: Hequn Cheng
Assignee: Hequn Cheng








[jira] [Updated] (FLINK-5878) Add stream-stream non-window inner/outer join

2018-01-12 Thread Hequn Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-5878:
---
Summary: Add stream-stream non-window inner/outer join  (was: Add 
stream-stream inner/outer join)

> Add stream-stream non-window inner/outer join
> -
>
> Key: FLINK-5878
> URL: https://issues.apache.org/jira/browse/FLINK-5878
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Hequn Cheng
>
> This task is intended to support stream-stream inner/outer join on 
> tableAPI/SQL. A brief design doc about inner join is created: 
> https://docs.google.com/document/d/10oJDw-P9fImD5tc3Stwr7aGIhvem4l8uB2bjLzK72u4/edit
> We propose to use the mapState as the backend state interface for this "join" 
> operator, so this task requires FLINK-4856.
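The mapState-backed design can be illustrated with a single-JVM sketch: each input side buffers its rows per join key (standing in for Flink's {{MapState}}), and every arriving row probes the opposite side's buffer to emit matches. All names below are illustrative, not the actual operator code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a non-window stream-stream inner join: each side buffers its rows
// per join key (a stand-in for MapState), and every arriving row probes the
// opposite side's buffer to emit joined results.
public class NonWindowInnerJoinSketch {
    private final Map<String, List<String>> leftRowsByKey = new HashMap<>();
    private final Map<String, List<String>> rightRowsByKey = new HashMap<>();

    public List<String> processLeft(String key, String row) {
        leftRowsByKey.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
        return joinAgainst(rightRowsByKey, key, row, true);
    }

    public List<String> processRight(String key, String row) {
        rightRowsByKey.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
        return joinAgainst(leftRowsByKey, key, row, false);
    }

    private List<String> joinAgainst(Map<String, List<String>> otherSide,
                                     String key, String row, boolean rowIsLeft) {
        List<String> results = new ArrayList<>();
        for (String other : otherSide.getOrDefault(key, List.of())) {
            results.add(rowIsLeft ? row + "|" + other : other + "|" + row);
        }
        return results;
    }
}
```

The real operator additionally handles retractions and state retention, which is why the design doc ties this work to the MapState support in FLINK-4856.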





[jira] [Updated] (FLINK-6094) Implement stream-stream non-window inner join

2018-01-12 Thread Hequn Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-6094:
---
Description: 
This includes:
1.Implement stream-stream non-window  inner join

  was:
This includes:
1.Implement stream-stream proctime non-window  inner join
2.Implement the retract process logic for join


> Implement stream-stream non-window  inner join
> --
>
> Key: FLINK-6094
> URL: https://issues.apache.org/jira/browse/FLINK-6094
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Hequn Cheng
> Fix For: 1.5.0
>
>
> This includes:
> 1.Implement stream-stream non-window  inner join





[jira] [Updated] (FLINK-5878) Add stream-stream inner/outer join

2018-01-12 Thread Hequn Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-5878:
---
Description: 
This task is intended to support stream-stream inner/outer join on 
tableAPI/SQL. A brief design doc about inner join is created: 
https://docs.google.com/document/d/10oJDw-P9fImD5tc3Stwr7aGIhvem4l8uB2bjLzK72u4/edit

We propose to use the mapState as the backend state interface for this "join" 
operator, so this task requires FLINK-4856.

  was:
This task is intended to support stream-stream inner join on tableAPI. A brief 
design doc is created: 
https://docs.google.com/document/d/10oJDw-P9fImD5tc3Stwr7aGIhvem4l8uB2bjLzK72u4/edit

We propose to use the mapState as the backend state interface for this "join" 
operator, so this task requires FLINK-4856.


> Add stream-stream inner/outer join
> --
>
> Key: FLINK-5878
> URL: https://issues.apache.org/jira/browse/FLINK-5878
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Hequn Cheng
>
> This task is intended to support stream-stream inner/outer join on 
> tableAPI/SQL. A brief design doc about inner join is created: 
> https://docs.google.com/document/d/10oJDw-P9fImD5tc3Stwr7aGIhvem4l8uB2bjLzK72u4/edit
> We propose to use the mapState as the backend state interface for this "join" 
> operator, so this task requires FLINK-4856.





[jira] [Updated] (FLINK-6094) Implement stream-stream non-window inner join

2018-01-12 Thread Hequn Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-6094:
---
Summary: Implement stream-stream non-window  inner join  (was: Implement 
stream-stream proctime non-window  inner join)

> Implement stream-stream non-window  inner join
> --
>
> Key: FLINK-6094
> URL: https://issues.apache.org/jira/browse/FLINK-6094
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Hequn Cheng
> Fix For: 1.5.0
>
>
> This includes:
> 1.Implement stream-stream proctime non-window  inner join
> 2.Implement the retract process logic for join





[jira] [Updated] (FLINK-5878) Add stream-stream inner/outer join

2018-01-12 Thread Hequn Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-5878:
---
Summary: Add stream-stream inner/outer join  (was: Add stream-stream 
inner/left-out join)

> Add stream-stream inner/outer join
> --
>
> Key: FLINK-5878
> URL: https://issues.apache.org/jira/browse/FLINK-5878
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Hequn Cheng
>
> This task is intended to support stream-stream inner join on tableAPI. A 
> brief design doc is created: 
> https://docs.google.com/document/d/10oJDw-P9fImD5tc3Stwr7aGIhvem4l8uB2bjLzK72u4/edit
> We propose to use the mapState as the backend state interface for this "join" 
> operator, so this task requires FLINK-4856.





[jira] [Updated] (FLINK-6094) Implement stream-stream proctime non-window inner join

2018-01-12 Thread Hequn Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-6094:
---
Issue Type: Sub-task  (was: New Feature)
Parent: FLINK-5878

> Implement stream-stream proctime non-window  inner join
> ---
>
> Key: FLINK-6094
> URL: https://issues.apache.org/jira/browse/FLINK-6094
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Hequn Cheng
> Fix For: 1.5.0
>
>
> This includes:
> 1.Implement stream-stream proctime non-window  inner join
> 2.Implement the retract process logic for join





[jira] [Updated] (FLINK-8421) HeapInternalTimerService should reconfigure compatible key / namespace serializers on restore

2018-01-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-8421:
---
Description: 
The {{HeapInternalTimerService}} still uses simple {{equals}} checks on 
restored / newly provided serializers for compatibility checks. This should be 
replaced with the {{TypeSerializer::ensureCompatibility}} checks instead, so 
that new serializers can be reconfigured.

This would entail that the {{TypeSerializerConfiguration}} of the key and 
namespace serializer in the {{HeapInternalTimerService}} also needs to be 
written to the raw state.

For Flink 1.4.0 release and current master, this is a critical bug since the 
{{KryoSerializer}} has different default base registrations than before due to 
FLINK-7420; i.e., if the key of a window is serialized using the 
{{KryoSerializer}} in 1.3.x, the restore would never succeed in 1.4.0.

For 1.3.x, this fix would be an improvement, such that the 
{{HeapInternalTimerService}} restore will make use of serializer 
reconfiguration.

Other remarks:
* We need to double check all operators that checkpoint / restore from **raw** 
state. Apparently, the serializer compatibility checks were only implemented 
for managed state.
* Migration ITCases apparently do not have enough coverage. A migration test 
job that uses a key type which required the {{KryoSerializer}}, and uses 
windows, would have caught this issue.

  was:
The {{HeapInternalTimerService}} still uses simple {{equals}} checks on 
restored / newly provided serializers for compatibility checks. This should be 
replaced with the {{TypeSerializer::ensureCompatibility}} checks instead, so 
that new serializers can be reconfigured.

For Flink 1.4.0 release and current master, this is a critical bug since the 
{{KryoSerializer}} has different default base registrations than before due to 
FLINK-7420; i.e., if the key of a window is serialized using the 
{{KryoSerializer}} in 1.3.x, the restore would never succeed in 1.4.0.

For 1.3.x, this fix would be an improvement, such that the 
{{HeapInternalTimerService}} restore will make use of serializer 
reconfiguration.

Other remarks:
* We need to double check all operators that checkpoint / restore from **raw** 
state. Apparently, the serializer compatibility checks were only implemented 
for managed state.
* Migration ITCases apparently do not have enough coverage. A migration test 
job that uses a key type which required the {{KryoSerializer}}, and uses 
windows, would have caught this issue.


> HeapInternalTimerService should reconfigure compatible key / namespace 
> serializers on restore
> -
>
> Key: FLINK-8421
> URL: https://issues.apache.org/jira/browse/FLINK-8421
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.3.3, 1.5.0, 1.4.1
>
>
> The {{HeapInternalTimerService}} still uses simple {{equals}} checks on 
> restored / newly provided serializers for compatibility checks. This should 
> be replaced with the {{TypeSerializer::ensureCompatibility}} checks instead, 
> so that new serializers can be reconfigured.
> This would entail that the {{TypeSerializerConfiguration}} of the key and 
> namespace serializer in the {{HeapInternalTimerService}} also needs to be 
> written to the raw state.
> For Flink 1.4.0 release and current master, this is a critical bug since the 
> {{KryoSerializer}} has different default base registrations than before due 
> to FLINK-7420; i.e., if the key of a window is serialized using the 
> {{KryoSerializer}} in 1.3.x, the restore would never succeed in 1.4.0.
> For 1.3.x, this fix would be an improvement, such that the 
> {{HeapInternalTimerService}} restore will make use of serializer 
> reconfiguration.
> Other remarks:
> * We need to double check all operators that checkpoint / restore from 
> **raw** state. Apparently, the serializer compatibility checks were only 
> implemented for managed state.
> * Migration ITCases apparently do not have enough coverage. A migration test 
> job that uses a key type which required the {{KryoSerializer}}, and uses 
> windows, would have caught this issue.
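The difference between the two compatibility strategies can be sketched in plain Java. This is an illustrative model only, not Flink's actual {{TypeSerializer}} API: an equals-style check fails the moment default registrations change between releases, while a reconfiguration-style check merges the restored registrations so their original order (and thus their tags) is preserved:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of why an equals()-style check is too strict: two serializer versions
// with different default registrations are never equal, yet the newer one can
// be reconfigured from the restored config so old registrations keep their
// positions. Names are illustrative, not Flink's TypeSerializer API.
public class SerializerCompatSketch {
    public final List<String> registrations;

    public SerializerCompatSketch(List<String> registrations) {
        this.registrations = registrations;
    }

    /** equals-style check: breaks as soon as default registrations change between releases. */
    public boolean strictlyEquals(SerializerCompatSketch other) {
        return registrations.equals(other.registrations);
    }

    /** ensureCompatibility-style check: keep the restored registration order
     *  and append any registrations that are new in this version. */
    public SerializerCompatSketch reconfiguredFrom(List<String> restoredRegistrations) {
        List<String> merged = new ArrayList<>(restoredRegistrations);
        for (String r : registrations) {
            if (!merged.contains(r)) {
                merged.add(r);
            }
        }
        return new SerializerCompatSketch(merged);
    }
}
```

This is why the issue also requires writing the serializer configuration snapshots into the timer service's raw state: without the restored config, there is nothing to reconfigure from.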





[jira] [Closed] (FLINK-8340) Do not pass Configuration and configuration directory to CustomCommandLine methods

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-8340.

Resolution: Fixed

Fixed via 30011b9b110aad0e1c28e7e0a025b73986781a72

> Do not pass Configuration and configuration directory to CustomCommandLine 
> methods
> --
>
> Key: FLINK-8340
> URL: https://issues.apache.org/jira/browse/FLINK-8340
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Currently, all methods in {{CustomCommandLine}} need a {{Configuration}} and 
> sometimes the configuration directory. Since these values should not change 
> over the lifetime of the {{CustomCommandLine}} we should pass them as a 
> constructor argument instead of a method argument.
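The refactoring amounts to constructor injection of the lifetime-constant values. A hedged sketch, with illustrative names (the real {{CustomCommandLine}} deals with Flink's {{Configuration}} class, modeled here as a plain map):

```java
import java.util.Map;

// Sketch of the proposal: values that are fixed for the lifetime of the
// command line (the configuration and the configuration directory) move from
// method parameters to constructor arguments. Names are illustrative.
public class ConstructorInjectedCommandLine {
    private final Map<String, String> configuration;
    private final String configurationDirectory;

    public ConstructorInjectedCommandLine(Map<String, String> configuration,
                                          String configurationDirectory) {
        this.configuration = configuration;
        this.configurationDirectory = configurationDirectory;
    }

    // Before: jobManagerAddress(Configuration config, String configDir) on every call.
    // After: the immutable state is already available on the instance.
    public String jobManagerAddress() {
        return configuration.getOrDefault("jobmanager.rpc.address", "localhost");
    }

    public String configurationDirectory() {
        return configurationDirectory;
    }
}
```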





[jira] [Closed] (FLINK-8341) Remove unneeded CommandLineOptions

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-8341.

Resolution: Fixed

Fixed via d7e9dc1931f1f1cedbfee12aebe34dd76e9bac10

> Remove unneeded CommandLineOptions
> --
>
> Key: FLINK-8341
> URL: https://issues.apache.org/jira/browse/FLINK-8341
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> With the refactorings of the {{CliFrontend}} we no longer have to keep the 
> JobManager address and commandLine in the {{CommandLineOptions}}. Therefore, 
> these fields should be removed.





[jira] [Closed] (FLINK-8349) Remove Yarn specific commands from YarnClusterClient

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-8349.

Resolution: Fixed

Fixed via 402499f06a4b590ac47df64ecc01055c06b0399b

> Remove Yarn specific commands from YarnClusterClient
> 
>
> Key: FLINK-8349
> URL: https://issues.apache.org/jira/browse/FLINK-8349
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> The {{YarnClusterClient}} should no longer use Yarn specific commands. This 
> is necessary to make the {{FlinkYarnSessionCli}} work with other 
> {{ClusterClient}} implementations than the {{YarnClusterClient}}.





[jira] [Closed] (FLINK-8342) Remove ClusterClient generic type parameter from ClusterDescriptor

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-8342.

Resolution: Fixed

Fixed via 10e900b25ac03876d3f9e78f260d48efe6b9d853

> Remove ClusterClient generic type parameter from ClusterDescriptor
> --
>
> Key: FLINK-8342
> URL: https://issues.apache.org/jira/browse/FLINK-8342
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> The {{ClusterDescriptor}} should not specialize the returned 
> {{ClusterClient}} type, so that code can be written that works with all 
> {{ClusterDescriptors}} and {{ClusterClients}}. Therefore, I propose to remove 
> the generic type parameter from {{ClusterDescriptor}}.





[jira] [Closed] (FLINK-8347) Make Cluster id typesafe

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-8347.

Resolution: Fixed

Fixed via 38d3720863c6187153174d0df57fc414b0cf8e96

> Make Cluster id typesafe
> 
>
> Key: FLINK-8347
> URL: https://issues.apache.org/jira/browse/FLINK-8347
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Currently, the cluster id is of type {{String}}. We should make the id 
> typesafe to avoid mixups between different {{CustomCommandLines}} and 
> {{ClusterDescriptors}}.
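One way to express this in Java is a marker interface with one id type per deployment target, so a YARN id can never be passed where another cluster's id is expected. The class names below are illustrative, not Flink's actual ones:

```java
// Sketch of the proposal: replace the stringly-typed cluster id with one type
// per deployment target, so ids from different ClusterDescriptors cannot be
// mixed up. Names are illustrative, not Flink's actual classes.
public class TypedClusterIdSketch {
    public interface ClusterId { }

    /** Id of a YARN cluster, wrapping the YARN application id. */
    public static final class YarnClusterId implements ClusterId {
        public final String applicationId;
        public YarnClusterId(String applicationId) { this.applicationId = applicationId; }
    }

    /** A standalone cluster needs no distinguishing id, so a singleton marker suffices. */
    public static final class StandaloneClusterId implements ClusterId {
        public static final StandaloneClusterId INSTANCE = new StandaloneClusterId();
        private StandaloneClusterId() { }
    }

    /** A method typed against YarnClusterId can no longer receive an arbitrary String. */
    public static String yarnWebInterfaceUrl(YarnClusterId id) {
        return "http://rm/proxy/" + id.applicationId + "/";
    }
}
```

The mix-up the issue describes becomes a compile error instead of a runtime surprise: passing a standalone id (or a raw string) to `yarnWebInterfaceUrl` simply does not type-check.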





[jira] [Commented] (FLINK-8420) Timeout exceptions are not properly recognized by RetryingRegistration

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324647#comment-16324647
 ] 

ASF GitHub Bot commented on FLINK-8420:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5286


> Timeout exceptions are not properly recognized by RetryingRegistration
> --
>
> Key: FLINK-8420
> URL: https://issues.apache.org/jira/browse/FLINK-8420
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> The {{RetryingRegistration}} does not correctly recognize 
> {{TimeoutExceptions}} and instead treats them like errors. As a result, it 
> waits for the fixed error delay instead of backing off exponentially.
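The intended policy can be sketched as a small delay calculator: a {{TimeoutException}} doubles the back-off up to a cap, while any other failure resets the back-off and waits a fixed error delay. The class is illustrative, not Flink's actual {{RetryingRegistration}}:

```java
import java.util.concurrent.TimeoutException;

// Sketch of the intended retry policy: timeouts trigger exponential back-off
// (up to a cap); any other failure waits a fixed error delay and resets the
// back-off. Illustrative only, not Flink's RetryingRegistration.
public class RegistrationRetryPolicy {
    private final long initialBackoffMs;
    private final long maxBackoffMs;
    private final long errorDelayMs;
    private long currentBackoffMs;

    public RegistrationRetryPolicy(long initialBackoffMs, long maxBackoffMs, long errorDelayMs) {
        this.initialBackoffMs = initialBackoffMs;
        this.maxBackoffMs = maxBackoffMs;
        this.errorDelayMs = errorDelayMs;
        this.currentBackoffMs = initialBackoffMs;
    }

    /** Returns how long to wait before the next registration attempt. */
    public long nextDelayMs(Throwable failure) {
        if (failure instanceof TimeoutException) {
            long delay = currentBackoffMs;
            currentBackoffMs = Math.min(currentBackoffMs * 2, maxBackoffMs);
            return delay; // exponential back-off on timeouts
        }
        currentBackoffMs = initialBackoffMs; // reset after a real error
        return errorDelayMs;
    }
}
```

The bug in the issue corresponds to the timeout branch never being taken, so every failure falls through to the fixed `errorDelayMs`.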





[jira] [Resolved] (FLINK-8119) Cannot submit jobs to YARN Session in FLIP-6 mode

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-8119.
--
Resolution: Fixed

Fixed via dbe0e8286d76a5facdb49589b638b87dbde80178

> Cannot submit jobs to YARN Session in FLIP-6 mode
> -
>
> Key: FLINK-8119
> URL: https://issues.apache.org/jira/browse/FLINK-8119
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Gary Yao
>Assignee: Till Rohrmann
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Cannot submit jobs to YARN Session in FLIP-6 mode because 
> {{FlinkYarnSessionCli}} becomes the _active_ CLI (should be 
> {{Flip6DefaultCLI}}).
> *Steps to reproduce*
> # Build Flink 1.5 {{101fef7397128b0aba23221481ab86f62b18118f}}
> # {{bin/yarn-session.sh -flip6 -d -n 1 -s 1 -jm 1024 -tm 1024}}
> # {{bin/flink run -flip6 ./examples/streaming/WordCount.jar}}
> # Verify that the job will not run.





[jira] [Commented] (FLINK-8347) Make Cluster id typesafe

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324650#comment-16324650
 ] 

ASF GitHub Bot commented on FLINK-8347:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5232


> Make Cluster id typesafe
> 
>
> Key: FLINK-8347
> URL: https://issues.apache.org/jira/browse/FLINK-8347
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Currently, the cluster id is of type {{String}}. We should make the id 
> typesafe to avoid mixups between different {{CustomCommandLines}} and 
> {{ClusterDescriptors}}.





[jira] [Commented] (FLINK-8119) Cannot submit jobs to YARN Session in FLIP-6 mode

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324653#comment-16324653
 ] 

ASF GitHub Bot commented on FLINK-8119:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5234


> Cannot submit jobs to YARN Session in FLIP-6 mode
> -
>
> Key: FLINK-8119
> URL: https://issues.apache.org/jira/browse/FLINK-8119
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Gary Yao
>Assignee: Till Rohrmann
>Priority: Blocker
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Cannot submit jobs to YARN Session in FLIP-6 mode because 
> {{FlinkYarnSessionCli}} becomes the _active_ CLI (should be 
> {{Flip6DefaultCLI}}).
> *Steps to reproduce*
> # Build Flink 1.5 {{101fef7397128b0aba23221481ab86f62b18118f}}
> # {{bin/yarn-session.sh -flip6 -d -n 1 -s 1 -jm 1024 -tm 1024}}
> # {{bin/flink run -flip6 ./examples/streaming/WordCount.jar}}
> # Verify that the job will not run.





[jira] [Commented] (FLINK-8348) Print help for DefaultCLI

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324652#comment-16324652
 ] 

ASF GitHub Bot commented on FLINK-8348:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5233


> Print help for DefaultCLI
> -
>
> Key: FLINK-8348
> URL: https://issues.apache.org/jira/browse/FLINK-8348
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Print help for the {{DefaultCLI}} when calling {{flink -help}}, for example.





[jira] [Commented] (FLINK-8341) Remove unneeded CommandLineOptions

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324648#comment-16324648
 ] 

ASF GitHub Bot commented on FLINK-8341:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5227


> Remove unneeded CommandLineOptions
> --
>
> Key: FLINK-8341
> URL: https://issues.apache.org/jira/browse/FLINK-8341
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> With the refactorings of the {{CliFrontend}} we no longer have to keep the 
> JobManager address and commandLine in the {{CommandLineOptions}}. Therefore, 
> these fields should be removed.





[jira] [Commented] (FLINK-8349) Remove Yarn specific commands from YarnClusterClient

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324656#comment-16324656
 ] 

ASF GitHub Bot commented on FLINK-8349:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5229


> Remove Yarn specific commands from YarnClusterClient
> 
>
> Key: FLINK-8349
> URL: https://issues.apache.org/jira/browse/FLINK-8349
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> The {{YarnClusterClient}} should no longer use Yarn specific commands. This 
> is necessary to make the {{FlinkYarnSessionCli}} work with other 
> {{ClusterClient}} implementations than the {{YarnClusterClient}}.





[jira] [Commented] (FLINK-8342) Remove ClusterClient generic type parameter from ClusterDescriptor

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324649#comment-16324649
 ] 

ASF GitHub Bot commented on FLINK-8342:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5228


> Remove ClusterClient generic type parameter from ClusterDescriptor
> --
>
> Key: FLINK-8342
> URL: https://issues.apache.org/jira/browse/FLINK-8342
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> The {{ClusterDescriptor}} should not specialize the returned 
> {{ClusterClient}} type, so that code can be written that works with all 
> {{ClusterDescriptors}} and {{ClusterClients}}. Therefore, I propose to remove 
> the generic type parameter from {{ClusterDescriptor}}.





[jira] [Commented] (FLINK-8201) YarnResourceManagerTest causes license checking failure

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324654#comment-16324654
 ] 

ASF GitHub Bot commented on FLINK-8201:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5123


> YarnResourceManagerTest causes license checking failure
> ---
>
> Key: FLINK-8201
> URL: https://issues.apache.org/jira/browse/FLINK-8201
> Project: Flink
>  Issue Type: Bug
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
> Fix For: 1.5.0
>
>
> YarnResourceManagerTest generates a temporary taskmanager config file in the 
> flink-yarn module's root folder and never clears it, which makes license 
> checking fail when {{mvn clean verify}} is run multiple times in the same 
> source folder.
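A common fix pattern for this class of test pollution is to create such files in a real temp location and delete them in a finally block (or a JUnit {{@After}}/{{TemporaryFolder}}), so repeated builds leave nothing behind in the module root. The sketch below is illustrative, not the actual test:

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;

// Sketch of the cleanup pattern: write the temporary taskmanager config to the
// system temp directory and delete it in a finally block, so repeated builds
// leave no stray files in the module root. Illustrative, not the actual test.
public class TempConfigCleanup {
    /** Creates a temp config, runs (elided) test logic, and always deletes the file. */
    public static boolean runWithTempConfig() {
        try {
            File config = Files.createTempFile("taskmanager-conf-", ".yaml").toFile();
            try {
                // ... the test would point the TaskManager at config.getAbsolutePath() ...
                return config.exists();
            } finally {
                config.delete(); // clean up even if the test body throws
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```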





[jira] [Closed] (FLINK-8348) Print help for DefaultCLI

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-8348.

Resolution: Feedback Received

Fixed via 7d986ce0482562efd90fec416c4b2f4638a4058a

> Print help for DefaultCLI
> -
>
> Key: FLINK-8348
> URL: https://issues.apache.org/jira/browse/FLINK-8348
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Print help for the {{DefaultCLI}} when calling {{flink -help}}, for example.





[jira] [Commented] (FLINK-8340) Do not pass Configuration and configuration directory to CustomCommandLine methods

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324655#comment-16324655
 ] 

ASF GitHub Bot commented on FLINK-8340:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5226


> Do not pass Configuration and configuration directory to CustomCommandLine 
> methods
> --
>
> Key: FLINK-8340
> URL: https://issues.apache.org/jira/browse/FLINK-8340
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Currently, all methods in {{CustomCommandLine}} need a {{Configuration}} and 
> sometimes the configuration directory. Since these values should not change 
> over the lifetime of the {{CustomCommandLine}} we should pass them as a 
> constructor argument instead of a method argument.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
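The refactoring described in FLINK-8340 is a standard constructor-injection pattern: values that are fixed for the object's lifetime move from method parameters into constructor arguments. A minimal sketch of the pattern — the `Configuration` stub and the `describeCluster` method below are illustrative stand-ins, not Flink's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in; Flink's real Configuration class is much richer.
class Configuration {
    private final Map<String, String> values = new HashMap<>();
    void setString(String key, String value) { values.put(key, value); }
    String getString(String key, String fallback) { return values.getOrDefault(key, fallback); }
}

// Before the change, every method needed the Configuration and the config
// directory passed in. After it, the immutable context is captured once.
class CustomCommandLine {
    private final Configuration configuration;
    private final String configurationDirectory;

    CustomCommandLine(Configuration configuration, String configurationDirectory) {
        this.configuration = configuration;
        this.configurationDirectory = configurationDirectory;
    }

    // Methods no longer take Configuration / config-dir parameters.
    String describeCluster() {
        return configuration.getString("jobmanager.rpc.address", "localhost")
            + " (config dir: " + configurationDirectory + ")";
    }
}
```

Since the values cannot change over the object's lifetime, constructor injection also removes the risk of callers passing inconsistent configurations to different methods.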


[jira] [Commented] (FLINK-8288) Register the web interface url to yarn for yarn job mode

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324651#comment-16324651
 ] 

ASF GitHub Bot commented on FLINK-8288:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5186


> Register the web interface url to yarn for yarn job mode
> 
>
> Key: FLINK-8288
> URL: https://issues.apache.org/jira/browse/FLINK-8288
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> For flip-6 job mode, the resource manager is created before the web monitor, 
> so the web interface url is not set to resource manager, and the resource 
> manager can not register the url to yarn.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5227: [FLINK-8341] [flip6] Remove not needed options fro...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5227


---


[GitHub] flink pull request #5229: [FLINK-8349] [flip6] Remove Yarn specific commands...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5229


---


[GitHub] flink pull request #5233: [FLINK-8348] [flip6] Print help for DefaultCLI

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5233


---


[GitHub] flink pull request #5228: [FLINK-8342] [flip6] Remove generic type parameter...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5228


---


[GitHub] flink pull request #5226: [FLINK-8340] [flip6] Remove passing of Configurati...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5226


---


[jira] [Closed] (FLINK-8420) Timeout exceptions are not properly recognized by RetryingRegistration

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-8420.

Resolution: Fixed

Fixed via 3c99ae8959f69325bb9b7d810b41c60e42e602c5

> Timeout exceptions are not properly recognized by RetryingRegistration
> --
>
> Key: FLINK-8420
> URL: https://issues.apache.org/jira/browse/FLINK-8420
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> The {{RetryingRegistration}} does not correctly respond to 
> {{TimeoutExceptions}} and instead treats them like errors. As a result, it 
> waits for the fixed error delay instead of backing off exponentially.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
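The distinction the issue describes — a timeout should trigger exponential backoff, while a genuine error should wait a fixed retry delay — can be sketched in isolation. The class, method name, and delay values below are illustrative assumptions, not Flink's actual `RetryingRegistration` API:

```java
import java.util.concurrent.TimeoutException;

// Sketch of the retry-delay policy: timeouts back off exponentially,
// other failures use a fixed error delay. All values are illustrative.
class RetryDelayPolicy {
    static final long MAX_TIMEOUT_MS = 30_000;
    static final long ERROR_DELAY_MS = 10_000;

    /** Next delay after a failed attempt with the given cause. */
    static long nextDelay(Throwable cause, long currentTimeoutMs) {
        if (cause instanceof TimeoutException) {
            // Recognize timeouts and double the wait, capped at the maximum.
            return Math.min(currentTimeoutMs * 2, MAX_TIMEOUT_MS);
        }
        // Genuine errors wait a fixed delay before retrying.
        return ERROR_DELAY_MS;
    }
}
```

The bug was the first branch being missed: timeouts fell through to the fixed error delay instead of doubling.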


[jira] [Resolved] (FLINK-8288) Register the web interface url to yarn for yarn job mode

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-8288.
--
   Resolution: Fixed
Fix Version/s: 1.5.0

Fixed via 3b01686851e8281642924c4a620fb43b008de174

> Register the web interface url to yarn for yarn job mode
> 
>
> Key: FLINK-8288
> URL: https://issues.apache.org/jira/browse/FLINK-8288
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> For flip-6 job mode, the resource manager is created before the web monitor, 
> so the web interface url is not set to resource manager, and the resource 
> manager can not register the url to yarn.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5186: [FLINK-8288] [runtime] register job master rest en...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5186


---


[GitHub] flink pull request #5234: [FLINK-8119] [flip6] Wire correct Flip6 components...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5234


---


[GitHub] flink pull request #5286: [FLINK-8420] [flip6] Recognize TimeoutException in...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5286


---


[jira] [Resolved] (FLINK-8201) YarnResourceManagerTest causes license checking failure

2018-01-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-8201.
--
   Resolution: Fixed
Fix Version/s: 1.5.0

Fixed via d0bc300087b037b156425ccd509147177dd9529e

> YarnResourceManagerTest causes license checking failure
> ---
>
> Key: FLINK-8201
> URL: https://issues.apache.org/jira/browse/FLINK-8201
> Project: Flink
>  Issue Type: Bug
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
> Fix For: 1.5.0
>
>
> YarnResourceManagerTest generates a temporary taskmanager config file in 
> flink-yarn module 
>  root folder and never clear it, which makes license checking fail when we 
> run {{mvn clean verify}} multiple times in the same source folder.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5232: [FLINK-8347] [flip6] Make cluster id used by Clust...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5232


---


[GitHub] flink pull request #5123: [FLINK-8201] YarnResourceManagerTest causes licens...

2018-01-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5123


---


[jira] [Created] (FLINK-8427) Checkstyle for org.apache.flink.optimizer.costs

2018-01-12 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-8427:
-

 Summary: Checkstyle for org.apache.flink.optimizer.costs
 Key: FLINK-8427
 URL: https://issues.apache.org/jira/browse/FLINK-8427
 Project: Flink
  Issue Type: Improvement
  Components: Optimizer
Affects Versions: 1.5.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration

2018-01-12 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324542#comment-16324542
 ] 

Greg Hogan commented on FLINK-8414:
---

It is incumbent on the user to configure an appropriate parallelism for the 
quantity of data. Those graphs contain only a few tens of megabytes of data, so 
it is not surprising that the optimal parallelism is around (or even lower 
than) 16. You can use `VertexMetrics` to pre-compute the size of the graph and 
adjust the parallelism at runtime (`ExecutionConfig#setParallelism`). Flink and 
Gelly are designed to scale to 100s to 1000s of parallel tasks and GBs to TBs 
of data.
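The advice above — size the parallelism to the data rather than to the cluster maximum — can be captured in a small heuristic. The target-bytes-per-task figure below is an illustrative assumption, not a Flink or Gelly recommendation:

```java
// Illustrative heuristic: roughly one task per 128 MB of input, clamped to
// the cluster's available parallelism. The threshold is an assumption.
class ParallelismHeuristic {
    static final long TARGET_BYTES_PER_TASK = 128L * 1024 * 1024;

    static int chooseParallelism(long inputBytes, int maxParallelism) {
        // Ceiling division: at least one task, never more than the cluster offers.
        long tasks = (inputBytes + TARGET_BYTES_PER_TASK - 1) / TARGET_BYTES_PER_TASK;
        return (int) Math.max(1, Math.min(tasks, maxParallelism));
    }
}
```

At runtime, a value computed this way (for example from a pre-computed `VertexMetrics` result) would then be passed to `ExecutionConfig#setParallelism`, as the comment suggests.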

> Gelly performance seriously decreases when using the suggested parallelism 
> configuration
> 
>
> Key: FLINK-8414
> URL: https://issues.apache.org/jira/browse/FLINK-8414
> Project: Flink
>  Issue Type: Bug
>  Components: Configuration, Documentation, Gelly
>Reporter: flora karniav
>Priority: Minor
>
> I am running Gelly examples with different datasets in a cluster of 5 
> machines (1 Jobmanager and 4 Taskmanagers) of 32 cores each.
> The number of Slots parameter is set to 32 (as suggested) and the parallelism 
> to 128 (32 cores*4 taskmanagers).
> I observe a vast performance degradation with these suggested settings 
> compared to setting parallelism.default to 16, for example, where the same job 
> completes in ~60 seconds vs ~140 seconds in the 128-parallelism case.
> Is there something wrong in my configuration? Should I decrease parallelism 
> and, if so, will this inevitably decrease CPU utilization?
> Another matter that may be related to this is the number of partitions of the 
> data. Is this somehow related to parallelism? How many partitions are created 
> in the case of parallelism.default=128? 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-8426) Error in Generating Timestamp/Watermark doc

2018-01-12 Thread Christophe Jolif (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christophe Jolif updated FLINK-8426:

Description: 
In 
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_timestamps_watermarks.html


{{public class BoundedOutOfOrdernessGenerator extends 
AssignerWithPeriodicWatermarks}}

should be
{{public class BoundedOutOfOrdernessGenerator implements 
AssignerWithPeriodicWatermarks}}

  was:
In 
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_timestamps_watermarks.html

{{
public class BoundedOutOfOrdernessGenerator extends 
AssignerWithPeriodicWatermarks}}

should be
{{
public class BoundedOutOfOrdernessGenerator implements 
AssignerWithPeriodicWatermarks}}


> Error in Generating Timestamp/Watermark doc
> ---
>
> Key: FLINK-8426
> URL: https://issues.apache.org/jira/browse/FLINK-8426
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.4.0
>Reporter: Christophe Jolif
>Priority: Trivial
>
> In 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_timestamps_watermarks.html
> {{public class BoundedOutOfOrdernessGenerator extends 
> AssignerWithPeriodicWatermarks}}
> should be
> {{public class BoundedOutOfOrdernessGenerator implements 
> AssignerWithPeriodicWatermarks}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8426) Error in Generating Timestamp/Watermark doc

2018-01-12 Thread Christophe Jolif (JIRA)
Christophe Jolif created FLINK-8426:
---

 Summary: Error in Generating Timestamp/Watermark doc
 Key: FLINK-8426
 URL: https://issues.apache.org/jira/browse/FLINK-8426
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.4.0
Reporter: Christophe Jolif
Priority: Trivial


In 
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/event_timestamps_watermarks.html

{{
public class BoundedOutOfOrdernessGenerator extends 
AssignerWithPeriodicWatermarks}}

should be
{{
public class BoundedOutOfOrdernessGenerator implements 
AssignerWithPeriodicWatermarks}}
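The corrected declaration uses `implements` because `AssignerWithPeriodicWatermarks` is an interface, not a class. The bounded-out-of-orderness logic from the linked docs page can be sketched in a self-contained way; note the interface below is a simplified stand-in (Flink's real interface is generic over the element type and returns a `Watermark` object):

```java
// Simplified stand-in for Flink's AssignerWithPeriodicWatermarks interface.
interface PeriodicWatermarkAssigner<T> {
    long extractTimestamp(T element, long previousTimestamp);
    long getCurrentWatermark();
}

// "implements", not "extends": the assigner type is an interface.
class BoundedOutOfOrdernessGenerator implements PeriodicWatermarkAssigner<long[]> {
    private final long maxOutOfOrderness = 3500; // 3.5 seconds, as in the docs example
    private long currentMaxTimestamp = Long.MIN_VALUE + maxOutOfOrderness;

    @Override
    public long extractTimestamp(long[] element, long previousTimestamp) {
        long timestamp = element[0]; // assumption: the event carries its timestamp at index 0
        currentMaxTimestamp = Math.max(timestamp, currentMaxTimestamp);
        return timestamp;
    }

    @Override
    public long getCurrentWatermark() {
        // The watermark lags the highest seen timestamp by the allowed out-of-orderness.
        return currentMaxTimestamp - maxOutOfOrderness;
    }
}
```

A late element with a smaller timestamp does not move the watermark backwards, which is the property the generator exists to guarantee.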



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8422) Checkstyle for org.apache.flink.api.java.tuple

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324528#comment-16324528
 ] 

ASF GitHub Bot commented on FLINK-8422:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/5292

[FLINK-8422] [core] Checkstyle for org.apache.flink.api.java.tuple

## What is the purpose of the change

Update TupleGenerator for Flink's checkstyle and rebuild Tuple and 
TupleBuilder classes.

## Brief change log

`TupleGenerator` has been updated to write Flink-checkstyle compliant code.

The following non-generated files were manually updated: `Tuple`, `Tuple0`, 
`Tuple0Builder`, `Tuple2Test`

`org.apache.flink.api.java.tuple` was removed from the checkstyle 
suppressions for `flink-core`.

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage. 
Re-running `TupleGenerator` should replicate the newly updated files.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
8422_checkstyle_for_org_apache_flink_api_java_tuple

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5292


commit b2fa893e20ed09b3898c64f6821f2d177369f0a7
Author: Greg Hogan 
Date:   2018-01-12T16:35:15Z

[FLINK-8422] [core] Checkstyle for org.apache.flink.api.java.tuple

Update TupleGenerator for Flink's checkstyle and rebuild Tuple and
TupleBuilder classes.




> Checkstyle for org.apache.flink.api.java.tuple
> --
>
> Key: FLINK-8422
> URL: https://issues.apache.org/jira/browse/FLINK-8422
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.5.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
>
> Update {{TupleGenerator}} for Flink's checkstyle and rebuild {{Tuple}} and 
> {{TupleBuilder}} classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5292: [FLINK-8422] [core] Checkstyle for org.apache.flin...

2018-01-12 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/5292

[FLINK-8422] [core] Checkstyle for org.apache.flink.api.java.tuple

## What is the purpose of the change

Update TupleGenerator for Flink's checkstyle and rebuild Tuple and 
TupleBuilder classes.

## Brief change log

`TupleGenerator` has been updated to write Flink-checkstyle compliant code.

The following non-generated files were manually updated: `Tuple`, `Tuple0`, 
`Tuple0Builder`, `Tuple2Test`

`org.apache.flink.api.java.tuple` was removed from the checkstyle 
suppressions for `flink-core`.

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage. 
Re-running `TupleGenerator` should replicate the newly updated files.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
8422_checkstyle_for_org_apache_flink_api_java_tuple

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5292


commit b2fa893e20ed09b3898c64f6821f2d177369f0a7
Author: Greg Hogan 
Date:   2018-01-12T16:35:15Z

[FLINK-8422] [core] Checkstyle for org.apache.flink.api.java.tuple

Update TupleGenerator for Flink's checkstyle and rebuild Tuple and
TupleBuilder classes.




---


[jira] [Commented] (FLINK-8361) Remove create_release_files.sh

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324525#comment-16324525
 ] 

ASF GitHub Bot commented on FLINK-8361:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/5291

[FLINK-8361] [build] Remove create_release_files.sh

## What is the purpose of the change

The monolithic create_release_files.sh does not support building without 
Hadoop and has been superseded by the scripts in tools/releasing.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
8361_remove_create_release_files_sh

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5291.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5291


commit d9af86339ba592161419be25b6046ccfa4b47b2f
Author: Greg Hogan 
Date:   2018-01-12T15:44:13Z

[FLINK-8361] [build] Remove create_release_files.sh

The monolithic create_release_files.sh does not support building without
Hadoop and has been superseded by the scripts in tools/releasing.




> Remove create_release_files.sh
> --
>
> Key: FLINK-8361
> URL: https://issues.apache.org/jira/browse/FLINK-8361
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.5.0
>Reporter: Greg Hogan
>Priority: Trivial
>
> The monolithic {{create_release_files.sh}} does not support building Flink 
> without Hadoop and looks to have been superseded by the scripts in 
> {{tools/releasing}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5291: [FLINK-8361] [build] Remove create_release_files.s...

2018-01-12 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/5291

[FLINK-8361] [build] Remove create_release_files.sh

## What is the purpose of the change

The monolithic create_release_files.sh does not support building without 
Hadoop and has been superseded by the scripts in tools/releasing.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
8361_remove_create_release_files_sh

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5291.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5291


commit d9af86339ba592161419be25b6046ccfa4b47b2f
Author: Greg Hogan 
Date:   2018-01-12T15:44:13Z

[FLINK-8361] [build] Remove create_release_files.sh

The monolithic create_release_files.sh does not support building without
Hadoop and has been superseded by the scripts in tools/releasing.




---


[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324469#comment-16324469
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161248402
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultSubpartitionView.java
 ---
@@ -52,4 +52,9 @@
boolean isReleased();
 
Throwable getFailureCause();
+
+   /**
+* Returns whether the next buffer is event or not.
--- End diff --

`is an event`


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324471#comment-16324471
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161296711
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpilledSubpartitionView.java
 ---
@@ -73,6 +73,9 @@
/** Flag indicating whether all resources have been released. */
private AtomicBoolean isReleased = new AtomicBoolean();
 
+   /** The next buffer to hand out. */
+   private Buffer nextBuffer;
--- End diff --

We need to protect this against race conditions with respect to 
`releaseAllResources()` as well.

Actually, I'm surprised that nothing in this class is protected against 
concurrently releasing it. Although I have created a separate issue for this 
([FLINK-8425](https://issues.apache.org/jira/browse/FLINK-8425)), I think we 
may need to solve this here for the new `nextBuffer` anyway.


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324467#comment-16324467
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161295951
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpillableSubpartitionView.java
 ---
@@ -199,6 +199,19 @@ public boolean isReleased() {
}
}
 
+   @Override
+   public boolean nextBufferIsEvent() {
+   if (nextBuffer != null) {
+   return !nextBuffer.isBuffer();
+   }
+
+   if (spilledView != null) {
--- End diff --

`checkState(spilledView != null, "No in-memory buffers available, but also 
nothing spilled.");` just like in `getNextBuffer()`?


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324472#comment-16324472
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161249082
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpillableSubpartitionView.java
 ---
@@ -199,6 +199,19 @@ public boolean isReleased() {
}
}
 
+   @Override
+   public boolean nextBufferIsEvent() {
+   if (nextBuffer != null) {
+   return !nextBuffer.isBuffer();
+   }
--- End diff --

We probably need synchronization here to access `nextBuffer` and also check 
for `isReleased()` similar to `getNextBuffer`, since we are basically doing the 
same but without taking the buffer.

Why not integrate this into `getNextBuffer` and the `BufferAndBacklog` 
returned there? Inside that method, gathering this additional info is basically 
for free and we may thus speed up some code paths.


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324466#comment-16324466
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161295428
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpilledSubpartitionView.java
 ---
@@ -163,6 +190,15 @@ public boolean isReleased() {
return parent.isReleased() || isReleased.get();
}
 
+   @Override
+   public boolean nextBufferIsEvent() {
+   if (nextBuffer != null) {
+   return !nextBuffer.isBuffer();
+   }
+
+   return false;
--- End diff --

I'm afraid relying on `nextBuffer` won't be enough because it might not 
have been there during `getNextBuffer()` but may be there now.

Please add a test for this case.


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324465#comment-16324465
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161247589
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -77,6 +92,46 @@ void requestSubpartitionView(
}
}
 
+   /**
+* The credits from consumer are added in incremental way.
+*
+* @param creditDeltas The credit deltas
+*/
+   public void addCredit(int creditDeltas) {
+   numCreditsAvailable += creditDeltas;
+   }
+
+   /**
+* Updates the value to indicate whether the reader is enqueued in the 
pipeline or not.
+*
+* @param isRegisteredAvailable True if this reader is already enqueued 
in the pipeline.
+*/
+   public void notifyAvailabilityChanged(boolean isRegisteredAvailable) {
+   this.isRegisteredAvailable = isRegisteredAvailable;
+   }
+
+   public boolean isRegisteredAvailable() {
+   return isRegisteredAvailable;
+   }
+
+   /**
+* Check whether this reader is available or not.
+*
+* Return true only if the next buffer is event or the reader has 
both available
+* credits and buffers.
+*/
+   public boolean isAvailable() {
+   if (numBuffersAvailable.get() <= 0) {
+   return false;
+   }
+
+   if (subpartitionView.nextBufferIsEvent() || numCreditsAvailable 
> 0) {
--- End diff --

how about checking `numCreditsAvailable` first since that's the cheaper 
check?
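The suggestion is short-circuit ordering: evaluate the cheap plain-int credit check before the virtual call into the subpartition view. A standalone sketch of the availability predicate — the field names mirror the diff, but the class itself is a simplified stand-in, not the real reader:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

// Simplified stand-in for the reader under review.
class ReaderAvailability {
    private final AtomicInteger numBuffersAvailable = new AtomicInteger();
    private int numCreditsAvailable;
    private final BooleanSupplier nextBufferIsEvent; // stands in for the view lookup

    ReaderAvailability(BooleanSupplier nextBufferIsEvent) {
        this.nextBufferIsEvent = nextBufferIsEvent;
    }

    void addBuffers(int n) { numBuffersAvailable.addAndGet(n); }
    void addCredit(int creditDelta) { numCreditsAvailable += creditDelta; }

    /** Available iff there is a buffer and either credit, or the next buffer is an event. */
    boolean isAvailable() {
        if (numBuffersAvailable.get() <= 0) {
            return false;
        }
        // Cheap integer check first; the view lookup runs only when credit is exhausted.
        return numCreditsAvailable > 0 || nextBufferIsEvent.getAsBoolean();
    }
}
```

With `||` short-circuiting, the potentially expensive `nextBufferIsEvent()` call is skipped entirely whenever credit is available.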


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324473#comment-16324473
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161287607
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -77,6 +92,46 @@ void requestSubpartitionView(
}
}
 
+   /**
+* The credits from consumer are added in incremental way.
+*
+* @param creditDeltas The credit deltas
+*/
+   public void addCredit(int creditDeltas) {
+   numCreditsAvailable += creditDeltas;
+   }
+
+   /**
+* Updates the value to indicate whether the reader is enqueued in the 
pipeline or not.
+*
+* @param isRegisteredAvailable True if this reader is already enqueued 
in the pipeline.
+*/
+   public void notifyAvailabilityChanged(boolean isRegisteredAvailable) {
+   this.isRegisteredAvailable = isRegisteredAvailable;
+   }
+
+   public boolean isRegisteredAvailable() {
+   return isRegisteredAvailable;
+   }
+
+   /**
+* Check whether this reader is available or not.
+*
+* Return true only if the next buffer is event or the reader has 
both available
+* credits and buffers.
+*/
+   public boolean isAvailable() {
+   if (numBuffersAvailable.get() <= 0) {
+   return false;
+   }
+
+   if (subpartitionView.nextBufferIsEvent() || numCreditsAvailable 
> 0) {
--- End diff --

One more thing: in the special case of using it inside `getNextBuffer()`, 
we already retrieved the `remaining` number of buffers, so we can spare one 
lookup into the atomic integer here by passing that value in.


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.
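The sender-side mechanism described above can be sketched in plain Java (a minimal illustration with invented names such as `CreditBasedReader`; the real Flink classes differ, and the special case where events bypass the credit check is omitted):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of sender-side credit accounting, not the actual
// Flink classes. Events bypassing credits is omitted for brevity.
class CreditBasedReader {
    // Buffers may be added by a producer thread, hence atomic.
    private final AtomicInteger numBuffersAvailable = new AtomicInteger();
    // Credits are only touched by the channel's IO thread, hence a plain int.
    private int numCreditsAvailable;
    // Guards against enqueueing the reader into the handler more than once.
    private boolean registeredAsAvailable;

    void notifyBufferAdded() {
        numBuffersAvailable.incrementAndGet();
    }

    // Credits from the consumer arrive as deltas (PartitionRequest / AddCredit).
    void addCredit(int creditDelta) {
        numCreditsAvailable += creditDelta;
    }

    boolean isAvailable() {
        return numBuffersAvailable.get() > 0 && numCreditsAvailable > 0;
    }

    // Returns true exactly once when the reader transitions to "available",
    // i.e. when it should be enqueued for transferring data.
    boolean shouldEnqueue() {
        if (registeredAsAvailable || !isAvailable()) {
            return false;
        }
        registeredAsAvailable = true;
        return true;
    }
}
```

The `registeredAsAvailable` flag is what the description calls the "registered available for transfer" marker: without it, every `addCredit` or buffer notification could enqueue the same reader again.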





[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324470#comment-16324470
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161282684
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -77,6 +92,46 @@ void requestSubpartitionView(
}
}
 
+   /**
+* The credits from consumer are added in incremental way.
+*
+* @param creditDeltas The credit deltas
+*/
+   public void addCredit(int creditDeltas) {
+   numCreditsAvailable += creditDeltas;
+   }
+
+   /**
+* Updates the value to indicate whether the reader is enqueued in the 
pipeline or not.
+*
+* @param isRegisteredAvailable True if this reader is already enqueued 
in the pipeline.
+*/
+   public void notifyAvailabilityChanged(boolean isRegisteredAvailable) {
+   this.isRegisteredAvailable = isRegisteredAvailable;
+   }
+
+   public boolean isRegisteredAvailable() {
+   return isRegisteredAvailable;
+   }
+
+   /**
+* Check whether this reader is available or not.
+*
+* Return true only if the next buffer is event or the reader has 
both available
+* credits and buffers.
+*/
+   public boolean isAvailable() {
+   if (numBuffersAvailable.get() <= 0) {
+   return false;
+   }
+
+   if (subpartitionView.nextBufferIsEvent() || numCreditsAvailable > 0) {
--- End diff --

also, why not simply do the following for this whole method?
```
return numBuffersAvailable.get() > 0 &&
(numCreditsAvailable > 0 || 
subpartitionView.nextBufferIsEvent());
```


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.





[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324464#comment-16324464
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161241148
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -49,10 +50,24 @@
 
private volatile ResultSubpartitionView subpartitionView;
 
+   /**
+* The status indicating whether this reader is already enqueued in the 
pipeline for transferring
+* data or not. It is mainly used for avoid registering this reader to 
the pipeline repeatedly.
+*/
+   private boolean isRegisteredAvailable;
+
+   /** The number of available buffers for holding data on the consumer 
side. */
+   private int numCreditsAvailable;
--- End diff --

Just a note since I was wondering whether we need synchronization here (not 
needed after verifying the things below):

1) `numCreditsAvailable` is increased via 
`PartitionRequestServerHandler#channelRead0`, which is a separate channel 
handler from `PartitionRequestQueue` (see 
`NettyProtocol#getServerChannelHandlers`). According to [Netty's thread 
model](https://netty.io/wiki/new-and-noteworthy-in-4.0.html#wiki-h2-34), we 
should be safe though:

> A user can specify an EventExecutor when he or she adds a handler to a 
ChannelPipeline.
> - If specified, the handler methods of the ChannelHandler are always 
invoked by the specified EventExecutor.
> - If unspecified, the handler methods are always invoked by the EventLoop 
that its associated Channel is registered to.

2) `numCreditsAvailable` is read from 
`PartitionRequestQueue#enqueueAvailableReader()` and 
`SequenceNumberingViewReader#getNextBuffer()` which are both accessed by the 
channel's IO thread only.
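The thread-confinement argument can be illustrated outside Netty with a plain single-threaded executor standing in for the channel's EventLoop (a hedged sketch; the class and the mapping to the two handlers are invented for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustration only (not Netty): when every "handler" callback runs on the
// same single event-loop thread, a plain int field needs no synchronization.
class SingleThreadConfinement {
    private int numCreditsAvailable; // neither volatile nor atomic

    static int run() {
        SingleThreadConfinement reader = new SingleThreadConfinement();
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 1000; i++) {
            // stands in for the write in PartitionRequestServerHandler#channelRead0
            eventLoop.execute(() -> reader.numCreditsAvailable++);
            // stands in for a read in PartitionRequestQueue#enqueueAvailableReader
            eventLoop.execute(() -> { int unused = reader.numCreditsAvailable; });
        }
        eventLoop.shutdown();
        try {
            eventLoop.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return reader.numCreditsAvailable;
    }
}
```

All 1000 increments are observed because the tasks are executed sequentially by one thread, which is exactly the guarantee the Netty quote above describes for handlers added without an explicit `EventExecutor`.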


> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.





[jira] [Commented] (FLINK-7456) Implement Netty sender incoming pipeline for credit-based

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324468#comment-16324468
 ] 

ASF GitHub Bot commented on FLINK-7456:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161242036
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -49,10 +50,24 @@
 
private volatile ResultSubpartitionView subpartitionView;
 
+   /**
+* The status indicating whether this reader is already enqueued in the 
pipeline for transferring
+* data or not. It is mainly used for avoid registering this reader to 
the pipeline repeatedly.
+*/
+   private boolean isRegisteredAvailable;
--- End diff --



I know it's default-initialized to false but let's make this explicit



> Implement Netty sender incoming pipeline for credit-based
> -
>
> Key: FLINK-7456
> URL: https://issues.apache.org/jira/browse/FLINK-7456
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: zhijiang
>Assignee: zhijiang
> Fix For: 1.5.0
>
>
> This is a part of work for credit-based network flow control.
> On sender side, each subpartition view maintains an atomic integer 
> {{currentCredit}} from receiver. Once receiving the messages of 
> {{PartitionRequest}} and {{AddCredit}}, the {{currentCredit}} is added by 
> deltas.
> Each view also maintains an atomic boolean field to mark it as registered 
> available for transfer to make sure it is enqueued in handler only once. If 
> the {{currentCredit}} increases from zero and there are available buffers in 
> the subpartition, the corresponding view will be enqueued for transferring 
> data.





[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161248402
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultSubpartitionView.java
 ---
@@ -52,4 +52,9 @@
boolean isReleased();
 
Throwable getFailureCause();
+
+   /**
+* Returns whether the next buffer is event or not.
--- End diff --

`is an event`


---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161247589
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -77,6 +92,46 @@ void requestSubpartitionView(
}
}
 
+   /**
+* The credits from consumer are added in incremental way.
+*
+* @param creditDeltas The credit deltas
+*/
+   public void addCredit(int creditDeltas) {
+   numCreditsAvailable += creditDeltas;
+   }
+
+   /**
+* Updates the value to indicate whether the reader is enqueued in the 
pipeline or not.
+*
+* @param isRegisteredAvailable True if this reader is already enqueued 
in the pipeline.
+*/
+   public void notifyAvailabilityChanged(boolean isRegisteredAvailable) {
+   this.isRegisteredAvailable = isRegisteredAvailable;
+   }
+
+   public boolean isRegisteredAvailable() {
+   return isRegisteredAvailable;
+   }
+
+   /**
+* Check whether this reader is available or not.
+*
+* Return true only if the next buffer is event or the reader has 
both available
+* credits and buffers.
+*/
+   public boolean isAvailable() {
+   if (numBuffersAvailable.get() <= 0) {
+   return false;
+   }
+
+   if (subpartitionView.nextBufferIsEvent() || numCreditsAvailable > 0) {
--- End diff --

how about checking `numCreditsAvailable` first since that's the cheaper 
check?


---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161241148
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -49,10 +50,24 @@
 
private volatile ResultSubpartitionView subpartitionView;
 
+   /**
+* The status indicating whether this reader is already enqueued in the 
pipeline for transferring
+* data or not. It is mainly used for avoid registering this reader to 
the pipeline repeatedly.
+*/
+   private boolean isRegisteredAvailable;
+
+   /** The number of available buffers for holding data on the consumer 
side. */
+   private int numCreditsAvailable;
--- End diff --

Just a note since I was wondering whether we need synchronization here (not 
needed after verifying the things below):

1) `numCreditsAvailable` is increased via 
`PartitionRequestServerHandler#channelRead0`, which is a separate channel 
handler from `PartitionRequestQueue` (see 
`NettyProtocol#getServerChannelHandlers`). According to [Netty's thread 
model](https://netty.io/wiki/new-and-noteworthy-in-4.0.html#wiki-h2-34), we 
should be safe though:

> A user can specify an EventExecutor when he or she adds a handler to a 
ChannelPipeline.
> - If specified, the handler methods of the ChannelHandler are always 
invoked by the specified EventExecutor.
> - If unspecified, the handler methods are always invoked by the EventLoop 
that its associated Channel is registered to.

2) `numCreditsAvailable` is read from 
`PartitionRequestQueue#enqueueAvailableReader()` and 
`SequenceNumberingViewReader#getNextBuffer()` which are both accessed by the 
channel's IO thread only.


---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161296711
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpilledSubpartitionView.java
 ---
@@ -73,6 +73,9 @@
/** Flag indicating whether all resources have been released. */
private AtomicBoolean isReleased = new AtomicBoolean();
 
+   /** The next buffer to hand out. */
+   private Buffer nextBuffer;
--- End diff --

We need to protect this against race conditions with respect to 
`releaseAllResources()` as well.

Actually, I'm surprised that nothing in this class is protected against 
concurrently releasing it. Although I have created a separate issue for this 
([FLINK-8425](https://issues.apache.org/jira/browse/FLINK-8425)), I think we 
may need to solve this here for the new `nextBuffer` anyway.


---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161282684
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -77,6 +92,46 @@ void requestSubpartitionView(
}
}
 
+   /**
+* The credits from consumer are added in incremental way.
+*
+* @param creditDeltas The credit deltas
+*/
+   public void addCredit(int creditDeltas) {
+   numCreditsAvailable += creditDeltas;
+   }
+
+   /**
+* Updates the value to indicate whether the reader is enqueued in the 
pipeline or not.
+*
+* @param isRegisteredAvailable True if this reader is already enqueued 
in the pipeline.
+*/
+   public void notifyAvailabilityChanged(boolean isRegisteredAvailable) {
+   this.isRegisteredAvailable = isRegisteredAvailable;
+   }
+
+   public boolean isRegisteredAvailable() {
+   return isRegisteredAvailable;
+   }
+
+   /**
+* Check whether this reader is available or not.
+*
+* Return true only if the next buffer is event or the reader has 
both available
+* credits and buffers.
+*/
+   public boolean isAvailable() {
+   if (numBuffersAvailable.get() <= 0) {
+   return false;
+   }
+
+   if (subpartitionView.nextBufferIsEvent() || numCreditsAvailable > 0) {
--- End diff --

also, why not simply do the following for this whole method?
```
return numBuffersAvailable.get() > 0 &&
(numCreditsAvailable > 0 || 
subpartitionView.nextBufferIsEvent());
```


---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161249082
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpillableSubpartitionView.java
 ---
@@ -199,6 +199,19 @@ public boolean isReleased() {
}
}
 
+   @Override
+   public boolean nextBufferIsEvent() {
+   if (nextBuffer != null) {
+   return !nextBuffer.isBuffer();
+   }
--- End diff --

We probably need synchronization here to access `nextBuffer` and also check 
for `isReleased()` similar to `getNextBuffer`, since we are basically doing the 
same but without taking the buffer.

Why not integrate this into `getNextBuffer` and the `BufferAndBacklog` 
returned there? Inside that method, gathering this additional info is basically 
for free and we may thus speed up some code paths.
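One possible shape of that suggestion, sketched with placeholder types (this is an assumption about the design, not Flink's actual `BufferAndBacklog` API):

```java
// Hypothetical: getNextBuffer() returns the "next buffer is an event" flag
// together with the buffer and backlog, so callers do not need a second
// synchronized lookup. Class and field names are illustrative only.
class BufferAndBacklogSketch {
    final Object buffer;             // placeholder for the network Buffer type
    final int buffersInBacklog;
    final boolean nextBufferIsEvent; // gathered "for free" inside getNextBuffer()

    BufferAndBacklogSketch(Object buffer, int buffersInBacklog, boolean nextBufferIsEvent) {
        this.buffer = buffer;
        this.buffersInBacklog = buffersInBacklog;
        this.nextBufferIsEvent = nextBufferIsEvent;
    }
}
```

Bundling the flag into the return value keeps the lookup under the same lock that already guards `nextBuffer`, which is the point of the review comment.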


---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161295428
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpilledSubpartitionView.java
 ---
@@ -163,6 +190,15 @@ public boolean isReleased() {
return parent.isReleased() || isReleased.get();
}
 
+   @Override
+   public boolean nextBufferIsEvent() {
+   if (nextBuffer != null) {
+   return !nextBuffer.isBuffer();
+   }
+
+   return false;
--- End diff --

I'm afraid relying on `nextBuffer` won't be enough because it might not 
have been there during `getNextBuffer()` but may be there now.

Please add a test for this case.


---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161242036
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -49,10 +50,24 @@
 
private volatile ResultSubpartitionView subpartitionView;
 
+   /**
+* The status indicating whether this reader is already enqueued in the 
pipeline for transferring
+* data or not. It is mainly used for avoid registering this reader to 
the pipeline repeatedly.
+*/
+   private boolean isRegisteredAvailable;
--- End diff --



I know it's default-initialized to false but let's make this explicit



---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161295951
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SpillableSubpartitionView.java
 ---
@@ -199,6 +199,19 @@ public boolean isReleased() {
}
}
 
+   @Override
+   public boolean nextBufferIsEvent() {
+   if (nextBuffer != null) {
+   return !nextBuffer.isBuffer();
+   }
+
+   if (spilledView != null) {
--- End diff --

`checkState(spilledView != null, "No in-memory buffers available, but also 
nothing spilled.");` just like in `getNextBuffer()`?


---


[GitHub] flink pull request #4552: [FLINK-7456][network] Implement Netty sender incom...

2018-01-12 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4552#discussion_r161287607
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/SequenceNumberingViewReader.java
 ---
@@ -77,6 +92,46 @@ void requestSubpartitionView(
}
}
 
+   /**
+* The credits from consumer are added in incremental way.
+*
+* @param creditDeltas The credit deltas
+*/
+   public void addCredit(int creditDeltas) {
+   numCreditsAvailable += creditDeltas;
+   }
+
+   /**
+* Updates the value to indicate whether the reader is enqueued in the 
pipeline or not.
+*
+* @param isRegisteredAvailable True if this reader is already enqueued 
in the pipeline.
+*/
+   public void notifyAvailabilityChanged(boolean isRegisteredAvailable) {
+   this.isRegisteredAvailable = isRegisteredAvailable;
+   }
+
+   public boolean isRegisteredAvailable() {
+   return isRegisteredAvailable;
+   }
+
+   /**
+* Check whether this reader is available or not.
+*
+* Return true only if the next buffer is event or the reader has 
both available
+* credits and buffers.
+*/
+   public boolean isAvailable() {
+   if (numBuffersAvailable.get() <= 0) {
+   return false;
+   }
+
+   if (subpartitionView.nextBufferIsEvent() || numCreditsAvailable > 0) {
--- End diff --

one more thing: in the special case of using it inside `getNextBuffer()`, 
we already retrieved the `remaining` number of buffers so we can spare one 
lookup into an atomic integer here by passing this one in


---


[jira] [Commented] (FLINK-7738) Create WebSocket handler (server)

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324419#comment-16324419
 ] 

ASF GitHub Bot commented on FLINK-7738:
---

Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/4767
  
@tillrohrmann are you still interested in this websocket code for the REST 
server?   Aside from rebasing, any 'must fix' issues here?


> Create WebSocket handler (server)
> -
>
> Key: FLINK-7738
> URL: https://issues.apache.org/jira/browse/FLINK-7738
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management, Mesos
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>
> An abstract handler is needed to support websocket communication.





[GitHub] flink issue #4767: [FLINK-7738] [flip-6] Create WebSocket handler (server, c...

2018-01-12 Thread EronWright
Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/4767
  
@tillrohrmann are you still interested in this websocket code for the REST 
server?   Aside from rebasing, any 'must fix' issues here?


---


[jira] [Created] (FLINK-8425) SpilledSubpartitionView not protected against concurrent release calls

2018-01-12 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-8425:
--

 Summary: SpilledSubpartitionView not protected against concurrent 
release calls
 Key: FLINK-8425
 URL: https://issues.apache.org/jira/browse/FLINK-8425
 Project: Flink
  Issue Type: Bug
  Components: Network
Affects Versions: 1.3.2, 1.4.0, 1.3.1, 1.2.1, 1.3.0, 1.1.4, 1.1.3, 1.1.2, 
1.1.1, 1.2.0, 1.0.3, 1.0.2, 1.0.1, 1.1.0
Reporter: Nico Kruber
Priority: Minor


It seems like {{SpilledSubpartitionView}}, unlike the other 
{{ResultSubpartitionView}} implementations, is not protected against concurrent 
{{releaseAllResources}} calls. Such calls may happen due to failures, e.g. 
network channels breaking, and will probably only result in some unexpected 
exceptions being thrown, e.g. from reading from a closed file reader.





[GitHub] flink issue #5112: [FLINK-8175] [DataStream API java/scala] remove flink-str...

2018-01-12 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/5112
  
@StephanEwen  let me know if this is good to go


---


[jira] [Commented] (FLINK-8175) remove flink-streaming-contrib and migrate its classes to flink-streaming-java/scala

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324323#comment-16324323
 ] 

ASF GitHub Bot commented on FLINK-8175:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/5112
  
@StephanEwen  let me know if this is good to go


> remove flink-streaming-contrib and migrate its classes to 
> flink-streaming-java/scala
> 
>
> Key: FLINK-8175
> URL: https://issues.apache.org/jira/browse/FLINK-8175
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0
>
>
> I propose removing flink-streaming-contrib from flink-contrib and migrating 
> its classes to flink-streaming-java/scala, for the following reasons:
> - flink-streaming-contrib is so small that it only has 4 classes (3 Java and 
> 1 Scala); they don't need a dedicated jar for Flink to distribute and 
> maintain, nor for users to handle as extra dependency-management overhead
> - the 4 classes in flink-streaming-contrib are logically more tied to 
> flink-streaming-java/scala, and thus can be easily migrated
> - flink-contrib is already too crowded and noisy. It contains lots of 
> submodules with different purposes, which confuses developers and users, 
> and the submodules lack a proper project hierarchy
> To take this a step further, I would argue that even flink-contrib itself 
> should be removed and all its submodules migrated to other top-level 
> modules, for two reasons: 1) the whole Apache Flink project is the result 
> of contributions from many developers, so there's no reason to highlight 
> some contributions in a dedicated module named 'contrib'; 2) flink-contrib 
> inherently doesn't have a good hierarchy for holding submodules





[jira] [Updated] (FLINK-8424) [Cassandra Connector] Update Cassandra version to last one

2018-01-12 Thread Joao Boto (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joao Boto updated FLINK-8424:
-
Component/s: Cassandra Connector

> [Cassandra Connector] Update Cassandra version to last one
> --
>
> Key: FLINK-8424
> URL: https://issues.apache.org/jira/browse/FLINK-8424
> Project: Flink
>  Issue Type: Improvement
>  Components: Cassandra Connector
>Reporter: Joao Boto
>
> The Cassandra connector is using an outdated version.
> This is to upgrade the Cassandra version to a recent one:
> https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.11.1





[jira] [Updated] (FLINK-8424) [Cassandra Connector] Update Cassandra version to last one

2018-01-12 Thread Joao Boto (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joao Boto updated FLINK-8424:
-
Description: 
The Cassandra connector is using an outdated version.

This is to upgrade the Cassandra version to a recent one:
https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.11.1



  was:
The Cassandra connector is using a version released in the beginning of 2016.

This is to upgrade the Cassandra version to a recent one.



> [Cassandra Connector] Update Cassandra version to last one
> --
>
> Key: FLINK-8424
> URL: https://issues.apache.org/jira/browse/FLINK-8424
> Project: Flink
>  Issue Type: Improvement
>Reporter: Joao Boto
>
> The Cassandra connector is using an outdated version.
> This is to upgrade the Cassandra version to a recent one:
> https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.11.1





[GitHub] flink issue #5290: [Flink-8424][Cassandra Connector] upgrade of cassandra ve...

2018-01-12 Thread eskabetxe
Github user eskabetxe commented on the issue:

https://github.com/apache/flink/pull/5290
  
@twalthr 
@fhueske 


---


[GitHub] flink pull request #5290: [Flink-8424][Cassandra Connector] upgrade of cassa...

2018-01-12 Thread eskabetxe
GitHub user eskabetxe opened a pull request:

https://github.com/apache/flink/pull/5290

[Flink-8424][Cassandra Connector] upgrade of cassandra version and driver 
to latest

## What is the purpose of the change

Update the Cassandra version and driver to recent releases:
the driver is currently 3 minor versions behind, and
the Cassandra version is 1 major version behind.

## Verifying this change

This change is already covered by existing tests, such as *(please describe 
tests)*.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): yes
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
  - The serializers: no
  - The runtime per-record code paths (performance sensitive): no
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
  - The S3 file system connector: no

## Documentation

  - Does this pull request introduce a new feature?  no


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/eskabetxe/flink FLINK-8424

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5290.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5290


commit 7e7dd921cf9112c1c2174890f460f47529c8dcb4
Author: eskabetxe 
Date:   2018-01-12T17:37:06Z

FLINK-8424: upgrade cassandra driver and version to latest




---


[jira] [Updated] (FLINK-8424) [Cassandra Connector] Update Cassandra version to last one

2018-01-12 Thread Joao Boto (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joao Boto updated FLINK-8424:
-
Priority: Major  (was: Critical)

> [Cassandra Connector] Update Cassandra version to last one
> --
>
> Key: FLINK-8424
> URL: https://issues.apache.org/jira/browse/FLINK-8424
> Project: Flink
>  Issue Type: Improvement
>Reporter: Joao Boto
>
> The Cassandra connector is using a version released in the beginning of 2016.
> This is to upgrade the Cassandra version to a recent one.





[jira] [Created] (FLINK-8424) [Cassandra Connector] Update Cassandra version to last one

2018-01-12 Thread Joao Boto (JIRA)
Joao Boto created FLINK-8424:


 Summary: [Cassandra Connector] Update Cassandra version to last one
 Key: FLINK-8424
 URL: https://issues.apache.org/jira/browse/FLINK-8424
 Project: Flink
  Issue Type: Improvement
Reporter: Joao Boto
Priority: Critical


The Cassandra connector is using a version released in the beginning of 2016.

This is to upgrade the Cassandra version to a recent one.






[jira] [Commented] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration

2018-01-12 Thread flora karniav (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324228#comment-16324228
 ] 

flora karniav commented on FLINK-8414:
--

Thank you for your reply, 

I am running the ConnectedComponents and PageRank algorithms from Gelly 
examples on two SNAP datasets:

1) https://snap.stanford.edu/data/egonets-Twitter.html - 81,306 vertices and 
2,420,766 edges.
2) https://snap.stanford.edu/data/com-Youtube.html - 1,134,890 vertices and 
2,987,624 edges.

I also want to point out that I looked into CPU utilization when changing the 
parallelism level, and it seems to grow as expected; however, performance is 
still reduced. 

(I am sorry if I posted in an inappropriate section but thought of the issue 
bizarre enough to be configuration or bug-related.)

> Gelly performance seriously decreases when using the suggested parallelism 
> configuration
> 
>
> Key: FLINK-8414
> URL: https://issues.apache.org/jira/browse/FLINK-8414
> Project: Flink
>  Issue Type: Bug
>  Components: Configuration, Documentation, Gelly
>Reporter: flora karniav
>Priority: Minor
>
> I am running Gelly examples with different datasets in a cluster of 5 
> machines (1 Jobmanager and 4 Taskmanagers) of 32 cores each.
> The number of Slots parameter is set to 32 (as suggested) and the parallelism 
> to 128 (32 cores*4 taskmanagers).
> I observe a vast performance degradation with these suggested settings 
> compared to setting parallelism.default to 16, for example, where the same 
> job completes in ~60 seconds vs ~140 seconds in the 128-parallelism case.
> Is there something wrong in my configuration? Should I decrease parallelism 
> and -if so- will this inevitably decrease CPU utilization?
> Another matter that may be related to this is the number of partitions of the 
> data. Is this somehow related to parallelism? How many partitions are created 
> in the case of parallelism.default=128? 
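For reference, the two settings discussed here live in `flink-conf.yaml`; a configuration along the lines of this report would look like the sketch below (the values are the ones from the report, not recommendations):

```yaml
# flink-conf.yaml (excerpt)
taskmanager.numberOfTaskSlots: 32   # slots per TaskManager (one per core, as suggested)
parallelism.default: 16             # value that completed in ~60s here, vs 128 (~140s)
```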





[jira] [Created] (FLINK-8423) OperatorChain#pushToOperator catch block may fail with NPE

2018-01-12 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-8423:
---

 Summary: OperatorChain#pushToOperator catch block may fail with NPE
 Key: FLINK-8423
 URL: https://issues.apache.org/jira/browse/FLINK-8423
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Affects Versions: 1.4.0, 1.5.0
Reporter: Chesnay Schepler
Priority: Minor


{code}
@Override
protected <X> void pushToOperator(StreamRecord<X> record) {
	try {
		// we know that the given outputTag matches our OutputTag so the record
		// must be of the type that our operator (and Serializer) expects.
		@SuppressWarnings("unchecked")
		StreamRecord<T> castRecord = (StreamRecord<T>) record;

		numRecordsIn.inc();
		StreamRecord<T> copy = castRecord.copy(serializer.copy(castRecord.getValue()));
		operator.setKeyContextElement1(copy);
		operator.processElement(copy);
	} catch (ClassCastException e) {
		// Enrich error message
		ClassCastException replace = new ClassCastException(
			String.format(
				"%s. Failed to push OutputTag with id '%s' to operator. " +
					"This can occur when multiple OutputTags with different types " +
					"but identical names are being used.",
				e.getMessage(),
				outputTag.getId()));

		throw new ExceptionInChainedOperatorException(replace);

	} catch (Exception e) {
		throw new ExceptionInChainedOperatorException(e);
	}
}
{code}

If {{outputTag}} is null (as is the case when no side output was defined), the catch 
block will itself fail with a NullPointerException. This may happen if 
{{operator.processElement}} throws a ClassCastException.
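A minimal, runnable sketch of this failure mode and of a possible null guard. The names here ({{enrich}}, {{enrichSafely}}) are hypothetical and only mirror the shape of the catch block above; they are not Flink's actual fix:

```java
public class OutputTagNpeDemo {

    // Mirrors the catch block: the enriched message blindly dereferences the tag.
    static String enrich(String causeMessage, Object outputTag) {
        return String.format(
            "%s. Failed to push OutputTag with id '%s' to operator.",
            causeMessage, outputTag.toString()); // NPE when outputTag == null
    }

    // A null guard keeps the original ClassCastException message intact
    // when no side output (and hence no OutputTag) was defined.
    static String enrichSafely(String causeMessage, Object outputTag) {
        if (outputTag == null) {
            return causeMessage;
        }
        return enrich(causeMessage, outputTag);
    }

    public static void main(String[] args) {
        // processElement threw the ClassCastException, no side output defined:
        System.out.println(enrichSafely("A cannot be cast to B", null));
        // the intended case, where the OutputTag itself mismatched:
        System.out.println(enrichSafely("A cannot be cast to B", "side-output-1"));
    }
}
```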



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (FLINK-8417) Support STSAssumeRoleSessionCredentialsProvider in FlinkKinesisConsumer

2018-01-12 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324202#comment-16324202
 ] 

Tzu-Li (Gordon) Tai edited comment on FLINK-8417 at 1/12/18 4:36 PM:
-

From the Javadocs, it mentions that the 
{{STSAssumeRoleSessionCredentialsProvider}} spawns a background thread that 
automatically refreshes the credentials. I assume, then, that this is handled 
out-of-the-box by the AWS API?


was (Author: tzulitai):
Fro the Javadocs, it mentions that the 
{{STSAssumeRoleSessionCredentialsProvider}} spawns a background thread that 
automatically refreshes the credentials. I assume then that is handled 
out-of-the-box by the AWS API?
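The background-refresh behavior described here can be illustrated with a conceptual sketch. This is not the AWS SDK's implementation (the class and token format below are invented); it only shows the idea that a scheduled task replaces the credentials before expiry, so callers never block on re-authentication:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class RefreshingProviderSketch implements AutoCloseable {

    private final AtomicReference<String> sessionToken = new AtomicReference<>();
    private final ScheduledExecutorService refresher =
            Executors.newSingleThreadScheduledExecutor();
    private int refreshCount = 0;

    public RefreshingProviderSketch(long refreshPeriodMillis) {
        refresh(); // obtain initial credentials eagerly
        refresher.scheduleAtFixedRate(this::refresh,
                refreshPeriodMillis, refreshPeriodMillis, TimeUnit.MILLISECONDS);
    }

    private synchronized void refresh() {
        // The real provider would call STS AssumeRole here; we fake the result.
        refreshCount++;
        sessionToken.set("token-" + refreshCount);
    }

    public String getSessionToken() {
        return sessionToken.get(); // never blocks on the refresh
    }

    @Override
    public void close() {
        refresher.shutdownNow();
    }

    public static void main(String[] args) {
        try (RefreshingProviderSketch p = new RefreshingProviderSketch(60_000)) {
            System.out.println(p.getSessionToken());
        }
    }
}
```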

> Support STSAssumeRoleSessionCredentialsProvider in FlinkKinesisConsumer
> ---
>
> Key: FLINK-8417
> URL: https://issues.apache.org/jira/browse/FLINK-8417
> Project: Flink
>  Issue Type: New Feature
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
> Fix For: 1.5.0
>
>
> As discussed in ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kinesis-Connectors-With-Temporary-Credentials-td17734.html.
> Users need the functionality to access cross-account AWS Kinesis streams, 
> using AWS Temporary Credentials [1].
> We should add support for 
> {{AWSConfigConstants.CredentialsProvider.STSAssumeRole}}, which internally 
> would use the {{STSAssumeRoleSessionCredentialsProvider}} [2] in 
> {{AWSUtil#getCredentialsProvider(Properties)}}.
> [1] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
> [2] 
> https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/STSAssumeRoleSessionCredentialsProvider.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8417) Support STSAssumeRoleSessionCredentialsProvider in FlinkKinesisConsumer

2018-01-12 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324204#comment-16324204
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-8417:


The Role ARN for the Temporary Credentials should be supplied via the consumer 
config properties.
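A hedged sketch of what such consumer properties might look like. The STS-related keys below are purely hypothetical placeholders, since the actual {{AWSConfigConstants}} keys for this feature are exactly what this issue would define:

```java
import java.util.Properties;

public class AssumeRoleConfigSketch {

    static Properties buildConsumerConfig() {
        Properties config = new Properties();
        // Existing kind of setting: the consumer's AWS region.
        config.setProperty("aws.region", "us-east-1");
        // Hypothetical keys for the proposed STSAssumeRole provider:
        config.setProperty("aws.credentials.provider", "STS_ASSUME_ROLE");
        config.setProperty("aws.credentials.provider.role.arn",
                "arn:aws:iam::123456789012:role/example-cross-account-role");
        config.setProperty("aws.credentials.provider.role.sessionName",
                "flink-kinesis-consumer");
        return config;
    }

    public static void main(String[] args) {
        System.out.println(buildConsumerConfig().getProperty("aws.credentials.provider"));
    }
}
```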

> Support STSAssumeRoleSessionCredentialsProvider in FlinkKinesisConsumer
> ---
>
> Key: FLINK-8417
> URL: https://issues.apache.org/jira/browse/FLINK-8417
> Project: Flink
>  Issue Type: New Feature
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
> Fix For: 1.5.0
>
>
> As discussed in ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kinesis-Connectors-With-Temporary-Credentials-td17734.html.
> Users need the functionality to access cross-account AWS Kinesis streams, 
> using AWS Temporary Credentials [1].
> We should add support for 
> {{AWSConfigConstants.CredentialsProvider.STSAssumeRole}}, which internally 
> would use the {{STSAssumeRoleSessionCredentialsProvider}} [2] in 
> {{AWSUtil#getCredentialsProvider(Properties)}}.
> [1] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
> [2] 
> https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/STSAssumeRoleSessionCredentialsProvider.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8417) Support STSAssumeRoleSessionCredentialsProvider in FlinkKinesisConsumer

2018-01-12 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324202#comment-16324202
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-8417:


From the Javadocs, it mentions that the 
{{STSAssumeRoleSessionCredentialsProvider}} spawns a background thread that 
automatically refreshes the credentials. I assume, then, that this is handled 
out-of-the-box by the AWS API?

> Support STSAssumeRoleSessionCredentialsProvider in FlinkKinesisConsumer
> ---
>
> Key: FLINK-8417
> URL: https://issues.apache.org/jira/browse/FLINK-8417
> Project: Flink
>  Issue Type: New Feature
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
> Fix For: 1.5.0
>
>
> As discussed in ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kinesis-Connectors-With-Temporary-Credentials-td17734.html.
> Users need the functionality to access cross-account AWS Kinesis streams, 
> using AWS Temporary Credentials [1].
> We should add support for 
> {{AWSConfigConstants.CredentialsProvider.STSAssumeRole}}, which internally 
> would use the {{STSAssumeRoleSessionCredentialsProvider}} [2] in 
> {{AWSUtil#getCredentialsProvider(Properties)}}.
> [1] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
> [2] 
> https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/STSAssumeRoleSessionCredentialsProvider.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8422) Checkstyle for org.apache.flink.api.java.tuple

2018-01-12 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-8422:
-

 Summary: Checkstyle for org.apache.flink.api.java.tuple
 Key: FLINK-8422
 URL: https://issues.apache.org/jira/browse/FLINK-8422
 Project: Flink
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.5.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Trivial


Update {{TupleGenerator}} for Flink's checkstyle and rebuild {{Tuple}} and 
{{TupleBuilder}} classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (FLINK-8417) Support STSAssumeRoleSessionCredentialsProvider in FlinkKinesisConsumer

2018-01-12 Thread Sreenath Kodedala (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324096#comment-16324096
 ] 

Sreenath Kodedala edited comment on FLINK-8417 at 1/12/18 4:33 PM:
---

STSAssumeRole requires a Role ARN (mandatory) to assume, and a RoleSessionName 
(which can be set to some default value if not provided). One thing to consider is 
that these temporary credentials are set to expire in an hour (max). How can this be 
handled without blocking the data flow?


was (Author: vedarad):
STSAssumeRole requires Role ARN to assume and RoleSessionName (Can set to some 
default value if not provided) and one thing to consider is these Temporary 
credentials are set to expire in an hour (Max). How can this be handled without 
blocking the data flow?

> Support STSAssumeRoleSessionCredentialsProvider in FlinkKinesisConsumer
> ---
>
> Key: FLINK-8417
> URL: https://issues.apache.org/jira/browse/FLINK-8417
> Project: Flink
>  Issue Type: New Feature
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
> Fix For: 1.5.0
>
>
> As discussed in ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kinesis-Connectors-With-Temporary-Credentials-td17734.html.
> Users need the functionality to access cross-account AWS Kinesis streams, 
> using AWS Temporary Credentials [1].
> We should add support for 
> {{AWSConfigConstants.CredentialsProvider.STSAssumeRole}}, which internally 
> would use the {{STSAssumeRoleSessionCredentialsProvider}} [2] in 
> {{AWSUtil#getCredentialsProvider(Properties)}}.
> [1] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
> [2] 
> https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/STSAssumeRoleSessionCredentialsProvider.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8421) HeapInternalTimerService should reconfigure compatible key / namespace serializers on restore

2018-01-12 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-8421:
--

 Summary: HeapInternalTimerService should reconfigure compatible 
key / namespace serializers on restore
 Key: FLINK-8421
 URL: https://issues.apache.org/jira/browse/FLINK-8421
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.4.0, 1.5.0
Reporter: Tzu-Li (Gordon) Tai
Priority: Blocker
 Fix For: 1.3.3, 1.5.0, 1.4.1


The {{HeapInternalTimerService}} still uses simple {{equals}} checks on 
restored / newly provided serializers for compatibility checks. This should be 
replaced with the {{TypeSerializer::ensureCompatibility}} checks instead, so 
that new serializers can be reconfigured.

For the Flink 1.4.0 release and current master, this is a critical bug, since the 
{{KryoSerializer}} has different default base registrations than before due to 
FLINK-7420; i.e., if the key of a window was serialized using the 
{{KryoSerializer}} in 1.3.x, the restore would never succeed in 1.4.0.

For 1.3.x, this fix would be an improvement, such that the 
{{HeapInternalTimerService}} restore will make use of serializer 
reconfiguration.

Other remarks:
* We need to double-check all operators that checkpoint / restore from **raw** 
state. Apparently, the serializer compatibility checks were only implemented 
for managed state.
* Migration ITCases apparently do not have enough coverage. A migration test 
job that uses a key type requiring the {{KryoSerializer}}, and that uses 
windows, would have caught this issue.
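The difference between the simple equals check and a compatibility check that permits reconfiguration can be sketched in isolation. The class below is a hypothetical stand-in, not Flink's actual {{TypeSerializer}} API: identity is the serialized value type, but the set of registered classes can differ across Flink versions (cf. FLINK-7420):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SerializerConfig {

    final String valueType;
    final Set<String> registeredClasses;

    SerializerConfig(String valueType, String... registered) {
        this.valueType = valueType;
        this.registeredClasses = new HashSet<>(Arrays.asList(registered));
    }

    // Old-style check: any difference in registrations fails the restore.
    boolean strictEquals(SerializerConfig other) {
        return valueType.equals(other.valueType)
                && registeredClasses.equals(other.registeredClasses);
    }

    // Compatibility-style check: the same value type is enough; the new
    // serializer reconfigures itself to also cover the restored registrations.
    boolean ensureCompatibility(SerializerConfig restored) {
        if (!valueType.equals(restored.valueType)) {
            return false; // genuinely incompatible
        }
        registeredClasses.addAll(restored.registeredClasses);
        return true;
    }

    public static void main(String[] args) {
        SerializerConfig restored = new SerializerConfig("MyKey", "A");
        SerializerConfig fresh = new SerializerConfig("MyKey", "A", "B");
        System.out.println(fresh.strictEquals(restored));        // false: restore fails
        System.out.println(fresh.ensureCompatibility(restored)); // true: reconfigured
    }
}
```

Under the strict check the restore fails merely because the registrations changed; under the compatibility check the new serializer absorbs the restored registrations and the restore proceeds, which is the behavior this issue asks for in {{HeapInternalTimerService}}.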



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5289: [hotfix] [docs] Fix typos

2018-01-12 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/5289

[hotfix] [docs] Fix typos

From the IntelliJ `Typos` inspection.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 20180112a_fix_typos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5289.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5289


commit 5658c12c8aafb47654e059bfe7712aa06f78e083
Author: Greg Hogan 
Date:   2018-01-12T14:29:28Z

[hotfix] [docs] Fix typos




---


[jira] [Commented] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration

2018-01-12 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324133#comment-16324133
 ] 

Greg Hogan commented on FLINK-8414:
---

This is more of a question than a reported bug and may be more appropriate for 
the flink-user mailing list. Are you able to share what algorithm(s) you are 
running and describe the dataset(s)?

> Gelly performance seriously decreases when using the suggested parallelism 
> configuration
> 
>
> Key: FLINK-8414
> URL: https://issues.apache.org/jira/browse/FLINK-8414
> Project: Flink
>  Issue Type: Bug
>  Components: Configuration, Documentation, Gelly
>Reporter: flora karniav
>Priority: Minor
>
> I am running Gelly examples with different datasets in a cluster of 5 
> machines (1 Jobmanager and 4 Taskmanagers) of 32 cores each.
> The number of Slots parameter is set to 32 (as suggested) and the parallelism 
> to 128 (32 cores*4 taskmanagers).
> I observe a vast performance degradation with these suggested settings compared 
> to setting parallelism.default to 16, for example, where the same job completes 
> in ~60 seconds vs. ~140 seconds in the 128-parallelism case.
> Is there something wrong in my configuration? Should I decrease parallelism 
> and, if so, will this inevitably decrease CPU utilization?
> Another matter that may be related to this is the number of partitions of the 
> data. Is this somehow related to parallelism? How many partitions are created 
> in the case of parallelism.default=128? 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324117#comment-16324117
 ] 

ASF GitHub Bot commented on FLINK-8399:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/5271#discussion_r161249788
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -58,6 +58,27 @@
.defaultValue(600)
.withDeprecatedKeys("yarn.heap-cutoff-min");
 
+   /**
+* The timeout for requesting slot to a task manager, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.taskmanager.request-timeout")
+   .defaultValue(3);
+
+   /**
+* The timeout for a slot request to be discarded, in milliseconds.
+*/
+   public static final ConfigOption<Integer> SLOT_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.slot.request-timeout")
--- End diff --

maybe `slotmanager.request-timeout`


> Use independent configurations for the different timeouts in slot manager
> -
>
> Key: FLINK-8399
> URL: https://issues.apache.org/jira/browse/FLINK-8399
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
>
> There are three parameters in the slot manager that indicate the timeouts for a 
> slot request to a task manager, for a slot request to be discarded, and for a 
> task manager to be released. Currently they all come from the value of 
> AkkaOptions.ASK_TIMEOUT; they need independent configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324113#comment-16324113
 ] 

ASF GitHub Bot commented on FLINK-8399:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/5271#discussion_r161248207
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -58,6 +58,27 @@
.defaultValue(600)
.withDeprecatedKeys("yarn.heap-cutoff-min");
 
+   /**
+* The timeout for requesting slot to a task manager, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_REQUEST_TIMEOUT = ConfigOptions
--- End diff --

Let's make this option a `Long`.


> Use independent configurations for the different timeouts in slot manager
> -
>
> Key: FLINK-8399
> URL: https://issues.apache.org/jira/browse/FLINK-8399
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
>
> There are three parameters in the slot manager that indicate the timeouts for a 
> slot request to a task manager, for a slot request to be discarded, and for a 
> task manager to be released. Currently they all come from the value of 
> AkkaOptions.ASK_TIMEOUT; they need independent configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324118#comment-16324118
 ] 

ASF GitHub Bot commented on FLINK-8399:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/5271#discussion_r161249685
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -58,6 +58,27 @@
.defaultValue(600)
.withDeprecatedKeys("yarn.heap-cutoff-min");
 
+   /**
+* The timeout for requesting slot to a task manager, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.taskmanager.request-timeout")
+   .defaultValue(3);
+
+   /**
+* The timeout for a slot request to be discarded, in milliseconds.
+*/
+   public static final ConfigOption<Integer> SLOT_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.slot.request-timeout")
+   .defaultValue(60);
+
+   /**
+* The timeout for an idle task manager to be released, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_TIMEOUT = ConfigOptions
+   .key("slotmanager.taskmanager.timeout")
--- End diff --

maybe `slotmanager.taskmanager-timeout`


> Use independent configurations for the different timeouts in slot manager
> -
>
> Key: FLINK-8399
> URL: https://issues.apache.org/jira/browse/FLINK-8399
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
>
> There are three parameters in the slot manager that indicate the timeouts for a 
> slot request to a task manager, for a slot request to be discarded, and for a 
> task manager to be released. Currently they all come from the value of 
> AkkaOptions.ASK_TIMEOUT; they need independent configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324119#comment-16324119
 ] 

ASF GitHub Bot commented on FLINK-8399:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/5271#discussion_r161249883
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -58,6 +58,27 @@
.defaultValue(600)
.withDeprecatedKeys("yarn.heap-cutoff-min");
 
+   /**
+* The timeout for requesting slot to a task manager, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.taskmanager.request-timeout")
--- End diff --

maybe `slotmanager.rpc-timeout`


> Use independent configurations for the different timeouts in slot manager
> -
>
> Key: FLINK-8399
> URL: https://issues.apache.org/jira/browse/FLINK-8399
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
>
> There are three parameters in the slot manager that indicate the timeouts for a 
> slot request to a task manager, for a slot request to be discarded, and for a 
> task manager to be released. Currently they all come from the value of 
> AkkaOptions.ASK_TIMEOUT; they need independent configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324115#comment-16324115
 ] 

ASF GitHub Bot commented on FLINK-8399:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/5271#discussion_r161248438
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -58,6 +58,27 @@
.defaultValue(600)
.withDeprecatedKeys("yarn.heap-cutoff-min");
 
+   /**
+* The timeout for requesting slot to a task manager, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.taskmanager.request-timeout")
+   .defaultValue(3);
+
+   /**
+* The timeout for a slot request to be discarded, in milliseconds.
+*/
+   public static final ConfigOption<Integer> SLOT_REQUEST_TIMEOUT = ConfigOptions
--- End diff --

`Long`


> Use independent configurations for the different timeouts in slot manager
> -
>
> Key: FLINK-8399
> URL: https://issues.apache.org/jira/browse/FLINK-8399
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
>
> There are three parameters in the slot manager that indicate the timeouts for a 
> slot request to a task manager, for a slot request to be discarded, and for a 
> task manager to be released. Currently they all come from the value of 
> AkkaOptions.ASK_TIMEOUT; they need independent configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324114#comment-16324114
 ] 

ASF GitHub Bot commented on FLINK-8399:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/5271#discussion_r161248457
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -58,6 +58,27 @@
.defaultValue(600)
.withDeprecatedKeys("yarn.heap-cutoff-min");
 
+   /**
+* The timeout for requesting slot to a task manager, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.taskmanager.request-timeout")
+   .defaultValue(3);
+
+   /**
+* The timeout for a slot request to be discarded, in milliseconds.
+*/
+   public static final ConfigOption<Integer> SLOT_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.slot.request-timeout")
+   .defaultValue(60);
+
+   /**
+* The timeout for an idle task manager to be released, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_TIMEOUT = ConfigOptions
--- End diff --

`Long`


> Use independent configurations for the different timeouts in slot manager
> -
>
> Key: FLINK-8399
> URL: https://issues.apache.org/jira/browse/FLINK-8399
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
>
> There are three parameters in the slot manager that indicate the timeouts for a 
> slot request to a task manager, for a slot request to be discarded, and for a 
> task manager to be released. Currently they all come from the value of 
> AkkaOptions.ASK_TIMEOUT; they need independent configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324116#comment-16324116
 ] 

ASF GitHub Bot commented on FLINK-8399:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/5271#discussion_r161249010
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -58,6 +58,27 @@
.defaultValue(600)
.withDeprecatedKeys("yarn.heap-cutoff-min");
 
+   /**
+* The timeout for requesting slot to a task manager, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.taskmanager.request-timeout")
+   .defaultValue(3);
--- End diff --

Please add a description via `withDescription`


> Use independent configurations for the different timeouts in slot manager
> -
>
> Key: FLINK-8399
> URL: https://issues.apache.org/jira/browse/FLINK-8399
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
>
> There are three parameters in the slot manager that indicate the timeouts for a 
> slot request to a task manager, for a slot request to be discarded, and for a 
> task manager to be released. Currently they all come from the value of 
> AkkaOptions.ASK_TIMEOUT; they need independent configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager

2018-01-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324120#comment-16324120
 ] 

ASF GitHub Bot commented on FLINK-8399:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/5271#discussion_r161249992
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ResourceManagerOptions.java
 ---
@@ -58,6 +58,27 @@
.defaultValue(600)
.withDeprecatedKeys("yarn.heap-cutoff-min");
 
+   /**
+* The timeout for requesting slot to a task manager, in milliseconds.
+*/
+   public static final ConfigOption<Integer> TASK_MANAGER_REQUEST_TIMEOUT = ConfigOptions
+   .key("slotmanager.taskmanager.request-timeout")
+   .defaultValue(3);
--- End diff --

For the other options, a description would be great as well.
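Taking the review suggestions together (typed as {{Long}}, carrying a description, and using the proposed key names), the options might end up shaped roughly like the sketch below. This is self-contained plain Java that only mimics the shape of Flink's {{ConfigOptions}} builder; the key names follow the reviewer's proposals and the default values are illustrative, not final:

```java
import java.util.Objects;

// A tiny stand-in for Flink's ConfigOption/ConfigOptions builder pair.
final class ConfigOption<T> {
    final String key;
    final T defaultValue;
    final String description;

    private ConfigOption(String key, T defaultValue, String description) {
        this.key = key;
        this.defaultValue = defaultValue;
        this.description = description;
    }

    static Builder key(String key) { return new Builder(key); }

    static final class Builder {
        private final String key;
        Builder(String key) { this.key = Objects.requireNonNull(key); }

        <T> ConfigOption<T> defaultValue(T value, String description) {
            return new ConfigOption<>(key, value, description);
        }
    }
}

public class SlotManagerOptionsSketch {
    // Reviewer-proposed key names; values in milliseconds (illustrative).
    static final ConfigOption<Long> RPC_TIMEOUT =
        ConfigOption.key("slotmanager.rpc-timeout")
            .defaultValue(30_000L, "Timeout for slot requests sent to a TaskManager.");

    static final ConfigOption<Long> REQUEST_TIMEOUT =
        ConfigOption.key("slotmanager.request-timeout")
            .defaultValue(600_000L, "Timeout after which an unfulfilled slot request is discarded.");

    static final ConfigOption<Long> TASK_MANAGER_TIMEOUT =
        ConfigOption.key("slotmanager.taskmanager-timeout")
            .defaultValue(30_000L, "Timeout after which an idle TaskManager is released.");

    public static void main(String[] args) {
        System.out.println(REQUEST_TIMEOUT.key + " = " + REQUEST_TIMEOUT.defaultValue);
    }
}
```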


> Use independent configurations for the different timeouts in slot manager
> -
>
> Key: FLINK-8399
> URL: https://issues.apache.org/jira/browse/FLINK-8399
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.5.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>  Labels: flip-6
>
> There are three parameters in the slot manager that indicate the timeouts for a 
> slot request to a task manager, for a slot request to be discarded, and for a 
> task manager to be released. Currently they all come from the value of 
> AkkaOptions.ASK_TIMEOUT; they need independent configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


  1   2   3   >