[GitHub] flink issue #2857: [FLINK-5146] Improved resource cleanup in RocksDB keyed s...

2016-12-03 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2857
  
Thanks for your work: 👍

I merged it, could you please close this PR?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5146) Improved resource cleanup in RocksDB keyed state backend

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717667#comment-15717667
 ] 

ASF GitHub Bot commented on FLINK-5146:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/2857
  
Thanks for your work: 👍

I merged it, could you please close this PR?


> Improved resource cleanup in RocksDB keyed state backend
> 
>
> Key: FLINK-5146
> URL: https://issues.apache.org/jira/browse/FLINK-5146
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.2.0
>
>
> Currently, resources such as taken snapshots or iterators are not always 
> cleaned up in the RocksDB state backend. In particular, not starting the 
> runnable future will leave taken snapshots unreleased.
> We should improve the release of all resources allocated through the RocksDB 
> JNI bridge.
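
The sketch below illustrates the cleanup idea in the description, under the assumption of purely illustrative names (NativeResourceGuard is not an actual Flink or RocksDB class): every native resource acquired for a snapshot, such as an iterator or a taken snapshot, is registered with a guard so that one close() call releases everything, even if the runnable future is never started.

{noformat}
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch, not the actual Flink backend code: track every native
// resource taken for a snapshot so that a single close() releases them all,
// even when the snapshot future is never run.
public class NativeResourceGuard implements Closeable {

    private final Deque<Closeable> acquired = new ArrayDeque<>();

    public <T extends Closeable> T register(T resource) {
        acquired.push(resource);
        return resource;
    }

    @Override
    public void close() throws IOException {
        IOException firstFailure = null;
        // Release in reverse acquisition order, e.g. iterators before the
        // snapshot they read from.
        while (!acquired.isEmpty()) {
            try {
                acquired.pop().close();
            } catch (IOException e) {
                if (firstFailure == null) {
                    firstFailure = e;
                }
            }
        }
        if (firstFailure != null) {
            throw firstFailure;
        }
    }
}
{noformat}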



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2857: [FLINK-5146] Improved resource cleanup in RocksDB ...

2016-12-03 Thread StefanRRichter
Github user StefanRRichter closed the pull request at:

https://github.com/apache/flink/pull/2857




[GitHub] flink issue #2857: [FLINK-5146] Improved resource cleanup in RocksDB keyed s...

2016-12-03 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/2857
  
Thanks for the reviews and merging, @aljoscha @tillrohrmann. Closing this.




[jira] [Commented] (FLINK-5146) Improved resource cleanup in RocksDB keyed state backend

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717789#comment-15717789
 ] 

ASF GitHub Bot commented on FLINK-5146:
---

Github user StefanRRichter closed the pull request at:

https://github.com/apache/flink/pull/2857




[jira] [Commented] (FLINK-5146) Improved resource cleanup in RocksDB keyed state backend

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717788#comment-15717788
 ] 

ASF GitHub Bot commented on FLINK-5146:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/2857
  
Thanks for the reviews and merging, @aljoscha @tillrohrmann. Closing this.




[GitHub] flink issue #2898: [FLINK-5109] fix invalid content-encoding header of webmo...

2016-12-03 Thread Hapcy
Github user Hapcy commented on the issue:

https://github.com/apache/flink/pull/2898
  
I would like to ask when a merge is expected. Do I have to do anything else?




[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717899#comment-15717899
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user Hapcy commented on the issue:

https://github.com/apache/flink/pull/2898
  
I would like to ask when a merge is expected. Do I have to do anything else?


> Invalid Content-Encoding Header in REST API responses
> -
>
> Key: FLINK-5109
> URL: https://issues.apache.org/jira/browse/FLINK-5109
> Project: Flink
>  Issue Type: Bug
>  Components: Web Client, Webfrontend
>Affects Versions: 1.1.0, 1.2.0, 1.1.1, 1.1.2, 1.1.3
>Reporter: Móger Tibor László
>  Labels: http-headers, rest_api
>
> On REST API calls the Flink runtime responds with the header 
> Content-Encoding, containing the value "utf-8". According to the HTTP/1.1 
> standard this header is invalid. ( 
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.5 ) 
> Possible acceptable values are: gzip, compress, deflate. Or it should be 
> omitted.
> The invalid header may cause malfunction in projects building against Flink.
> The invalid header may be present in earlier versions as well.
> Proposed solution: Remove the lines in the project where the CONTENT_ENCODING 
> header is set to "utf-8". (I could do this in a PR.)
> A possible solution, but one that may need more knowledge and skills than mine: 
> introduce real content encoding. Doing so may need some configuration because 
> Flink would then have to encode the responses properly (even paying attention 
> to the request's Accept-Encoding headers).
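
A minimal sketch of the first proposed solution, assuming the Netty 4 HTTP codec that the web frontend is built on (JsonResponseSketch and jsonResponse are hypothetical helper names, not Flink code): the charset is declared through Content-Type and no Content-Encoding header is set at all, since "utf-8" is not a content coding, while gzip, compress and deflate would be.

{noformat}
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import java.nio.charset.StandardCharsets;

public class JsonResponseSketch {

    public static FullHttpResponse jsonResponse(String json) {
        byte[] bytes = json.getBytes(StandardCharsets.UTF_8);
        FullHttpResponse response = new DefaultFullHttpResponse(
            HttpVersion.HTTP_1_1, HttpResponseStatus.OK, Unpooled.wrappedBuffer(bytes));
        // The charset belongs in Content-Type; no Content-Encoding header is set.
        response.headers().set(HttpHeaders.Names.CONTENT_TYPE, "application/json; charset=UTF-8");
        response.headers().set(HttpHeaders.Names.CONTENT_LENGTH, bytes.length);
        return response;
    }
}
{noformat}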





[GitHub] flink pull request #2928: [FLINK-5108] Remove ClientShutdownHook during job ...

2016-12-03 Thread Renkai
GitHub user Renkai opened a pull request:

https://github.com/apache/flink/pull/2928

[FLINK-5108] Remove ClientShutdownHook during job execution

This patch simply removes the ClientShutdownHook-related code. The change may 
cause `org.apache.flink.yarn.YarnClusterClient#pollingRunner` to be stopped 
abruptly when the process exits, but that seems OK because the polling runner 
thread is a daemon thread.
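
The snippet below is a small, self-contained JDK example (not the actual Flink code) of the reasoning above: a daemon thread does not keep the JVM alive, so once the shutdown hook is gone the polling thread is simply stopped when the process exits.

```java
public class DaemonExitDemo {

    public static void main(String[] args) throws InterruptedException {
        Thread poller = new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(1000); // pretend to poll the cluster status
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        poller.setDaemon(true); // a daemon thread cannot block JVM shutdown
        poller.start();

        Thread.sleep(100);
        // main returns; the JVM exits and the daemon poller is stopped abruptly.
    }
}
```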

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Renkai/flink FLINK-5108

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2928.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2928


commit 7504d57b2e24f70b96c0761102b689bf62653db5
Author: renkai 
Date:   2016-12-03T11:27:39Z

remove ClientShutdownHook






[jira] [Commented] (FLINK-5108) Remove ClientShutdownHook during job execution

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15718274#comment-15718274
 ] 

ASF GitHub Bot commented on FLINK-5108:
---

GitHub user Renkai opened a pull request:

https://github.com/apache/flink/pull/2928

[FLINK-5108] Remove ClientShutdownHook during job execution

This patch simply removes the ClientShutdownHook-related code. The change may 
cause `org.apache.flink.yarn.YarnClusterClient#pollingRunner` to be stopped 
abruptly when the process exits, but that seems OK because the polling runner 
thread is a daemon thread.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Renkai/flink FLINK-5108

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2928.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2928


commit 7504d57b2e24f70b96c0761102b689bf62653db5
Author: renkai 
Date:   2016-12-03T11:27:39Z

remove ClientShutdownHook




> Remove ClientShutdownHook during job execution
> --
>
> Key: FLINK-5108
> URL: https://issues.apache.org/jira/browse/FLINK-5108
> Project: Flink
>  Issue Type: Bug
>  Components: YARN Client
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Maximilian Michels
>Assignee: Renkai Ge
>Priority: Blocker
> Fix For: 1.2.0
>
>
> The behavior of the Standalone mode is to not react to client interrupts once 
> a job has been deployed. We should change the Yarn client implementation to 
> behave the same. This avoids accidental shutdown of the job, e.g. when the 
> user sends an interrupt via CTRL-C or when the client machine shuts down.





[jira] [Created] (FLINK-5242) Implement Scala API for BipartiteGraph

2016-12-03 Thread Ivan Mushketyk (JIRA)
Ivan Mushketyk created FLINK-5242:
-

 Summary: Implement Scala API for BipartiteGraph
 Key: FLINK-5242
 URL: https://issues.apache.org/jira/browse/FLINK-5242
 Project: Flink
  Issue Type: New Feature
  Components: Gelly
Reporter: Ivan Mushketyk
Assignee: Ivan Mushketyk


Should implement BipartiteGraph in the flink-gelly-scala project, similarly to the Graph 
class.

Depends on this: https://issues.apache.org/jira/browse/FLINK-2254





[jira] [Created] (FLINK-5243) Implement an example for BipartiteGraph

2016-12-03 Thread Ivan Mushketyk (JIRA)
Ivan Mushketyk created FLINK-5243:
-

 Summary: Implement an example for BipartiteGraph
 Key: FLINK-5243
 URL: https://issues.apache.org/jira/browse/FLINK-5243
 Project: Flink
  Issue Type: New Feature
  Components: Gelly
Reporter: Ivan Mushketyk


Should implement an example for BipartiteGraph in the gelly-examples project, similarly 
to the examples for the Graph class.

Depends on this: https://issues.apache.org/jira/browse/FLINK-2254





[jira] [Created] (FLINK-5244) Implement methods for BipartiteGraph transformations

2016-12-03 Thread Ivan Mushketyk (JIRA)
Ivan Mushketyk created FLINK-5244:
-

 Summary: Implement methods for BipartiteGraph transformations
 Key: FLINK-5244
 URL: https://issues.apache.org/jira/browse/FLINK-5244
 Project: Flink
  Issue Type: Improvement
  Components: Gelly
Reporter: Ivan Mushketyk


BipartiteGraph should implement methods for transforming the graph, like map, 
filter, join, union, difference, etc., similarly to the Graph class.

Depends on: https://issues.apache.org/jira/browse/FLINK-2254





[jira] [Created] (FLINK-5245) Add support for BipartiteGraph mutations

2016-12-03 Thread Ivan Mushketyk (JIRA)
Ivan Mushketyk created FLINK-5245:
-

 Summary: Add support for BipartiteGraph mutations
 Key: FLINK-5245
 URL: https://issues.apache.org/jira/browse/FLINK-5245
 Project: Flink
  Issue Type: Improvement
  Components: Gelly
Reporter: Ivan Mushketyk


Implement methods for adding and removing vertices and edges, similarly to the Graph 
class.

Depends on https://issues.apache.org/jira/browse/FLINK-2254





[GitHub] flink issue #2898: [FLINK-5109] fix invalid content-encoding header of webmo...

2016-12-03 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2898
  
Unless a PR is urgent we typically allow a few days to see if any 
additional comments come in from the community. This looks good so I'm merging 
it now. Thanks for the contribution!




[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15718416#comment-15718416
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2898
  
Unless a PR is urgent we typically allow a few days to see if any 
additional comments come in from the community. This looks good so I'm merging 
it now. Thanks for the contribution!




[GitHub] flink pull request #2898: [FLINK-5109] fix invalid content-encoding header o...

2016-12-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2898




[jira] [Commented] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15718419#comment-15718419
 ] 

ASF GitHub Bot commented on FLINK-5109:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2898




[jira] [Updated] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-12-03 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan updated FLINK-5109:
--
Fix Version/s: 1.2.0



[jira] [Closed] (FLINK-5109) Invalid Content-Encoding Header in REST API responses

2016-12-03 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-5109.
-
Resolution: Fixed

Fixed in 08e7ba4920b9b44dc15269e4f507d89025209937



[jira] [Created] (FLINK-5246) Don't discard unknown checkpoint messages in the CheckpointCoordinator

2016-12-03 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-5246:


 Summary: Don't discard unknown checkpoint messages in the 
CheckpointCoordinator
 Key: FLINK-5246
 URL: https://issues.apache.org/jira/browse/FLINK-5246
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.1.4
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.1.4


The delicate interplay of the {{CheckpointCoordinator}} and the 
{{SavepointCoordinator}} requires that unknown checkpoint messages are not 
discarded but given to the other coordinator. If neither coordinator accepts 
the checkpoint message, then the associated state will be discarded by the 
{{JobManager}}.





[jira] [Created] (FLINK-5247) Setting allowedLateness to a non-zero value should throw exception for processing-time windows

2016-12-03 Thread Rohit Agarwal (JIRA)
Rohit Agarwal created FLINK-5247:


 Summary: Setting allowedLateness to a non-zero value should throw 
exception for processing-time windows
 Key: FLINK-5247
 URL: https://issues.apache.org/jira/browse/FLINK-5247
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Affects Versions: 1.1.3, 1.1.2, 1.1.1, 1.1.0
Reporter: Rohit Agarwal


Related to FLINK-3714 and FLINK-4239





[GitHub] flink pull request #2929: [FLINK-5247] Fix check to make sure that we throw ...

2016-12-03 Thread mindprince
GitHub user mindprince opened a pull request:

https://github.com/apache/flink/pull/2929

[FLINK-5247] Fix check to make sure that we throw error when allowed 
lateness is set for non event-time windows.

Also, fix outdated documentation.
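
A rough sketch of the kind of guard the title describes, using hypothetical names (HypotheticalWindowedStream is not the real WindowedStream class): a non-zero allowed lateness is rejected whenever the window assigner is not an event-time assigner.

```java
// Minimal sketch of the guard described above; class and field names are illustrative.
public class HypotheticalWindowedStream {

    private final boolean eventTimeAssigner;
    private long allowedLatenessMillis;

    public HypotheticalWindowedStream(boolean eventTimeAssigner) {
        this.eventTimeAssigner = eventTimeAssigner;
    }

    public HypotheticalWindowedStream allowedLateness(long millis) {
        if (millis < 0) {
            throw new IllegalArgumentException("The allowed lateness cannot be negative.");
        }
        // Late elements only make sense when windows are driven by event time.
        if (millis > 0 && !eventTimeAssigner) {
            throw new UnsupportedOperationException(
                "Setting allowedLateness is only valid for event-time windows.");
        }
        this.allowedLatenessMillis = millis;
        return this;
    }
}
```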

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mindprince/flink FLINK-5247-allowed-lateness

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2929.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2929


commit 10d523865e07df61e5a1c2b35e2201b498c46426
Author: Rohit Agarwal 
Date:   2016-12-03T20:15:45Z

[FLINK-5247] Fix check to make sure that we throw error when allowed 
lateness is set for non event-time windows.

Also, fix outdated documentation.






[jira] [Commented] (FLINK-5247) Setting allowedLateness to a non-zero value should throw exception for processing-time windows

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15718675#comment-15718675
 ] 

ASF GitHub Bot commented on FLINK-5247:
---

GitHub user mindprince opened a pull request:

https://github.com/apache/flink/pull/2929

[FLINK-5247] Fix check to make sure that we throw error when allowed 
lateness is set for non event-time windows.

Also, fix outdated documentation.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mindprince/flink FLINK-5247-allowed-lateness

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2929.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2929


commit 10d523865e07df61e5a1c2b35e2201b498c46426
Author: Rohit Agarwal 
Date:   2016-12-03T20:15:45Z

[FLINK-5247] Fix check to make sure that we throw error when allowed 
lateness is set for non event-time windows.

Also, fix outdated documentation.




> Setting allowedLateness to a non-zero value should throw exception for 
> processing-time windows
> --
>
> Key: FLINK-5247
> URL: https://issues.apache.org/jira/browse/FLINK-5247
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.1.0, 1.1.1, 1.1.2, 1.1.3
>Reporter: Rohit Agarwal
>
> Related to FLINK-3714 and FLINK-4239





[jira] [Commented] (FLINK-5247) Setting allowedLateness to a non-zero value should throw exception for processing-time windows

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15718679#comment-15718679
 ] 

ASF GitHub Bot commented on FLINK-5247:
---

Github user mindprince commented on the issue:

https://github.com/apache/flink/pull/2929
  
cc - @kl0u, @aljoscha for review




[GitHub] flink issue #2929: [FLINK-5247] Fix check to make sure that we throw error w...

2016-12-03 Thread mindprince
Github user mindprince commented on the issue:

https://github.com/apache/flink/pull/2929
  
cc - @kl0u, @aljoscha for review




[GitHub] flink pull request #2930: [FLINK-5246] Don't discard checkpoint messages if ...

2016-12-03 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/2930

[FLINK-5246] Don't discard checkpoint messages if they are unknown

This is the case if the savepoint coordinator has triggered a checkpoint. The 
corresponding checkpoint messages are not known to the checkpoint coordinator 
and thus should not be discarded. Instead, the JobManager will now only discard 
messages which have been accepted by neither the CheckpointCoordinator nor the 
SavepointCoordinator.
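
A minimal, self-contained sketch of that routing rule with assumed names (Coordinator, CheckpointMessage and MessageRouter are illustrative, not the actual JobManager types): the message is offered to both coordinators, and its state is discarded only when neither accepts it.

```java
// Illustrative routing rule, not the actual JobManager code.
interface Coordinator {
    boolean receiveCheckpointMessage(CheckpointMessage msg);
}

interface CheckpointMessage {
    void discardState();
}

class MessageRouter {

    private final Coordinator checkpointCoordinator;
    private final Coordinator savepointCoordinator;

    MessageRouter(Coordinator checkpointCoordinator, Coordinator savepointCoordinator) {
        this.checkpointCoordinator = checkpointCoordinator;
        this.savepointCoordinator = savepointCoordinator;
    }

    void handle(CheckpointMessage msg) {
        // Offer the message to the checkpoint coordinator first, then to the
        // savepoint coordinator; only discard the state if neither accepts it.
        boolean accepted = checkpointCoordinator.receiveCheckpointMessage(msg)
            || savepointCoordinator.receiveCheckpointMessage(msg);
        if (!accepted) {
            msg.discardState();
        }
    }
}
```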

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink fixCheckpointMessages

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2930.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2930


commit e872b5fbc454e379eab6788bae77c6bf3e2e98af
Author: Till Rohrmann 
Date:   2016-12-03T19:15:35Z

[FLINK-5246] Don't discard checkpoint messages if they are unknown

This is the case if the savepoint coordinator has triggered a checkpoint. The 
corresponding checkpoint messages are not known to the checkpoint coordinator 
and thus should not be discarded. Instead, the JobManager will now only discard 
messages which have been accepted by neither the CheckpointCoordinator nor the 
SavepointCoordinator.






[jira] [Commented] (FLINK-5246) Don't discard unknown checkpoint messages in the CheckpointCoordinator

2016-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15718697#comment-15718697
 ] 

ASF GitHub Bot commented on FLINK-5246:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/2930

[FLINK-5246] Don't discard checkpoint messages if they are unknown

This is the case if the savepoint coordinator has triggered a checkpoint. The 
corresponding checkpoint messages are not known to the checkpoint coordinator 
and thus should not be discarded. Instead, the JobManager will now only discard 
messages which have been accepted by neither the CheckpointCoordinator nor the 
SavepointCoordinator.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink fixCheckpointMessages

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2930.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2930


commit e872b5fbc454e379eab6788bae77c6bf3e2e98af
Author: Till Rohrmann 
Date:   2016-12-03T19:15:35Z

[FLINK-5246] Don't discard checkpoint messages if they are unknown

This is the case if the savepoint coordinator has triggered a checkpoint. The 
corresponding checkpoint messages are not known to the checkpoint coordinator 
and thus should not be discarded. Instead, the JobManager will now only discard 
messages which have been accepted by neither the CheckpointCoordinator nor the 
SavepointCoordinator.




> Don't discard unknown checkpoint messages in the CheckpointCoordinator
> --
>
> Key: FLINK-5246
> URL: https://issues.apache.org/jira/browse/FLINK-5246
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.4
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.1.4
>
>
> The delicate interplay of the {{CheckpointCoordinator}} and the 
> {{SavepointCoordinator}} requires that unknown checkpoint messages are not 
> discarded but given to the other coordinator. If neither coordinator accepts 
> the checkpoint message, then the associated state will be discarded by 
> the {{JobManager}}.





[jira] [Commented] (FLINK-5039) Avro GenericRecord support is broken

2016-12-03 Thread Dave Torok (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15718737#comment-15718737
 ] 

Dave Torok commented on FLINK-5039:
---

I have spent 2 days on this and HAVE THE SOLUTION.

Fix: Bump the Avro version to at least 1.7.7 from the current 1.7.6.

Root cause: within "Schema.class", the "field" position is a transient field and 
does not get serialized by Kryo!

See https://issues.apache.org/jira/browse/AVRO-1476, which specifically mentions 
Kryo.

This was fixed in Avro 1.7.7.

This is also the cause of other GenericRecord issues, such as the 'union' issue 
mentioned here: 
http://stackoverflow.com/questions/37115618/apache-flink-union-operator-giving-wrong-response

PLEASE BUMP THIS ASAP.

I have verified the fix on my local machine by replacing the Avro classes 
within the flink-dist_2.11-1.1.3.jar, and it corrected my issue.
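
A minimal, self-contained illustration of that root cause, assuming the Kryo library is on the classpath (FieldLike is purely illustrative and not Avro's actual Schema.Field): Kryo's default FieldSerializer does not write transient fields, so after a round trip the position falls back to the field initializer value.

{noformat}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.io.ByteArrayOutputStream;

public class TransientFieldDemo {

    public static class FieldLike {
        String name = "";
        transient int position = -1; // analogous to the transient position in Avro 1.7.6
    }

    public static void main(String[] args) {
        FieldLike original = new FieldLike();
        original.name = "foo";
        original.position = 3;

        Kryo kryo = new Kryo();
        kryo.register(FieldLike.class);

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        Output output = new Output(bytes);
        kryo.writeObject(output, original);
        output.close();

        FieldLike copy = kryo.readObject(new Input(bytes.toByteArray()), FieldLike.class);
        // Prints "foo / -1": the transient position was never serialized.
        System.out.println(copy.name + " / " + copy.position);
    }
}
{noformat}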

> Avro GenericRecord support is broken
> 
>
> Key: FLINK-5039
> URL: https://issues.apache.org/jira/browse/FLINK-5039
> Project: Flink
>  Issue Type: Bug
>  Components: Batch Connectors and Input/Output Formats
>Affects Versions: 1.1.3
>Reporter: Bruno Dumon
>Priority: Minor
>
> Avro GenericRecord support was introduced in FLINK-3691, but it seems like 
> the GenericRecords are not properly (de)serialized.
> This can be easily seen with a program like this:
> {noformat}
>   env.createInput(new AvroInputFormat<>(new Path("somefile.avro"), 
> GenericRecord.class))
> .first(10)
> .print();
> {noformat}
> which will print records in which all fields have the same value:
> {noformat}
> {"foo": 1478628723066, "bar": 1478628723066, "baz": 1478628723066, ...}
> {"foo": 1478628723179, "bar": 1478628723179, "baz": 1478628723179, ...}
> {noformat}
> If I'm not mistaken, the AvroInputFormat does essentially 
> TypeExtractor.getForClass(GenericRecord.class), but GenericRecords are not 
> POJOs.
> Furthermore, each GenericRecord contains a pointer to the record schema. I 
> guess the current naive approach will serialize this schema with each record, 
> which is quite inefficient (the schema is typically more complex and much 
> larger than the data). We probably need a TypeInformation and TypeSerializer 
> specific to Avro GenericRecords, which could just use Avro serialization.


