[jira] [Commented] (FLINK-5398) Exclude generated files in module flink-batch-connectors in license checking

2016-12-29 Thread Xiaogang Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15786803#comment-15786803
 ] 

Xiaogang Shi commented on FLINK-5398:
-

[~fhueske] Thanks for your explanation. It helps.

> Exclude generated files in module flink-batch-connectors in license checking
> 
>
> Key: FLINK-5398
> URL: https://issues.apache.org/jira/browse/FLINK-5398
> Project: Flink
>  Issue Type: Bug
>Reporter: Xiaogang Shi
>
> Now the master branch fails to execute {{mvn install}} due to unlicensed 
> files in the module flink-batch-connectors. We should exclude these generated 
> files in the pom file.
> Unapproved licenses:
>   
> flink-batch-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/Address.java
>   
> flink-batch-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/Colors.java
>   
> flink-batch-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/Fixed16.java
>   
> flink-batch-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/User.java
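For reference, the exclusion the issue asks for would look roughly like the following Rat-plugin entry in the module's pom (a sketch only; the plugin coordinates and the exclusion pattern are assumptions inferred from the file list above):

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- Avro-generated test classes carry no license header -->
      <exclude>src/test/java/org/apache/flink/api/io/avro/generated/**</exclude>
    </excludes>
  </configuration>
</plugin>
```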



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3053: [FLINK-5400] Add accessor to folding states in Run...

2016-12-29 Thread shixiaogang
GitHub user shixiaogang opened a pull request:

https://github.com/apache/flink/pull/3053

[FLINK-5400] Add accessor to folding states in RuntimeContext

- Add accessors in RuntimeContext and KeyedStateStore
- Fix errors in the comments for reducing states in RuntimeContext and 
KeyedStateStore

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alibaba/flink flink-5400

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3053.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3053


commit f9d733a60f4809049e778045144528e8aff4a951
Author: xiaogang.sxg 
Date:   2016-12-30T03:25:05Z

Add accessor to folding states in RuntimeContext




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5400) Add accessor to folding states in RuntimeContext

2016-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15786778#comment-15786778
 ] 

ASF GitHub Bot commented on FLINK-5400:
---

GitHub user shixiaogang opened a pull request:

https://github.com/apache/flink/pull/3053

[FLINK-5400] Add accessor to folding states in RuntimeContext

- Add accessors in RuntimeContext and KeyedStateStore
- Fix errors in the comments for reducing states in RuntimeContext and 
KeyedStateStore

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alibaba/flink flink-5400

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3053.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3053


commit f9d733a60f4809049e778045144528e8aff4a951
Author: xiaogang.sxg 
Date:   2016-12-30T03:25:05Z

Add accessor to folding states in RuntimeContext




> Add accessor to folding states in RuntimeContext
> 
>
> Key: FLINK-5400
> URL: https://issues.apache.org/jira/browse/FLINK-5400
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Xiaogang Shi
>
> Now {{RuntimeContext}} does not provide the accessors to folding states. 
> Therefore users cannot use folding states in their rich functions. I think we 
> should provide the missing accessor.
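For readers unfamiliar with folding state, here is a minimal plain-Java sketch of the fold semantics such an accessor exposes (the class and method names are illustrative, not Flink's API):

```java
import java.util.function.BiFunction;

// Hypothetical sketch, not Flink's FoldingState: each added element is
// folded into a running accumulator via a user-supplied fold function.
public class FoldingAccumulator<T, ACC> {
    private ACC accumulator;
    private final BiFunction<ACC, T, ACC> foldFunction;

    public FoldingAccumulator(ACC initialValue, BiFunction<ACC, T, ACC> foldFunction) {
        this.accumulator = initialValue;
        this.foldFunction = foldFunction;
    }

    // Mirrors the add() semantics: fold the new element into the accumulator.
    public void add(T value) {
        accumulator = foldFunction.apply(accumulator, value);
    }

    // Mirrors the get() semantics: return the current accumulator.
    public ACC get() {
        return accumulator;
    }
}
```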





[jira] [Updated] (FLINK-5391) Unprotected access to shutdown in AbstractNonHaServices#checkNotShutdown()

2016-12-29 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-5391:
--
Description: 
{code}
  private void checkNotShutdown() {
checkState(!shutdown, "high availability services are shut down");
{code}

Access to {{shutdown}} is protected by a lock in other places.
The code above should acquire the lock as well.

  was:
{code}
  private void checkNotShutdown() {
checkState(!shutdown, "high availability services are shut down");
{code}
Access to shutdown is protected by lock in other places.
The code above should protect with lock as well.


> Unprotected access to shutdown in AbstractNonHaServices#checkNotShutdown()
> --
>
> Key: FLINK-5391
> URL: https://issues.apache.org/jira/browse/FLINK-5391
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   private void checkNotShutdown() {
> checkState(!shutdown, "high availability services are shut down");
> {code}
> Access to {{shutdown}} is protected by a lock in other places.
> The code above should acquire the lock as well.
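A minimal sketch of the suggested fix (the lock object and class name are assumptions; only the {{shutdown}} flag and the error message come from the issue): every read of the flag happens under the same lock that guards its writes.

```java
// Hypothetical sketch of the proposed fix, not Flink's actual class.
public class NonHaServicesSketch {
    private final Object lock = new Object();
    private boolean shutdown;

    public void shutDown() {
        synchronized (lock) {
            shutdown = true; // write under the lock
        }
    }

    public void checkNotShutdown() {
        synchronized (lock) { // read under the same lock as the writes
            if (shutdown) {
                throw new IllegalStateException("high availability services are shut down");
            }
        }
    }
}
```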





[jira] [Updated] (FLINK-5400) Add accessor to folding states in RuntimeContext

2016-12-29 Thread Xiaogang Shi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaogang Shi updated FLINK-5400:

Description: Now {{RuntimeContext}} does not provide the accessors to 
folding states. Therefore users cannot use folding states in their rich 
functions. I think we should provide the missing accessor.  (was: Now 
{{RuntimeContext}} does provide the accessors to folding states. Therefore 
users cannot use folding states in their rich functions. I think we should 
provide the missing accessor.)

> Add accessor to folding states in RuntimeContext
> 
>
> Key: FLINK-5400
> URL: https://issues.apache.org/jira/browse/FLINK-5400
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Xiaogang Shi
>
> Now {{RuntimeContext}} does not provide the accessors to folding states. 
> Therefore users cannot use folding states in their rich functions. I think we 
> should provide the missing accessor.





[jira] [Created] (FLINK-5400) Add accessor to folding states in RuntimeContext

2016-12-29 Thread Xiaogang Shi (JIRA)
Xiaogang Shi created FLINK-5400:
---

 Summary: Add accessor to folding states in RuntimeContext
 Key: FLINK-5400
 URL: https://issues.apache.org/jira/browse/FLINK-5400
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Reporter: Xiaogang Shi


Now {{RuntimeContext}} does provide the accessors to folding states. Therefore 
users cannot use folding states in their rich functions. I think we should 
provide the missing accessor.





[jira] [Commented] (FLINK-1707) Add an Affinity Propagation Library Method

2016-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15786537#comment-15786537
 ] 

ASF GitHub Bot commented on FLINK-1707:
---

Github user joseprupi commented on the issue:

https://github.com/apache/flink/pull/2885
  
That's the graph for the new commit.


https://docs.google.com/drawings/d/1ixKiCFXXjCT6UMLHroGCbR1LbQpoXhgwBam3b2RrUhA/edit?usp=sharing

The convergence criterion is that all messages fall below a threshold.
Implementing the top-2 values with a combinable reduce is still pending.
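The convergence test described above can be sketched as follows (names are illustrative, not the PR's code):

```java
// Illustrative sketch of the stated convergence criterion: the iteration
// has converged once every message magnitude falls below a fixed threshold.
public class ConvergenceCheck {
    public static boolean hasConverged(double[] messages, double threshold) {
        for (double m : messages) {
            if (Math.abs(m) >= threshold) {
                return false; // at least one message is still changing too much
            }
        }
        return true;
    }
}
```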


> Add an Affinity Propagation Library Method
> --
>
> Key: FLINK-1707
> URL: https://issues.apache.org/jira/browse/FLINK-1707
> Project: Flink
>  Issue Type: New Feature
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Josep RubiĆ³
>Priority: Minor
>  Labels: requires-design-doc
> Attachments: Binary_Affinity_Propagation_in_Flink_design_doc.pdf
>
>
> This issue proposes adding an implementation of the Affinity Propagation 
> algorithm as a Gelly library method and a corresponding example.
> The algorithm is described in paper [1] and a description of a vertex-centric 
> implementation can be found in [2].
> [1]: http://www.psi.toronto.edu/affinitypropagation/FreyDueckScience07.pdf
> [2]: http://event.cwi.nl/grades2014/00-ching-slides.pdf
> Design doc:
> https://docs.google.com/document/d/1QULalzPqMVICi8jRVs3S0n39pell2ZVc7RNemz_SGA4/edit?usp=sharing
> Example spreadsheet:
> https://docs.google.com/spreadsheets/d/1CurZCBP6dPb1IYQQIgUHVjQdyLxK0JDGZwlSXCzBcvA/edit?usp=sharing
> Graph:
> https://docs.google.com/drawings/d/1PC3S-6AEt2Gp_TGrSfiWzkTcL7vXhHSxvM6b9HglmtA/edit?usp=sharing





[GitHub] flink issue #2885: [FLINK-1707] Affinity propagation

2016-12-29 Thread joseprupi
Github user joseprupi commented on the issue:

https://github.com/apache/flink/pull/2885
  
That's the graph for the new commit.


https://docs.google.com/drawings/d/1ixKiCFXXjCT6UMLHroGCbR1LbQpoXhgwBam3b2RrUhA/edit?usp=sharing

The convergence criterion is that all messages fall below a threshold.
Implementing the top-2 values with a combinable reduce is still pending.




[GitHub] flink issue #3039: [FLINK-5280] Update TableSource to support nested data

2016-12-29 Thread mushketyk
Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/3039
  
Hi @fhueske, @wuchong 

I've updated my PR according to your reviews.




[jira] [Commented] (FLINK-5280) Extend TableSource to support nested data

2016-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15786387#comment-15786387
 ] 

ASF GitHub Bot commented on FLINK-5280:
---

Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/3039
  
Hi @fhueske, @wuchong 

I've updated my PR according to your reviews.


> Extend TableSource to support nested data
> -
>
> Key: FLINK-5280
> URL: https://issues.apache.org/jira/browse/FLINK-5280
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>
> The {{TableSource}} interface does currently only support the definition of 
> flat rows. 
> However, there are several storage formats for nested data that should be 
> supported such as Avro, Json, Parquet, and Orc. The Table API and SQL can 
> also natively handle nested rows.
> The {{TableSource}} interface and the code to register table sources in 
> Calcite's schema need to be extended to support nested data.





[jira] [Commented] (FLINK-5360) Fix arguments names in WindowedStream

2016-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15786385#comment-15786385
 ] 

ASF GitHub Bot commented on FLINK-5360:
---

Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/3022
  
Hi @xhumanoid 

I've updated my PR according to your review.



> Fix arguments names in WindowedStream
> -
>
> Key: FLINK-5360
> URL: https://issues.apache.org/jira/browse/FLINK-5360
> Project: Flink
>  Issue Type: Bug
>Reporter: Ivan Mushketyk
>Assignee: Ivan Mushketyk
>
> Should be "field" instead of "positionToMaxBy" in some methods.





[GitHub] flink issue #3022: [FLINK-5360] Fix argument names in WindowedStream

2016-12-29 Thread mushketyk
Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/3022
  
Hi @xhumanoid 

I've updated my PR according to your review.





[GitHub] flink pull request #3052: Swap the pattern matching order

2016-12-29 Thread Fokko
GitHub user Fokko opened a pull request:

https://github.com/apache/flink/pull/3052

Swap the pattern matching order

Swap the pattern matching order: because `EuclideanDistanceMetric extends 
SquaredEuclideanDistanceMetric extends DistanceMetric`, the 
`EuclideanDistanceMetric` case is unreachable unless it is matched first:

```
[WARNING] 
/Users/fokkodriesprong/Desktop/flink-fokko/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/QuadTree.scala:106:
 warning: unreachable code
[WARNING] case _: EuclideanDistanceMetric => math.sqrt(minDist)
[WARNING] ^
[WARNING] warning: Class org.apache.log4j.Level not found - continuing with 
a stub.
[WARNING] warning: there were 1 feature warning(s); re-run with -feature 
for details
[WARNING] three warnings found
```
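The same ordering pitfall can be reproduced with a plain instanceof chain (an illustrative Java rendering, not the PR's Scala code):

```java
// Because EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric,
// testing the superclass first would make the subclass branch unreachable.
// Here the branches are in the corrected order: most specific type first.
public class MetricDispatch {
    static class DistanceMetric {}
    static class SquaredEuclideanDistanceMetric extends DistanceMetric {}
    static class EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric {}

    public static double minDistance(DistanceMetric metric, double minDist) {
        if (metric instanceof EuclideanDistanceMetric) {
            return Math.sqrt(minDist); // subclass case must come first
        } else if (metric instanceof SquaredEuclideanDistanceMetric) {
            return minDist;
        }
        return minDist;
    }
}
```

Reversing the two branches would route every `EuclideanDistanceMetric` through the squared case, which is exactly the unreachable-code warning quoted above.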

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Fokko/flink fd-fix-pattern-matching

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3052.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3052


commit 29ebffc77cfbe917796f44764936972b578ebd38
Author: Fokko Driesprong 
Date:   2016-12-29T22:49:14Z

Swap the pattern matching order, because EuclideanDistanceMetric extends 
SquaredEuclideanDistanceMetric extends DistanceMetric, otherwise the 
EuclideanDistance cannot be executed.






[jira] [Commented] (FLINK-3154) Update Kryo version from 2.24.0 to 3.0.3

2016-12-29 Thread Martin Grotzke (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15785348#comment-15785348
 ] 

Martin Grotzke commented on FLINK-3154:
---

As stated in Kryo's changelog, in 3.x the serialization format for 
*Unsafe-based* IO (e.g. {{UnsafeInput}}) changed, not for regular 
{{Input}}/{{Output}} - 
https://github.com/EsotericSoftware/kryo/blob/master/CHANGES.md#compatibility-3

In Kryo 4.x, the serialization format of the FieldSerializer for generic fields 
changed, but the former serialization behaviour/format can be restored via 
{{kryo.getFieldSerializerConfig().setOptimizedGenerics(true);}} - 
https://github.com/EsotericSoftware/kryo/releases/tag/kryo-parent-4.0.0.

Perhaps this helps.

> Update Kryo version from 2.24.0 to 3.0.3
> 
>
> Key: FLINK-3154
> URL: https://issues.apache.org/jira/browse/FLINK-3154
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.0.0
>Reporter: Maximilian Michels
>Priority: Minor
> Fix For: 1.0.0
>
>
> Flink's Kryo version is outdated and could be updated to a newer version, 
> e.g. kryo-3.0.3.
> From the ML: we cannot bump the Kryo version easily - the serialization format 
> changed (that's why they have a new major version), which would render all 
> Flink savepoints and checkpoints incompatible.





[jira] [Closed] (FLINK-5398) Exclude generated files in module flink-batch-connectors in license checking

2016-12-29 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-5398.

Resolution: Invalid

We recently merged {{flink-batch-connectors}} and 
{{flink-streaming-connectors}} into one Maven module, {{flink-connectors}} 
(FLINK-4676).

Since the generated Avro files were not checked into the Git repository and 
were ignored by Git, they were not removed when the latest changes were pulled 
in. 
The {{flink-batch-connectors}} folder (and {{flink-streaming-connectors}}) can 
simply be deleted as it is no longer part of the repository.

> Exclude generated files in module flink-batch-connectors in license checking
> 
>
> Key: FLINK-5398
> URL: https://issues.apache.org/jira/browse/FLINK-5398
> Project: Flink
>  Issue Type: Bug
>Reporter: Xiaogang Shi
>
> Now the master branch fails to execute {{mvn install}} due to unlicensed 
> files in the module flink-batch-connectors. We should exclude these generated 
> files in the pom file.
> Unapproved licenses:
>   
> flink-batch-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/Address.java
>   
> flink-batch-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/Colors.java
>   
> flink-batch-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/Fixed16.java
>   
> flink-batch-connectors/flink-avro/src/test/java/org/apache/flink/api/io/avro/generated/User.java





[jira] [Commented] (FLINK-3849) Add FilterableTableSource interface and translation rule

2016-12-29 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15785078#comment-15785078
 ] 

Anton Solovev commented on FLINK-3849:
--

Yes, one rule for pushing them together. But I've just realized your idea; I 
thought it wouldn't match after one of the rules is applied.

> Add FilterableTableSource interface and translation rule
> 
>
> Key: FLINK-3849
> URL: https://issues.apache.org/jira/browse/FLINK-3849
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Anton Solovev
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows
> {code}
> def trait FilterableTableSource {
>   // returns unsupported predicate expression
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.
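One possible reading of that contract, sketched in plain Java (names mirror the proposed trait; `Predicate` stands in for Flink's `Expression`, and the support check is a hypothetical parameter): the source keeps the predicates it can push down and hands the unsupported remainder back to the planner.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the proposed setPredicate contract, not Flink code.
public class FilterableSourceSketch<ROW> {
    private final List<Predicate<ROW>> pushedDown = new ArrayList<>();

    // Accept the predicates this source supports; return the rest so the
    // planner can still evaluate them after the scan.
    public List<Predicate<ROW>> setPredicates(List<Predicate<ROW>> predicates,
                                              Predicate<Predicate<ROW>> supported) {
        List<Predicate<ROW>> unsupported = new ArrayList<>();
        for (Predicate<ROW> p : predicates) {
            if (supported.test(p)) {
                pushedDown.add(p);
            } else {
                unsupported.add(p);
            }
        }
        return unsupported;
    }

    public int pushedDownCount() {
        return pushedDown.size();
    }
}
```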





[jira] [Commented] (FLINK-5345) IOManager failed to properly clean up temp file directory

2016-12-29 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15785037#comment-15785037
 ] 

Anton Solovev commented on FLINK-5345:
--

1.1.3 RC3? I only see 1.1.3 RC2 in the GitHub branches.

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>Assignee: Anton Solovev
>  Labels: simplex, starter
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?
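If two threads do race on the cleanup, one way to make it tolerant is sketched below as a hypothetical helper (not Flink's actual shutdown code): treat a directory or file that has already vanished as successfully cleaned instead of failing.

```java
import java.io.File;

// Hypothetical race-tolerant cleanup sketch: if a concurrent deleter removes
// entries first, listFiles() returns null and we simply stop, instead of
// throwing like FileUtils.cleanDirectory does on a missing directory.
public class TolerantCleanup {
    public static void deleteQuietly(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) {
            return; // directory vanished (or is a plain file): nothing left to do
        }
        for (File entry : entries) {
            if (entry.isDirectory()) {
                deleteQuietly(entry);
            } else {
                entry.delete(); // ignore the result: the file may already be gone
            }
        }
        dir.delete();
    }
}
```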





[jira] [Assigned] (FLINK-5345) IOManager failed to properly clean up temp file directory

2016-12-29 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5345:


Assignee: Anton Solovev

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>Assignee: Anton Solovev
>  Labels: simplex, starter
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?


