[jira] [Commented] (FLINK-7279) MiniCluster can deadlock at shut down

2017-07-30 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106586#comment-16106586 ]

ASF GitHub Bot commented on FLINK-7279:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4416


> MiniCluster can deadlock at shut down
> -
>
> Key: FLINK-7279
> URL: https://issues.apache.org/jira/browse/FLINK-7279
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Nico Kruber
>  Labels: flip-6
> Fix For: 1.4.0
>
>
> The {{MiniCluster}} can deadlock if the fatal error handler is called while 
> the {{MiniCluster}} shuts down. The reason is that the shutdown happens under 
> a lock which the fatal error handler requires as well. If the {{MiniCluster}} 
> then tries to shut down the underlying RPC service, which waits for all 
> actors to terminate, the shutdown will never complete because one actor is 
> still waiting for the lock.
> One solution would be to ignore fatal error handler calls while the 
> {{MiniCluster}} is shutting down.
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/257811319/log.txt
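The lock/await cycle described above can be reproduced in isolation. The following is a minimal, hypothetical Java sketch (the `DeadlockDemo` class, its method names, and the timeout are illustrative, not Flink code): the shutdown thread holds a lock while awaiting worker termination, and the worker's fatal-error path needs that same lock.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical reproduction of the MiniCluster deadlock pattern:
// shutdown() holds 'lock' while waiting for the worker to terminate,
// but the worker's fatal-error handler needs the same lock to proceed.
public class DeadlockDemo {
    private final Object lock = new Object();
    private final CountDownLatch workerDone = new CountDownLatch(1);

    void onFatalError() {
        synchronized (lock) {          // blocks: the shutdown thread holds 'lock'
            workerDone.countDown();    // never reached while shutdown() runs
        }
    }

    /** Returns true if the worker terminated, false on (timed-out) deadlock. */
    boolean shutdown() throws InterruptedException {
        Thread worker = new Thread(this::onFatalError);
        synchronized (lock) {          // analogous to MiniCluster.shutdown()
            worker.start();
            // analogous to the RPC service awaiting actor termination; a real
            // deadlock waits forever, here we time out to make it observable
            return workerDone.await(500, TimeUnit.MILLISECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        boolean terminated = new DeadlockDemo().shutdown();
        System.out.println(terminated ? "clean shutdown" : "deadlock detected");
    }
}
```

In the real deadlock both threads wait forever; the sketch uses a bounded `await` so the cycle is demonstrated instead of hanging the JVM.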



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7279) MiniCluster can deadlock at shut down

2017-07-28 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16105146#comment-16105146 ]

ASF GitHub Bot commented on FLINK-7279:
---

Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/4416
  
ok, I think I misunderstood the intention of `TaskExecutor#onFatalErrorAsync` 
- it is meant for calls from outside the `TaskExecutor` thread, so that the 
error handler runs inside the `TaskExecutor` thread...

I'll create a new approach.
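For context, the dispatch pattern referred to here can be sketched as follows. This is an illustrative stand-in, not Flink's actual API: the `Component` class and its methods are hypothetical, modeling how an `onFatalErrorAsync`-style entry point hands the error over to the component's own single thread, much like a message posted to an actor's mailbox.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the onFatalErrorAsync pattern: callers on any
// thread enqueue the error, and the handler itself always runs on the
// component's single dedicated thread.
public class Component {
    private final ExecutorService mainThread = Executors.newSingleThreadExecutor();
    private volatile Throwable lastFatalError;

    /** Safe to call from any thread; hands the error to the main thread. */
    public void onFatalErrorAsync(Throwable t) {
        mainThread.execute(() -> onFatalError(t));
    }

    /** Runs only in the main thread, so no locking is needed here. */
    private void onFatalError(Throwable t) {
        lastFatalError = t;  // a real handler would notify a fatal error handler
    }

    public Throwable getLastFatalError() {
        return lastFatalError;
    }

    /** Drains pending work and stops the main thread. */
    public void close() throws InterruptedException {
        mainThread.shutdown();
        mainThread.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The catch, as the comment above notes, is that this only serializes *where* the handler runs; it does not help if the handler then blocks on a lock held by a thread that is waiting for this very component to terminate.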




[jira] [Commented] (FLINK-7279) MiniCluster can deadlock at shut down

2017-07-28 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16105069#comment-16105069 ]

ASF GitHub Bot commented on FLINK-7279:
---

Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/4416
  
A simple asynchronous call as in `TaskExecutor#onFatalErrorAsync()` is not 
enough, though: that is what is already being done, and it is what led me to 
find this error in the first place. See the stack traces of the two 
deadlocked threads below:

```
"flink-akka.actor.default-dispatcher-8" #31 prio=5 os_prio=0 
tid=0x7ffa00efe800 nid=0xb8c waiting for monitor entry [0x7ff9ee54]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.flink.runtime.minicluster.MiniCluster$TerminatingFatalErrorHandler.onFatalError(MiniCluster.java:652)
- waiting to lock <0xaad1d2d8> (a java.lang.Object)
at 
org.apache.flink.runtime.taskexecutor.TaskExecutor.onFatalError(TaskExecutor.java:1129)
at 
org.apache.flink.runtime.taskexecutor.TaskExecutor$7.run(TaskExecutor.java:1116)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:278)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:132)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.access$000(AkkaRpcActor.java:73)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$1.apply(AkkaRpcActor.java:111)
at 
akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:534)
at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

"main" #1 prio=5 os_prio=0 tid=0x7ffaa000 nid=0xb56 waiting on 
condition [0x7ffa07fa1000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xab269c30> (a 
java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at 
akka.actor.ActorSystemImpl$TerminationCallbacks.ready(ActorSystem.scala:819)
at 
akka.actor.ActorSystemImpl$TerminationCallbacks.ready(ActorSystem.scala:788)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at 
scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.ready(package.scala:169)
at akka.actor.ActorSystemImpl.awaitTermination(ActorSystem.scala:644)
at akka.actor.ActorSystemImpl.awaitTermination(ActorSystem.scala:645)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcService.stopService(AkkaRpcService.java:282)
at 
org.apache.flink.runtime.minicluster.MiniCluster.shutDownRpc(MiniCluster.java:596)
at 
org.apache.flink.runtime.minicluster.MiniCluster.shutdownInternally(MiniCluster.java:364)
at 
org.apache.flink.runtime.minicluster.MiniCluster.shutdown(MiniCluster.java:309)
- locked <0xaad1d2d8> (a java.lang.Object)
at 
org.apache.flink.runtime.minicluster.MiniClusterITCase.runJobWithMultipleJobManagers(MiniClusterITCase.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
 

[jira] [Commented] (FLINK-7279) MiniCluster can deadlock at shut down

2017-07-28 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/FLINK-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16104746#comment-16104746 ]

ASF GitHub Bot commented on FLINK-7279:
---

GitHub user NicoK opened a pull request:

https://github.com/apache/flink/pull/4416

[FLINK-7279][minicluster] fix a deadlock between TM and cluster shutdown

## What is the purpose of the change

The `MiniCluster` can deadlock if the fatal error handler is called while the 
`MiniCluster` shuts down. The reason is that the shutdown happens under a lock 
which the fatal error handler requires as well. If the `MiniCluster` then 
tries to shut down the underlying RPC service, which waits for all actors to 
terminate, the shutdown will never complete because one actor is still 
waiting for the lock.

## Brief change log

  - Guard both shutdown methods with a new `ReentrantLock` and ignore the TM shutdown in the `TerminatingFatalErrorHandler` if the cluster is already shut down.
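The change-log item above can be sketched as follows. This is a hypothetical condensation (the `GuardedCluster` class and its method names are illustrative, not the actual patch): both shutdown paths take the same `ReentrantLock`, and the fatal-error path bails out once shutdown has begun instead of blocking on the lock the shutdown thread holds.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the fix: a shared ReentrantLock guards both shutdown paths, and a
// 'running' flag turns the fatal-error path into a no-op once cluster
// shutdown has started, breaking the lock/await cycle.
public class GuardedCluster {
    private final ReentrantLock shutdownLock = new ReentrantLock();
    private volatile boolean running = true;
    private boolean taskManagerStopped;

    /** Cluster-initiated shutdown, analogous to MiniCluster.shutdown(). */
    public void shutdown() {
        shutdownLock.lock();
        try {
            running = false;
            stopTaskManager();
            // ...then stop the RPC service; TM actors no longer need our lock
        } finally {
            shutdownLock.unlock();
        }
    }

    /** Called on a TM fatal error, as by the TerminatingFatalErrorHandler. */
    public void onTaskManagerFatalError() {
        if (!running) {
            return;  // cluster shutdown in progress: ignore, don't block
        }
        shutdownLock.lock();
        try {
            if (running) {  // re-check under the lock
                stopTaskManager();
            }
        } finally {
            shutdownLock.unlock();
        }
    }

    private void stopTaskManager() {
        taskManagerStopped = true;
    }

    public boolean isTaskManagerStopped() { return taskManagerStopped; }

    public boolean isRunning() { return running; }
}
```

The early `running` check is what prevents the handler thread from parking on the lock while the shutdown thread, holding that lock, waits for the handler's actor to terminate.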

## Verifying this change

This change is already covered by existing tests, such as 
`MiniClusterITCase`, which was unstable because of this bug (also see 
[FLINK-7115]).

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (JavaDocs)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NicoK/flink flink-7279

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4416.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4416


commit 7815deb974316d58da0646bd26ce718bbd597ba7
Author: Nico Kruber 
Date:   2017-07-28T09:45:48Z

[FLINK-7279][minicluster] fix a deadlock between TM and cluster shutdown

The MiniCluster can deadlock if the fatal error handler is called while the
MiniCluster shuts down. The reason is that the shutdown happens under a lock
which the fatal error handler requires as well. If the MiniCluster then
tries to shut down the underlying RPC service, which waits for all actors to
terminate, the shutdown will never complete because one actor is still
waiting for the lock.

Solution: guard both shutdown methods with a new ReentrantLock and ignore the
TM shutdown in the TerminatingFatalErrorHandler if the cluster is already
shut down.



