So, it looks like the Akka remote connection between the node where
CoarseGrainedExecutorBackend is starting up (one of your slave nodes) and
your master/driver is timing out.

Akka's default remoting port is 2552. Do you have a firewall between the
nodes in your cluster that might be blocking this port? If you're not sure,
try logging into your master and running the command "telnet slave_addr 2552",
where "slave_addr" is one of the slave's IP addresses or a routable host
name.
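
If the connectivity check passes but executors still time out, another thing
worth trying is pinning Spark's communication ports to fixed values so you can
open firewall rules for exactly those ports instead of the random ones Spark
picks by default. A minimal sketch, assuming a Spark 1.x standalone setup (the
app name and port numbers below are just placeholders, and spark.executor.port
only exists in the 1.x line):

    import org.apache.spark.{SparkConf, SparkContext}

    // Pin the driver, executor, and block manager ports to known values so
    // firewall rules can be opened for them (ports here are arbitrary examples).
    val conf = new SparkConf()
      .setAppName("fixed-ports-example")        // placeholder app name
      .set("spark.driver.port", "7078")         // driver listens here for executors
      .set("spark.executor.port", "7079")       // executor's port (Spark 1.x only)
      .set("spark.blockManager.port", "7080")   // block transfers between nodes

    val sc = new SparkContext(conf)

The same properties can also go into conf/spark-defaults.conf on each node if
you'd rather not set them in application code.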

dean

Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Sun, May 3, 2015 at 4:25 AM, podioss <grega...@hotmail.com> wrote:

> Hi,
> I am running several jobs in standalone mode, and I notice this error in the
> log files on some of my nodes at the start of my jobs:
>
> INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for
> [TERM, HUP, INT]
> INFO spark.SecurityManager: Changing view acls to: root
> INFO spark.SecurityManager: Changing modify acls to: root
> INFO spark.SecurityManager: SecurityManager: authentication disabled; ui
> acls disabled; users with view permissions: Set(root); users with modify
> permissions: Set(root)
> INFO slf4j.Slf4jLogger: Slf4jLogger started
> INFO Remoting: Starting remoting
> ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
> Exception in thread "main" java.lang.reflect.UndeclaredThrowableException: Unknown exception in doAs
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1134)
>         at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:59)
>         at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:115)
>         at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:163)
>         at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
> Caused by: java.security.PrivilegedActionException: java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         ... 4 more
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
>         at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>         at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>         at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>         at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>         at scala.concurrent.Await$.result(package.scala:107)
>         at akka.remote.Remoting.start(Remoting.scala:180)
>         at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
>         at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:618)
>         at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:615)
>         at akka.actor.ActorSystemImpl._start(ActorSystem.scala:615)
>         at akka.actor.ActorSystemImpl.start(ActorSystem.scala:632)
>         at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
>         at akka.actor.ActorSystem$.apply(ActorSystem.scala:118)
>         at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
>         at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:54)
>         at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
>         at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1676)
>         at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>         at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1667)
>         at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:56)
>         at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:122)
>         at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:60)
>         at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:59)
>         ... 7 more
> INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
>
> These errors result in executor losses at the start of my jobs, and I have
> been trying to find a way to solve this with no success, so if anyone has a
> clue please let me know.
>
> Thank you
>