The driver has the data and wouldn't need to rerun those tasks.
On Friday, April 8, 2016, Sung Hwan Chung wrote:
> Hello,
>
> Say that I'm doing a simple rdd.map followed by collect. Say, also, that
> one of the executors finishes all of its tasks, but there are still other
> executors running.
>
> If the machine that hosted the finished executor gets terminated, does the
> master still have the results from the finished tasks?
Hello,
Say that I'm doing a simple rdd.map followed by collect. Say, also, that
one of the executors finishes all of its tasks, but there are still other
executors running.
If the machine that hosted the finished executor gets terminated, does the
master still have the results from the finished tasks?
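To make the scenario concrete, here is a minimal sketch (Spark 1.6-era
API; the app name, data, and partition count are placeholders). collect()
ships each task's result back to the driver as soon as that task succeeds,
so the output of already-finished tasks survives the loss of their
executor:

import org.apache.spark.{SparkConf, SparkContext}

object CollectSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("collect-sketch"))

    // rdd.map followed by collect, as in the question above.
    val rdd = sc.parallelize(1 to 1000, numSlices = 8)

    // Each of the 8 tasks sends its partition's result to the driver the
    // moment it succeeds; collect() just concatenates those arrays.
    val doubled: Array[Int] = rdd.map(_ * 2).collect()

    // If the machine hosting an executor dies *after* its tasks finished,
    // the driver already holds their output and does not reschedule them;
    // only unfinished tasks are rerun elsewhere.
    println(doubled.length)
    sc.stop()
  }
}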
we had a couple of hanging/stuck builds that were filling up /home.
some of the procs didn't like the SIGKILL, so i just rebooted them.
/home on both of these boxes is back down to ~33% usage.
anyways, these two nodes are back up and building. if i find anymore
stuck builds, i'll open a spark ticket.
looks like something filled up /home (0% space left), and i'll need to
figure out what that is as well as clean up some space.
once we're good, i'll put them back online and let everyone know.
Hello!
TL;DR Could you explain how (and which) Kerberos tokens should be
delegated from the driver to the workers? Does it depend on the Spark
deploy mode?
I have a Kerberized HDP 2.3 Hadoop cluster. I use spark-sql (1.6.1,
compiled with Hadoop 2.7.1 and Hive 1.2.1) in yarn-cluster mode to
query my Hive tables.
1
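For context, a minimal sketch of what the token machinery looks like at
the Hadoop API level (the principal, keytab path, and "yarn" renewer below
are placeholders; the renewer is normally the YARN ResourceManager
principal). The driver logs in from a keytab, obtains HDFS delegation
tokens, and Spark ships those credentials to the executors:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

object TokenSketch {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    UserGroupInformation.setConfiguration(conf)

    // Authenticate to the KDC from a keytab (placeholder principal/path).
    UserGroupInformation.loginUserFromKeytab(
      "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab")

    // Ask HDFS for delegation tokens, naming the renewer that may extend
    // their lifetime on our behalf.
    val creds = new Credentials()
    FileSystem.get(conf).addDelegationTokens("yarn", creds)

    // Spark serializes credentials like these and distributes them to the
    // executors, which use the tokens (not the keytab) to reach HDFS/Hive.
    println(s"obtained ${creds.numberOfTokens()} token(s)")
  }
}

In yarn-cluster mode, passing --principal and --keytab to spark-submit
makes Spark handle this login/token flow itself.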