Hi Mich,
That's correct -- they're indeed duplicates in the table, but not at
the OS level. The reason for this *might* be that separate stdout and
stderr logs need to be kept for the failed execution(s). I'm using
--num-executors 2, and there are only two executor backends running.
$ jps -l
28865 sun.tools.jps.Jps
802
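For completeness, the app was submitted roughly like this (--master
yarn and --num-executors 2 are the real settings; the class and jar
names below are just placeholders):
$ spark-submit \
    --master yarn \
    --num-executors 2 \
    --class com.example.Main \
    my-app.jar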
Can you please run jps on the 1-node host and send the output? Some of
those executor IDs are just duplicates!
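On YARN the executors run as CoarseGrainedExecutorBackend JVMs, so a
filter along these lines (just a sketch; the output will differ on
your host) should isolate them:
$ jps -lm | grep CoarseGrainedExecutorBackend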
HTH
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Hi,
Thanks Mich and Akhil for such prompt responses! Here's the screenshot
[1]; it's part of
https://issues.apache.org/jira/browse/SPARK-16047, which I reported
today (to have the executors sorted by status and id).
[1]
A screenshot of the Executors tab will explain it better. Usually executors
are allocated when the job is started; if you have a multi-node cluster,
then you'll see executors launched on different nodes.
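If the tab is ambiguous, you can also cross-check what the driver
itself sees from spark-shell (assuming a live SparkContext bound to
sc; the keys are host:port pairs and include the driver's own entry):
scala> sc.getExecutorMemoryStatus.keys.foreach(println)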
On Sat, Jun 18, 2016 at 9:04 PM, Jacek Laskowski wrote:
Hi Jacek,
Can you take a snapshot of your GUI /executors and GUI /Environment
pages? On a single-node cluster, is the executor ID the driver?
But we can find all of it from the Environment snapshot (snipping tool).
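If screenshots are awkward, the same details are also exposed over the
monitoring REST API (assuming the default web UI port 4040; take
<app-id> from the first call):
$ curl http://localhost:4040/api/v1/applications
$ curl http://localhost:4040/api/v1/applications/<app-id>/executors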
HTH
Dr Mich Talebzadeh
Hi,
This is for Spark on YARN - a 1-node cluster with Spark 2.0.0-SNAPSHOT
(today's build).
I can understand that when a stage fails, a new executor entry shows up
in the web UI under the Executors tab (one that corresponds to a stage
attempt). I understand that this is to keep the stdout and stderr logs
for the failed execution(s) available.
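For anyone who wants to reproduce the extra entries, here is one blunt
sketch (purely illustrative: System.exit kills the executor JVMs
mid-task, so YARN relaunches them under new executor IDs and the old
ones stay listed):
scala> // deliberately crash the executors; expect task failures in the logs
scala> sc.parallelize(1 to 100, 2).foreach(_ => System.exit(1))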