I will assume that you are running in yarn-cluster mode. Because the driver
is launched in one of the containers, it doesn't make sense to expose port
4040 on the node that hosts the container. (Imagine if multiple driver
containers are launched on the same node: this would cause a port
conflict.)
Hi Andrew,
Thanks for the quick reply. It works with the yarn-client mode.
One question about yarn-cluster mode: I was actually checking the AM
logs. Since the Spark driver runs in the AM, the UI should
also work, right? But that is not the case for me.
Best,
Fang, Yan
@Yan, the UI should still work. As long as you look into the container that
launches the driver, you will find the SparkUI address and port. Note that
in yarn-cluster mode the Spark driver doesn't actually run in the
ApplicationMaster; just like the executors, it runs in a container that
is
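To follow Andrew's suggestion of finding the SparkUI address in the driver container's logs, one way is to grep the aggregated YARN logs. A minimal sketch, run here against a sample log line because the real command needs a live cluster; on a real cluster you would pipe `yarn logs -applicationId <your-app-id>` into the grep instead, and the hostname, port, and timestamp below are made up for illustration:

```shell
# Sample of the line Spark's driver prints when its UI starts
# (the host/port here are hypothetical).
sample_log='14/07/07 13:07:01 INFO ui.SparkUI: Started SparkUI at http://node3.example.com:4040'

# Extract just the UI address from the log line.
echo "$sample_log" | grep -o 'http://[^ ]*'
```

This prints the driver UI address (here `http://node3.example.com:4040`), which you can then open directly instead of going through the ResourceManager proxy.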
Thank you, Andrew. That makes sense to me now. I was confused by "In
yarn-cluster mode, the Spark driver runs inside an application master
process which is managed by YARN on the cluster" in
http://spark.apache.org/docs/latest/running-on-yarn.html . After
your explanation, it's clear now. Thank you.
@Andrew
Yes, the link points to the same redirected
http://localhost/proxy/application_1404443455764_0010/
I suspect it is something to do with the cluster setup. I will let you
know once I find something.
Chester
On Mon, Jul 7, 2014 at 1:07 PM, Andrew Or and...@databricks.com wrote: