Is there any way to make a Spark process visible via the Spark UI when running Spark 3.0 on a Hadoop YARN cluster? The Spark documentation talks about replacing the Spark UI with the Spark history server, but doesn't give much detail. I would therefore assume it is still possible to use the Spark UI when running Spark on a Hadoop YARN cluster. Is this correct? Does the Spark history server offer the same user-facing functions as the Spark UI?

But how could this be possible (using the Spark UI) if the Spark master server isn't active, given that all job scheduling and resource allocation are handled by the YARN servers?
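For reference, here is a minimal sketch of the standard event-logging setup from the Spark monitoring documentation, which is what lets the history server rebuild the UI for completed applications; the HDFS path used here is an assumption, not taken from this thread:

```properties
# spark-defaults.conf — enable event logging so the history server
# can reconstruct the web UI for finished applications.
spark.eventLog.enabled           true
# Assumed HDFS location for event logs (any shared filesystem works):
spark.eventLog.dir               hdfs:///spark-logs
# The history server reads from this directory; it should match eventLog.dir:
spark.history.fs.logDirectory    hdfs:///spark-logs
```

The history server is then started with `sbin/start-history-server.sh` and serves on port 18080 by default. Note also that while an application is still running on YARN, its live Spark UI remains reachable through the YARN ResourceManager's application proxy link, independent of any standalone Spark master.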

Thanks!

-- ND

