Hi Matt,

There is some related work I recently did at IBM Research on visualizing
the metrics produced.
You can read about it here:
http://www.spark.tc/sparkoscope-enabling-spark-optimization-through-cross-stack-monitoring-and-visualization-2/
We recently open-sourced it, if you are interested in taking a deeper
look: https://github.com/ibm-research-ireland/sparkoscope

Thanks,
Yiannis

On 3 February 2016 at 13:32, Matt K <matvey1...@gmail.com> wrote:

> Hi guys,
>
> I'm looking to create a custom sink based on Spark's Metrics System:
>
> https://github.com/apache/spark/blob/9f603fce78fcc997926e9a72dec44d48cbc396fc/core/src/main/scala/org/apache/spark/metrics/MetricsSystem.scala
>
> If I want to collect metrics from the Driver, Master, and Executor nodes,
> should the jar with the custom class be installed on Driver, Master, and
> Executor nodes?
>
> Also, on Executor nodes, does the MetricsSystem run inside the Executor's
> JVM?
>
> Thanks,
> -Matt
>
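For context on the distribution question above: Spark's metrics sinks are registered per instance in conf/metrics.properties, so the jar containing the custom sink class needs to be on the classpath of every JVM whose instance prefix you configure. A minimal sketch (the sink name "custom" and the class com.example.MySink are hypothetical placeholders, not real classes):

```properties
# conf/metrics.properties -- sketch; sink name and class are hypothetical
# The "*" prefix enables the sink for all instances
# (master, worker, driver, executor, applications).
*.sink.custom.class=com.example.MySink

# Alternatively, enable it only for specific instances:
driver.sink.custom.class=com.example.MySink
executor.sink.custom.class=com.example.MySink
```

Note that in Spark releases of this era the Sink trait is marked private[spark], so a custom sink class typically has to be declared inside the org.apache.spark package to compile against it.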
