I'm using Graphite/Grafana to collect and visualize metrics from my
Spark jobs.
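
For context, the metrics are shipped via Spark's GraphiteSink, configured
in conf/metrics.properties roughly as below (the host and prefix here are
placeholders, not my actual values):

    # report metrics from all instances (driver, executors, ...) to Graphite
    *.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
    *.sink.graphite.host=graphite.example.com
    *.sink.graphite.port=2003
    *.sink.graphite.period=10
    *.sink.graphite.unit=seconds
    *.sink.graphite.prefix=spark

    # enable the JVM source so heap/GC metrics are emitted per instance
    driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
    executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource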

It appears that not all executors report all of their metrics -- for
example, even JVM heap data is missing from some. Is there an obvious
reason why this happens? Are metrics somehow held back? Often an
executor's metrics only show up after a delay, but since they are
cumulative counters (e.g. number of completed tasks), it's clear they
were being collected from the beginning (once they appear, their level
matches that of the other executors); they just aren't reported
initially.
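
For concreteness, the series I'm looking at in Graphite are of roughly
this form (the app id and executor id here are made up):

    spark.app-20151208123456-0012.1.jvm.heap.used
    spark.app-20151208123456-0012.1.executor.threadpool.completeTasks

The first (from the JVM source) is the kind that is sometimes missing
entirely; the second (a cumulative task counter) is the kind that shows
up late but at the correct level.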

Has anyone run into this? How can it be fixed? Right now it renders many
of the metrics useless, since I want a complete view of the application
but am only seeing a few executors at a time.

Thanks,

rok



