Hi,
The Spark job exports metrics to a local Prometheus HTTP server
(https://github.com/prometheus/client_java#http), which is later scraped
by the Prometheus service. The problem is that when I ssh to the EMR
instances themselves, I can only see the metrics (e.g. via curl
localhost:9111) on the driver, and only when running in local mode.
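For context, client_java's HTTPServer just starts a small HTTP endpoint inside the JVM that serves the registered metrics in the Prometheus text exposition format. Here is a dependency-free sketch of that idea using only the JDK; the class name SimpleExporter, the /metrics path, and the requests_total metric are illustrative, not from the thread:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative stand-in for io.prometheus.client.exporter.HTTPServer:
// serves one counter in the Prometheus text exposition format.
public class SimpleExporter {
    static final AtomicLong requestsTotal = new AtomicLong();

    // Render a counter in the text format Prometheus scrapes.
    static String render(String name, String help, long value) {
        return "# HELP " + name + " " + help + "\n"
             + "# TYPE " + name + " counter\n"
             + name + " " + value + "\n";
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9111), 0);
        server.createContext("/metrics", exchange -> {
            byte[] body = render("requests_total", "Total requests seen.",
                                 requestsTotal.incrementAndGet())
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();  // `curl localhost:9111/metrics` now works on this host
        System.out.println(render("requests_total", "Total requests seen.", 0));
        server.stop(0);
    }
}
```

The key point for the EMR symptom above: such a server only listens inside the JVM that started it, which in cluster mode is the driver host, not the executors.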
I am confused by your question. Are you running the Spark cluster
on AWS EMR and trying to output the result to a Prometheus instance
running on your localhost? Isn't your localhost behind a firewall
and therefore not accessible from AWS? What do you mean by "have
prometheus available in executors"?
What should I do to expose my own custom Prometheus metrics for a
cluster-mode Spark streaming job?
I want to run a Spark streaming job that reads from Kafka, does some
calculations, and writes to a localhost Prometheus exporter on port 9111.
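In cluster mode the executors are separate JVMs on separate hosts, so an exporter started in driver code never appears there; each executor JVM has to start its own exporter, exactly once, from task code. A common pattern is a lazy per-JVM singleton; this is a sketch under that assumption (ExecutorMetrics, ensureStarted, and recordsProcessed are hypothetical names, and the real exporter start is left as a comment):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Per-JVM lazy initializer: every task calls ensureStarted() (e.g. at the
// top of a mapPartitions function), but the exporter starts only once.
public class ExecutorMetrics {
    private static final AtomicBoolean started = new AtomicBoolean(false);
    static final AtomicLong recordsProcessed = new AtomicLong();
    static int startCount = 0;  // only here to demonstrate idempotence

    public static void ensureStarted() {
        if (started.compareAndSet(false, true)) {
            // In the real job this is where the HTTP exporter would start,
            // e.g. new io.prometheus.client.exporter.HTTPServer(9111).
            startCount++;
        }
    }

    public static void main(String[] args) {
        // Simulate many tasks running inside the same executor JVM.
        for (int i = 0; i < 1000; i++) {
            ensureStarted();
            recordsProcessed.incrementAndGet();
        }
        System.out.println("exporter starts: " + startCount);
        System.out.println("records: " + recordsProcessed.get());
    }
}
```

Prometheus would then need to scrape port 9111 on each executor host (or the job could push through a Pushgateway instead), since "localhost" means a different machine on every executor.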
https://github.com/jaegertracing/jaeger-analytics-java/blob/master/spark/s