What should I do to expose my own custom Prometheus metrics from a Spark
Streaming job running in cluster mode?

I want to run a Spark Streaming job that reads from Kafka, does some
calculations, and exposes metrics to Prometheus on localhost port 9111,
similar to:
https://github.com/jaegertracing/jaeger-analytics-java/blob/master/spark/src/main/java/io/jaegertracing/analytics/spark/SparkRunner.java#L47
Is it possible to have the Prometheus endpoint available on the executors
as well? I tried both an EMR cluster and Kubernetes, but only local mode
works (the metrics are available only on the driver's port 9111).
It looks like the Prometheus servlet sink is my best option? Any advice
would be much appreciated!

Thanks,
Christine
