dongjoon-hyun edited a comment on issue #26060: [SPARK-29400][CORE] Improve PrometheusResource to use labels
URL: https://github.com/apache/spark/pull/26060#issuecomment-539807777
 
 
   Hi, @yuecong. Thank you for the review.
   1. That's true for the old Prometheus plugin. So, Apache Spark 3.0.0 exposes this Prometheus metric on the driver port instead of the executor port; in other words, you are referring to the `executor`, but this applies to the `driver`. Do you have a short-lived Spark driver which dies within `30s`? (A minimal scrape sketch against the driver endpoint is at the end of this comment.)
   > As Prometheus uses a pull model, how do you recommend people use these metrics for executors that get shut down immediately? Also, how will this work for a short-lived (e.g. shorter than one Prometheus scrape interval, usually 30s) Spark application?
   
   2. Please see this PR's description. The metric name is **unique**, with cardinality 1, by using labels: `metrics_executor_rddBlocks_Count{application_id="app-20191008151625-0000"}`. (A small sketch of this label-based format is at the end of this comment.)
   > It looks like you are using app_id as one of the labels, which will increase the cardinality for Prometheus metrics.
   
   I don't think you mean the `Prometheus Dimension feature` itself is high-cardinality.
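
   To make the pull model in point 1 concrete, here is a minimal scrape sketch. It assumes the driver web UI listens on `localhost:4040` and serves the new endpoint at `/metrics/executors/prometheus` when `spark.ui.prometheus.enabled=true`; the exact path and config name are assumptions here and may differ from what finally lands. A Prometheus server would normally perform this pull on its scrape interval.

   ```scala
   import scala.io.Source

   // Sketch only: manually pull the driver-side Prometheus endpoint once,
   // the way a Prometheus server would on each scrape interval.
   object ScrapeDriverMetrics {
     def main(args: Array[String]): Unit = {
       // Assumed endpoint: driver web UI on localhost:4040 with
       // spark.ui.prometheus.enabled=true (assumption for this sketch).
       val endpoint = "http://localhost:4040/metrics/executors/prometheus"
       val body = Source.fromURL(endpoint)
       try {
         // Each line is in Prometheus text format: metric_name{label="value", ...} value
         body.getLines().take(10).foreach(println)
       } finally {
         body.close()
       }
     }
   }
   ```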
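
   For point 2, here is a small self-contained sketch (not Spark code) of why labels keep the metric-name cardinality at 1: the name `metrics_executor_rddBlocks_Count` stays the same across applications and executors, and only the label values vary. The `executor_id` label and the second application id below are made up for illustration.

   ```scala
   // Sketch only: render Prometheus text-format lines where the metric name is
   // constant and per-application / per-executor information lives in labels.
   object LabelCardinalitySketch {
     // Build one line of the form: name{k1="v1", k2="v2"} value
     def line(name: String, labels: Map[String, String], value: Long): String = {
       val rendered = labels.map { case (k, v) => k + "=\"" + v + "\"" }.mkString(", ")
       name + "{" + rendered + "} " + value
     }

     def main(args: Array[String]): Unit = {
       val apps  = Seq("app-20191008151625-0000", "app-20191008151625-0001")
       val execs = Seq("1", "2")
       val lines = for (app <- apps; exec <- execs) yield {
         line("metrics_executor_rddBlocks_Count",
              Map("application_id" -> app, "executor_id" -> exec), 0L)
       }
       // One metric name, several label combinations, e.g.
       // metrics_executor_rddBlocks_Count{application_id="app-20191008151625-0000", executor_id="1"} 0
       lines.foreach(println)
     }
   }
   ```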
