I've only now seen in the docs that I am supposed to start any discussion here 
first before opening an issue, sorry about that! :) 

Currently there is no way for one target to have a higher scrape priority than 
another. Even if you set target limits and sample limits, you can still 
overestimate what your setup can handle, and in that case you want 
higher-priority targets to be preferred over letting the entire Prometheus 
fail. The mechanism would be based on the inability to ingest into the TSDB at 
the current scrape rate; once that is hit, the priority class would take 
effect and only the highest-priority targets would be scraped, in favour of 
dropping lower-priority ones. Another option, which might be simpler, would be 
a global limit on how much Prometheus can handle, based on perf testing.
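
To make this concrete, here is a rough sketch of what the configuration could 
look like. The per-job priority field and the global max_ingestion_rate 
setting are hypothetical, purely to illustrate the idea; the sample_limit 
field shown for contrast already exists:

    global:
      # Hypothetical ceiling derived from perf testing; once ingestion
      # falls behind this rate, lower-priority targets stop being scraped.
      max_ingestion_rate: 500000   # samples per second

    scrape_configs:
      - job_name: 'prometheus'
        priority: 1000             # hypothetical field: highest, kept last
        sample_limit: 10000        # existing per-target safeguard
        static_configs:
          - targets: ['localhost:9090']
      - job_name: 'batch-apps'
        priority: 100              # hypothetical field: shed first
        static_configs:
          - targets: ['app-1:8080', 'app-2:8080']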

This would be treated as a last resort, and there would definitely be a need 
for a high-severity alert to inform the admin that something went terribly 
wrong. But because we would still be able to ingest Prometheus' own metrics, 
for example, if they are in a higher priority class, alerting would still be 
possible. 
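
The alert could look something along these lines. The metric name 
prometheus_scrape_priority_shed_targets is made up here, assuming Prometheus 
would expose the number of targets currently dropped by the mechanism:

    groups:
      - name: scrape-priority
        rules:
          - alert: LowPriorityTargetsShed
            # hypothetical metric: number of targets currently dropped
            # by the priority mechanism
            expr: prometheus_scrape_priority_shed_targets > 0
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Prometheus is shedding low-priority scrape targets"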

We could model this on something like PriorityClass 
<https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass>
 from Kubernetes.
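
For reference, this is roughly what the Kubernetes model looks like: a 
PriorityClass is just a named integer priority that other objects refer to:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: high-priority
    value: 1000000
    globalDefault: false
    description: "Pods that should be scheduled ahead of everything else."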

I am open to other suggestions, or maybe something like this already exists 
and I missed it. The main purpose is to ensure there are protection mechanisms 
in place, so any ideas and suggestions are welcome! 

Thanks and kind regards,
Lili
