Leaving the deployment running for a while after the 3rd restart of the 
target (6 rounds of WAL truncation), the memory goes up to 3.7Gi, compared 
to 2.5Gi before the restarts. I guess there must be something that 
Prometheus holds on to in this upgrade/restart scenario.
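For reference, these are roughly the queries I'm using to correlate the head
series count with memory over the restarts. The metric names come from
Prometheus' own self-monitoring endpoint; the job="prometheus" selector is just
an assumption about how the server scrapes itself, so adjust it to your setup:

    # Series currently in the head block (doubles right after a target restart)
    prometheus_tsdb_head_series{job="prometheus"}

    # Resident memory of the Prometheus process, to compare against the 2.5Gi baseline
    process_resident_memory_bytes{job="prometheus"}

    # Go heap actually in use, to separate live heap from memory the runtime
    # has not yet returned to the OS
    go_memstats_heap_inuse_bytes{job="prometheus"}

    # WAL truncation cycles, to line the restarts up with truncation rounds
    prometheus_tsdb_wal_truncations_total{job="prometheus"}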

[image: ask_prometheus_user_memory_increase_after_target_restart_upgrade_v2.jpg]

On Tuesday, October 31, 2023 at 10:07:24 PM UTC+7 Vu Nguyen wrote:

> We have Prometheus v2.47.1 deployed on k8s, scraping 500k time series from 
> a single target (*).
>
> When the target restarts, the number of time series in the HEAD block jumps 
> to 1M [1], and Prometheus memory increases from an average of 2.5Gi to 3Gi. 
> Leaving Prometheus running for a few WAL truncation cycles, the memory still 
> does not go back to where it was before the target restart, even though the 
> number of time series in the HEAD block is back to 500K.
>
> If I trigger another target restart, that memory keeps going up. Here is 
> the graph:
>
> Could you please help us understand why the memory does not fall back to 
> the initial point (*) before we restarted/upgraded the target?
>
> [1] A k8s pod restart comes up with a new IP, i.e. a new instance label 
> value; therefore, a new set of 500K time series is generated.
>
