In that situation of very slow scrapes, I'd suggest two options.

Either split the queries up if possible, which would allow things to be 
parallelised.

Alternatively, run the queries via cron or a script, write the results to 
the filesystem, and then use the node exporter's textfile collector. Then you 
can scrape, say, every 30 seconds, so you won't have issues with staleness. 
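A minimal sketch of that second option, assuming a hypothetical `run_slow_query` command stands in for the real database query, and that in production the output directory would be whatever you pass to the node exporter's --collector.textfile.directory flag (a temp dir is used here so the sketch runs standalone):

```shell
#!/bin/sh
# Cron-driven script: run the slow query, write the result as a .prom file
# for the node exporter's textfile collector to expose.
# ASSUMPTION: TEXTFILE_DIR should match --collector.textfile.directory;
# we default to a temp dir so this sketch is runnable as-is.
TEXTFILE_DIR="${TEXTFILE_DIR:-$(mktemp -d)}"

# "run_slow_query" is a placeholder for the real 5-15 minute query;
# a fixed value is used here for illustration.
result=42   # e.g. result=$(run_slow_query)

# Write to a temp file, then rename into place: the rename is atomic on
# the same filesystem, so the exporter never reads a half-written file.
tmp="$TEXTFILE_DIR/slow_query.prom.$$"
{
  printf '# HELP slow_query_result Result of the long-running database query\n'
  printf '# TYPE slow_query_result gauge\n'
  printf 'slow_query_result %s\n' "$result"
} > "$tmp"
mv "$tmp" "$TEXTFILE_DIR/slow_query.prom"
```

You'd then run this from cron at whatever cadence the query can sustain (e.g. every 10 minutes), while Prometheus keeps a short scrape interval against the node exporter itself, so the last written value is always fresh from Prometheus's point of view.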

On 27 November 2020 18:35:00 GMT, Josefo <josefo.2...@gmail.com> wrote:
>Hi Stuart, thanks for your time!
>The problem I have is that with script_exporter I make database queries
>that normally take between 5 and 8 minutes and can reach 15. On the
>other hand, I am saving that information in a database and I don't
>really need that much data.
>
>On Thu, 26 Nov 2020 at 15:31, Stuart Clark
>(<stuart.cl...@jahingo.com>)
>wrote:
>
>> On 26/11/2020 17:18, Josefo Serra wrote:
>> > Hi, I'm losing metrics between scrapes and I understand that it's
>> > because I scrape every 10 minutes and they must "expire" or
>> > something.
>> > Could someone tell me which parameter I have to modify?
>> > Thanks!
>> >
>> The maximum scrape interval should be 2 minutes, so you want to
>> adjust those jobs to scrape every 2 minutes instead of 10.
>>
>>


