[rrd-users] Re: Data granularity

2005-01-21 Thread Tolga Yalcinkaya
Comments below... Serge Maandag wrote: No, rrdtool normalizes your input. Therefore it is really only useful if you enter data at regular intervals. Say you updated like this:

rrdtool update resp.rrd 1106249073:15
rrdtool update resp.rrd 1106249083:45

Then after the second update,
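The normalization Serge describes can be modeled in a few lines. This is a simplified sketch, not the real librrd code: each GAUGE update asserts its value as a constant rate over the interval since the previous update, and each completed step bin stores the time-weighted average of those rates (the start time 1106249070 is an assumption about when the RRD was created):

```python
# Simplified model of rrdtool's rate normalization for a GAUGE DS.
# Hypothetical helper, not real librrd code: each update (t, v) applies
# rate v over the interval since the previous update, and every fully
# covered step bin stores the time-weighted average of those rates.
def normalize(start, step, updates):
    """Return {bin_end: average} for every step bin fully covered."""
    bins = {}
    last_t = start
    for t, v in updates:
        lo = last_t
        while lo < t:
            bin_end = (lo // step) * step + step  # end of bin containing lo
            hi = min(bin_end, t)
            bins.setdefault(bin_end, 0.0)
            bins[bin_end] += v * (hi - lo)        # rate * seconds covered
            lo = hi
        last_t = t
    # keep only bins whose full step-width has been covered by updates
    return {end: total / step for end, total in bins.items() if end <= last_t}

# Updates 10 s apart with --step 5: the second value is smeared across
# two bins, and the first bin mixes both rates (3 s of 15, 2 s of 45).
print(normalize(1106249070, 5, [(1106249073, 15), (1106249083, 45)]))
# {1106249075: 27.0, 1106249080: 45.0}
```

This is why neither stored value equals either raw input: the fetch result reflects interval averages, not the submitted samples.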

[rrd-users] Re: Data granularity

2005-01-21 Thread Tolga Yalcinkaya
Erik, Thanks for the response. I too have a Perl script (called Apache Log Sweeper) that does something very similar: it sweeps the Apache log files and gathers performance stats for a given URI. However, your code (like my Apache Log Sweeper) does multiple updates to the RRD in one pass,
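Doing many updates in one pass need not mean many process invocations: the rrdtool update CLI accepts several timestamp:value arguments in a single call, oldest first. A minimal sketch of building such a batched command (the helper name and sample values are illustrative, not from either script):

```python
# Hypothetical sketch of the "one pass" approach: instead of invoking
# rrdtool once per sample, batch every sample from a log sweep into a
# single `rrdtool update` command. The CLI accepts multiple
# timestamp:value arguments, which must be in ascending time order.
def batch_update_cmd(rrd, samples):
    """Build one rrdtool update command from (timestamp, value) pairs."""
    args = " ".join(f"{t}:{v}" for t, v in sorted(samples))
    return f"rrdtool update {rrd} {args}"

cmd = batch_update_cmd("resp.rrd", [(1106249083, 45), (1106249073, 15)])
print(cmd)  # rrdtool update resp.rrd 1106249073:15 1106249083:45
```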

[rrd-users] Re: Data granularity

2005-01-20 Thread Serge Maandag
I have a monitoring script that gets invoked every time a user interacts with a Web service. There are several Web services that we are collecting data from. Each one goes to a different RRD. Some services are very busy (several hits per second) and some are not (a few hits per week).
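For event-driven collection like this, one hedge against irregular update intervals is to pre-aggregate hits into one sample per --step interval before touching the RRD. A hypothetical sketch (the bucketing helper and response-time values are illustrative, not from the original scripts):

```python
from collections import defaultdict

# Hypothetical pre-aggregation step: collapse irregular per-hit samples
# into one average per step-aligned interval, so busy services (several
# hits per second) and quiet ones feed the RRD at a comparable cadence.
def aggregate(step, hits):
    """hits: iterable of (timestamp, response_time). Returns a sorted
    list of (interval_end, mean_response_time), one per step interval."""
    buckets = defaultdict(list)
    for t, v in hits:
        buckets[(t // step) * step + step].append(v)
    return sorted((end, sum(vs) / len(vs)) for end, vs in buckets.items())

# Two hits in one 5 s interval are averaged; the lone later hit stands alone.
print(aggregate(5, [(1106249071, 10), (1106249073, 20), (1106249083, 45)]))
# [(1106249075, 15.0), (1106249085, 45.0)]
```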

[rrd-users] Re: Data granularity

2005-01-20 Thread Serge Maandag
rrdtool create resp.rrd --step 5 \
    DS:resp:GAUGE:10:0:U \
    RRA:AVERAGE:0.999:1:1000

If I do a single data-point update using:

rrdtool update resp.rrd 1106249083:45

and do *not* post any update for a while (> 10 seconds), the rrdtool fetch command shows that the AVERAGE
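The behavior hinges on the heartbeat field (the 10 in DS:resp:GAUGE:10:0:U): when more than heartbeat seconds pass between updates, rrdtool records the gap as unknown rather than carrying the last value forward. A simplified model of that rule, not the actual rrdtool logic:

```python
# Simplified model (not librrd) of the heartbeat rule behind
# DS:resp:GAUGE:10:0:U — if more than `heartbeat` seconds elapse
# between consecutive updates, the whole gap is stored as unknown
# (NaN), which is why one update followed by silence fetches as NaN.
def classify_gaps(heartbeat, timestamps):
    """Return 'known'/'unknown' for each interval between updates."""
    return [
        "known" if (b - a) <= heartbeat else "unknown"
        for a, b in zip(timestamps, timestamps[1:])
    ]

# A 7 s gap is within the 10 s heartbeat; a 15 s gap is not.
print(classify_gaps(10, [1106249083, 1106249090, 1106249105]))
# ['known', 'unknown']
```

Raising the heartbeat (or posting a fresh update within it) is what keeps the fetched AVERAGE from going to NaN.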

[rrd-users] Re: Data granularity

2005-01-20 Thread Erik de Mare
Serge Maandag wrote: [snip] I still would use the log file solution. It's way cleaner. I took mailgraph as an example for my scripts, and I built this with it: http://haas.oezie.org/rrd/httpd/ (and the script here: http://haas.oezie.org/rrd/httpd.pl). It works like tail -f on the log file, and