Great! Thanks for the explanation - the math makes perfect sense now. If I could trouble you for one more thing: is there a service script for this release? I've found a couple, but none of them work well.
This one starts and stops, but 'status' doesn't work, and restart and reload do the same thing.

#!/bin/bash
#
# chkconfig: 2345 80 05
# Description: Smokeping init.d script
# Hacked by: How2CentOS - http://www.how2centos.com

# Get functions from the functions library
. /etc/init.d/functions

# Start the service Smokeping
start() {
        echo -n "Starting Smokeping: "
        /opt/smokeping/bin/smokeping >/dev/null 2>&1
        ### Create the lock file ###
        touch /var/lock/subsys/smokeping
        success $"Smokeping startup"
        echo
}

# Stop the service Smokeping
stop() {
        echo -n "Stopping Smokeping: "
        kill -9 `ps ax | grep "/opt/smokeping/bin/smokeping" | grep -v grep | awk '{ print $1 }'` >/dev/null 2>&1 && killall speedy_backend
        ### Now, delete the lock file ###
        rm -f /var/lock/subsys/smokeping
        success $"Smokeping shutdown"
        echo
}

### main logic ###
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status Smokeping
        ;;
  restart|reload|condrestart)
        stop
        start
        ;;
  *)
        echo $"Usage: $0 {start|stop|restart|reload|status}"
        exit 1
esac

exit 0

Is there a start/stop script that comes with the smokeping package that works with CentOS 6.2?

Thanks!

-- Matt

On Mon, Apr 9, 2012 at 11:20 PM, Gregory Sloop <[email protected]> wrote:
> I'm using the database section below as my starting point. Let's
> reproduce it here:
> ---
>
> *** Database ***
> step  = 30
> pings = 10
>
> # consfn mrhb steps total
> AVERAGE  0.5    1  1008
> AVERAGE  0.5   12  4320
> MIN      0.5   12  4320
> MAX      0.5   12  4320
> AVERAGE  0.5  144   720
> MAX      0.5  144   720
> MIN      0.5  144   720
>
> ---
> So, line 1: the "total" column is how many full-resolution samples you
> want to keep.
> [i.e. 2880 is 24 hours of full-res data (2/min * 60 min * 24 hours).]
> 1008 would be 504 minutes of data, or just over 8 hours. [1008 samples,
> divided by 2 (samples per minute), divided by 60 = 8.4 hours.]
>
> The next three lines are the second-tier data. These will have x number of
> steps (of average/min/max) compressed to one.
> So, if you leave "steps" at 12, each row is then a 6-minute average
> [30 secs per sample, 12:1 ratio = 1 sample every 6 minutes] (6-minute data).
>
> To keep six months of six-minute data: total col = 43200 [10 samples per
> hour * 24 hours * 30 days * 6 months = 43200].
>
> The last three lines are even lower-res data. They compress 144 full-res
> steps into 1 [i.e. 72-minute data]. You can keep as much as you'd like
> here; just keep as many minutes as you want history. 10000 in the total
> column would be 720,000 minutes, or 500 days' worth.
>
> (But you don't have to use 144 as the step value - perhaps you want your
> third-tier data to be hourly data; choose accordingly.)
>
> HTH
>
> -Greg
>
>
> More info...
>
> Based on earlier calculations, I come up with
> 86400 (sec/day) * 180 days / 30 (step value) = 518400, but I'm not sure
> where to plug in this value.
>
> As for # of targets, right now it's around 50 in each location, so I'm not
> too worried about space at the moment.
>
> Thanks!
>
> -- Matt
>
>
> On Mon, Apr 9, 2012 at 9:37 PM, Matt Almgren <[email protected]> wrote:
> Hey guys, finally getting around to poking around with this...
>
> Here's my database section:
>
> *** Database ***
>
> step  = 30
> pings = 10
>
> # consfn mrhb steps total
>
> AVERAGE  0.5    1  1008
> AVERAGE  0.5   12  4320
> MIN      0.5   12  4320
> MAX      0.5   12  4320
> AVERAGE  0.5  144   720
> MAX      0.5  144   720
> MIN      0.5  144   720
>
> I'm not too interested in keeping more than the default amount of detailed
> information. What I am interested in is seeing up to 6 months of
> non-detailed data, just to get trending information. I'm still a bit
> confused by the above values. Care to give me some numbers to punch in to
> a) keep the default detailed samples, but b) keep up to 6 months of
> (non-detailed) archival data?
>
> Thanks!
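Putting Greg's arithmetic in one place, here's a quick sketch of the retention math (values taken straight from the discussion above; step = 30 seconds throughout):

```shell
#!/bin/sh
# Retention math for the Database section above.
# Each RRA spans roughly: step * steps * total seconds.

step=30

# Tier 1: full-resolution rows (steps=1, total=1008)
# 30 * 1 * 1008 = 30240 s, about 8.4 hours (integer division rounds down)
echo "tier1 span (hours): $(( step * 1 * 1008 / 3600 ))"

# Tier 2: 12 steps per row = 6-minute data; rows needed for six months:
# 10 rows/hour * 24 hours * 30 days * 6 months
echo "tier2 rows for 6 months: $(( 10 * 24 * 30 * 6 ))"

# Matt's full-resolution figure for 180 days:
# 86400 sec/day * 180 days / 30 sec/step
echo "full-res rows for 180 days: $(( 86400 * 180 / step ))"
```

Running it prints 8 (hours, rounded down from 8.4), 43200, and 518400, matching the numbers in the thread.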
> -- Matt
>
>
> On Mon, Mar 5, 2012 at 8:57 AM, Gregory Sloop <[email protected]> wrote:
> GS> These aren't files - it's more that if you have hundreds or thousands
> GS> of devices you're sampling, there's a lot of them.
>
> Sorry Peter, I accidentally replied direct to you, as well as the
> list...
>
> Also, a typo above. It should read:
>
> These aren't *huge* files - it's more that if you have hundreds or
> thousands of devices you're sampling, there's a lot of them.
>
> And I'll clarify about I/O - with that much disk activity, writing
> to thousands of files very often, your disk may not keep up, even
> though you'd have enough space to store everything.
>
> HTH
>
> -Greg
>
>
> _______________________________________________
> smokeping-users mailing list
> [email protected]
> https://lists.oetiker.ch/cgi-bin/listinfo/smokeping-users
>
> --
> Gregory Sloop, Principal: Sloop Network & Computer Consulting
> Voice: 503.251.0452 x82
> EMail: [email protected]
> http://www.sloop.net
> ---
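To put a rough number on Greg's I/O point: each target's RRD file gets one update per step, so the write rate scales with the number of targets, not the size of the files. A back-of-the-envelope sketch (the target count here is hypothetical; Matt's is around 50 per location):

```shell
#!/bin/sh
# Hypothetical write-rate estimate for a large smokeping install.
step=30        # seconds between samples (from the Database section)
targets=2000   # hypothetical fleet size, much larger than Matt's ~50

# One RRD update per target per step, so across the fleet:
echo "approx RRD updates/sec: $(( targets / step ))"
```

With 2000 targets that works out to about 66 updates per second, spread across 2000 separate files, which is the kind of scattered write load Greg is warning about.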
