I have some graphs that become unreadable because spike loads are throwing off the scaling.
A daily cron job causes the graphs to spike to ~10 million for a brief time every day, and the graphs scale to that peak. Unfortunately, the data I'm _really_ interested in is between 1,000 and 100,000, which gets squeezed to the bottom of the graph so that it can't really be read (it's just a flat line with a few little bumps). I've switched to log scaling, but in these cases it's not enough.

I wouldn't mind losing the detail at the top of the graph in order to scale the graphs so I can see the other data. I already know the cron jobs max out the resource; what I really need to see is the activity in between the cron jobs.

Will MaxBytes accomplish this by completely ignoring the high values? I tried it, but it simply draws a red line across the graph at the MaxBytes value. Will this scale out correctly over time?

Any advice is appreciated.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023

_______________________________________________
mrtg mailing list
[email protected]
https://lists.oetiker.ch/cgi-bin/listinfo/mrtg
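[Editor's note: for context, the MaxBytes attempt described above corresponds to an mrtg.cfg fragment along these lines; the target name, community string, and host are hypothetical:]

    # Hypothetical target; replace with the real OID/community@host
    Target[daily_io]: 2:public@myrouter
    # Cap the expected maximum at the top of the range of interest
    MaxBytes[daily_io]: 100000

[In MRTG, samples above MaxBytes are normally discarded as invalid unless a higher AbsMax is also configured; the related Unscaled[] directive controls whether a graph is drawn with its top fixed at MaxBytes instead of autoscaling to the observed peak, which is likely the behavior being asked about here.]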
