You have to do the work yourself, then. I have a script that calculates queries per second: take the query count from the current sampling interval, subtract the count from the previous interval, and divide the difference by the number of seconds elapsed, and voila, queries/sec. You then use gmetric to send that value to Ganglia. Here is an implementation of the same approach for disk stat metrics:
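The counter-delta approach described above can be sketched roughly as follows. The function and metric names here are hypothetical; the `--name`/`--value`/`--type`/`--units` flags are standard gmetric options, but adjust the binary path and units for your install:

```python
import subprocess  # used to invoke gmetric on a real Ganglia host


def rate_per_second(prev_count, curr_count, elapsed_seconds):
    """Rate computed from two cumulative counter samples taken
    elapsed_seconds apart: (current - previous) / elapsed."""
    return (curr_count - prev_count) / float(elapsed_seconds)


def gmetric_command(name, value, units):
    # Builds the gmetric invocation; pass the list to subprocess.call()
    # on a host where gmond is running, e.g.:
    #   subprocess.call(gmetric_command("queries_per_sec", qps, "queries/sec"))
    return [
        "gmetric",
        "--name", name,
        "--value", "%.2f" % value,
        "--type", "float",
        "--units", units,
    ]


# Example: the counter read 300 last time and 1200 this time, 15 s apart.
qps = rate_per_second(300, 1200, 15)  # (1200 - 300) / 15 = 60.0
cmd = gmetric_command("queries_per_sec", qps, "queries/sec")
```

In a real script you would persist the previous sample (and its timestamp) between runs, e.g. in a state file, since each cron or gmond invocation starts fresh.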
https://github.com/ganglia/gmetric/blob/master/disk/diskio.pl/ganglia_disk_stats.pl

Vladimir

On 1 Apr 2011 14:52:15 -0000, "Indranil Chakravorty" <indran...@rediff-inc.com> wrote:

> Thanks Ron. But is it not possible to aggregate the metrics over, say, 13
> seconds or a minute and then divide to get the per-second metric using
> C plug-ins? If this is possible, would you be able to give me any more
> pointers in that direction? Thanks a ton already.
>
> Thanks,
> Neel
>
> On Fri, 01 Apr 2011 19:41:26 +0530 TheRonbo <ron.ree...@gmail.com> wrote:
>
> > The short answer is to author either Python or C plug-ins.
> >
> > RTFM
> >
> > However, Ganglia is really only useful for metrics measured no finer
> > than 13 seconds.
> >
> > Why 13 seconds? Because that's the granularity of the RRD files that
> > accumulate your data.
> >
> > If you really want to try this, you'll first have to update the RRD
> > data files to have more 'buckets' to accumulate on 1-second intervals.
> >
> > Sounds like Ganglia may not be the right tool for what you are
> > attempting.
> >
> > Ron Reeder

_______________________________________________
Ganglia-general mailing list
Ganglia-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-general
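For anyone following Ron's point about RRD "buckets": the round-robin archive layout gmetad uses when it creates .rrd files is controlled by the RRAs directive in gmetad.conf. A sketch of a layout with a fine-grained first archive (the row counts below are illustrative, not the shipped defaults, and whether stock gmetad actually polls and writes that often is a separate question):

```
# gmetad.conf: RRAs lists the round-robin archives gmetad creates
# for new metrics, in standard rrdtool RRA:CF:xff:steps:rows syntax.
# First archive: 1-step averages kept for 3600 rows (finest resolution);
# then per-minute-scale and per-hour-scale rollups.
RRAs "RRA:AVERAGE:0.5:1:3600" "RRA:AVERAGE:0.5:60:1440" "RRA:AVERAGE:0.5:3600:720"
```

Note that this only affects RRD files created after the change; existing files keep their old bucket layout and would have to be deleted or rebuilt with rrdtool.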