A great idea, but I do see a potential problem: there could be a lot of data
to send to graphite, since many systems have on the order of 500-1000
processes.  On the other hand, if you choose to filter with --procfilt you
can significantly reduce that number.

But back to graphite: I think I'd have to agree with your format above,
sending one line per process metric.  But there are a lot of metrics - maybe
on the order of 20 or so - so how do you choose which subset to send?
 Maybe part of the answer is to hack something up in graphite.ph to do what
you want, and maybe ask around some graphite forums to see if others might
find this useful.  Do you speak perl?  ;)
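For reference, graphite's plaintext protocol is just one "path value
timestamp" line per metric, newline-terminated, sent to carbon's plaintext
port (2003 by default).  So whichever subset gets chosen, emitting it is
simple.  A minimal sketch in Python (the metric names and the host are
made-up examples, not anything collectl produces today):

```python
import socket
import time

def format_metrics(metrics, timestamp):
    """Render {metric_path: value} as graphite plaintext protocol lines."""
    return ["%s %s %d" % (path, value, timestamp)
            for path, value in sorted(metrics.items())]

def send_to_graphite(metrics, host="192.168.1.113", port=2003):
    """Ship one batch of metrics to carbon's plaintext listener."""
    payload = "\n".join(format_metrics(metrics, int(time.time()))) + "\n"
    sock = socket.create_connection((host, port))
    try:
        sock.sendall(payload.encode("ascii"))
    finally:
        sock.close()

# e.g. per-process metrics in the naming scheme proposed below:
# send_to_graphite({"process.apache.1234.cpu": 12.5,
#                   "process.apache.1234.sys": 3.1})
```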

As for what process data is available to you and how to deal with it: if
you look in formatit.ph for the section that processes process data,
you'll see a loop that walks all the process data and then uses each
index into an array to print what it wants.  You could probably lift a
chunk of that code for graphite.ph to use.  I think that's how I'd do it.
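In pseudocode terms, that loop plus a whitelist answers the "which subset"
question above.  A sketch in Python rather than perl, with an invented
record layout (collectl's real per-process arrays in formatit.ph are
indexed differently, so treat the field names here as placeholders):

```python
# Hypothetical record shape: each process is a dict of its ~20 counters.
# The whitelist picks the subset of metrics actually worth sending.
SEND = ("cpu", "sys", "usr", "rss", "majf")   # adjust to taste

def graphite_lines(procs, timestamp):
    """Turn process records into 'process.<cmd>.<pid>.<metric> value ts' lines."""
    lines = []
    for p in procs:
        prefix = "process.%s.%s" % (p["cmd"], p["pid"])
        for metric in SEND:
            if metric in p:   # skip counters a record doesn't carry
                lines.append("%s.%s %s %d"
                             % (prefix, metric, p[metric], timestamp))
    return lines
```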

-mark


On Thu, Nov 7, 2013 at 8:30 PM, Adrián López Tejedor <[email protected]> wrote:

> Hi!
>
> I have been looking for a while for some way to monitor all the processes
> of a server, to be able to say at any particular moment which one is
> consuming the CPU, memory, etc.
>
> Until now, the only solution I have found is pydstat (
> https://github.com/SplunkStorm/pydstat) + splunk.
> When executed, pydstat sends one line per process to syslog, with cpu,
> mem and io info.
> Then splunk reads that logfile and generates timecharts (example
> search: source="pydstat.log" | timechart span=4m avg(pct_CPU) by Command).
>
> This works great, but splunk is quite expensive, and a little like "using a
> hammer to crack a nut".
> Logstash seems like it could work the same way, but I haven't tried it
> yet; I still have to finish the grok expression to parse the data, and I
> don't know if it will generate graphs as good as splunk's.
>
>
> Looking for other options, I thought of Graphite.
> My first idea was pydstat + logster + graphite.  I think it could work,
> but I figured there must be an easier way (at least with fewer components).
>
> Looking in the graphite tools section I saw collectl.  Monitoring processes
> and a connection to graphite: perfect!
> So my first try was:
> collectl --export graphite,192.168.1.113,d=9 -sZ -i 2:2
> But nothing happens.
>
> Looking at the graphite export code I have seen that there is no handler
> for the process subsystem.
> I was wondering how hard it would be to write that part of the code.
>
> My first idea is to send data like:
> process.<CMD>.<PID>.cpu
> process.<CMD>.<PID>.sys
> ...
> process.<CMD>.cpu
> process.<CMD>.sys
> ...
>
> The idea is to have the data for each pid, but also aggregated per
> command, because I think it is more interesting to know the consumption of
> apache as one program.
>
> Some problems that I see:
>  -what happens when the process is run by an interpreter (python blabla,
> python bleble: aggregated together?? no!)
>  -threads?
>  -aggregate the info in collectl or in graphite (aggregation-rules.conf)?
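
[Editor's note: the interpreter problem above has a common workaround:
key the aggregation on the script argument rather than the interpreter
binary.  A hedged Python sketch, with an invented interpreter list and
key function, just to illustrate the idea:]

```python
# Names a process is grouped under: the script for known interpreters,
# otherwise the command itself.  The interpreter list is illustrative.
INTERPRETERS = {"python", "perl", "ruby", "java", "sh", "bash"}

def command_key(cmdline):
    """Pick the aggregation key from a command line."""
    parts = cmdline.split()
    cmd = parts[0].rsplit("/", 1)[-1]          # strip any leading path
    if cmd in INTERPRETERS and len(parts) > 1:
        return parts[1].rsplit("/", 1)[-1]     # group by script, not interpreter
    return cmd

def aggregate(per_pid):
    """Sum {(cmdline, pid): value} samples into per-command totals."""
    totals = {}
    for (cmdline, _pid), value in per_pid.items():
        key = command_key(cmdline)
        totals[key] = totals.get(key, 0) + value
    return totals
```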
>
>
> Opinions? :)
>
> Regards!
> Adrián
>
>
>
------------------------------------------------------------------------------
November Webinars for C, C++, Fortran Developers
Accelerate application performance with scalable programming models. Explore
techniques for threading, error checking, porting, and tuning. Get the most 
from the latest Intel processors and coprocessors. See abstracts and register
http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk
_______________________________________________
Collectl-interest mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/collectl-interest
