On Sun, Aug 2, 2009 at 1:26 AM, Scott Rohling <scott.rohl...@gmail.com> wrote:

> Has anyone played around with using the VM accounting data, along with Linux
> usage data (sar data for example - capturing process usage) to come up with
> a way to assign usage at the VM level (i.e. host CPU hours) to individual
> processes?

That does not work in general with sar data, because in most situations
the average lifetime of a process is much shorter than the sampling
interval, so many processes are never seen by the sampler at all. Also,
for most shops memory usage is the main issue today, rather than CPU
usage, and that is not in the accounting data either.
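
To see why, here is a toy sketch (Python, all numbers invented, not how
sar is actually implemented) of what happens when you sample per-process
counters at a fixed interval while most processes live only a few
seconds: the guest's total CPU, which CP accounting does capture, ends
up far larger than what the per-process samples can explain.

import random

INTERVAL = 60.0      # seconds between samples (like a sar/pidstat interval)
RUNTIME  = 3600.0    # one hour of wall clock time

random.seed(1)
total_cpu   = 0.0    # CPU the guest really used (what CP accounting charges)
sampled_cpu = 0.0    # CPU attributable to processes the sampler ever saw

t = 0.0
while t < RUNTIME:
    life = random.uniform(1.0, 10.0)   # process lives 1-10 seconds
    cpu  = life * 0.5                  # uses half a CPU while it lives
    total_cpu += cpu
    next_sample = (int(t // INTERVAL) + 1) * INTERVAL
    if next_sample < t + life:
        # the process was alive at a sample point, so at best we see it
        sampled_cpu += cpu
    t += life

print("capture ratio: %.0f%%" % (100.0 * sampled_cpu / total_cpu))

With a 60-second interval and processes that live a few seconds each,
the sampler can explain only on the order of 10% of the CPU the guest
actually burned.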

You're probably aware that we do this with monitor data. ESALPS takes
both process data from Linux and z/VM monitor data, and combines them
into data that is good enough for accounting at the process,
application, or user level.
We collect the Linux data via SNMP, but use our own agent to get a high
capture ratio (near 100%). With the standard Linux SNMP data you often
don't get more than 10% of your usage reported, which means your
charges may be off by a factor of 10.
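
If you want to check how your own collector is doing, the arithmetic is
simple. Here is a rough sketch (the function and process names are
invented for the example): compare the per-process CPU seconds reported
from inside Linux against the total CPU z/VM charged to the guest over
the same interval.

def capture_ratio(process_cpu_seconds, guest_cpu_seconds):
    # process_cpu_seconds: dict of process or application name -> CPU seconds
    # guest_cpu_seconds:   virtual CPU time z/VM charged the guest
    return sum(process_cpu_seconds.values()) / guest_cpu_seconds

# Example: processes account for 310 CPU seconds, but z/VM says the guest
# used 3000 CPU seconds in the same interval -- a capture ratio around 10%.
procs = {"db2sysc": 180.0, "java": 95.0, "sshd": 5.0, "other": 30.0}
print("capture ratio: %.0f%%" % (100 * capture_ratio(procs, 3000.0)))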

> I'm thinking of a grid environment, where you would want to assign usage to
> accounts -- not at the server level - but at the process level.  Given a set
> CPU hour rate at the VM level - you could (hopefully) accurately determine
> the real cost of individual Linux processes.   Maybe cut C0 z/VM accounting
> records daily from Linux (using cpint) to feed the data to the VM accounting
> file.

There is an interface for Linux to put per-process data into the z/VM
accounting data, but it is incomplete and not useful for what you want.

Even when you charge people by virtual machine usage, you still want
complete usage data at the process level. Most customers don't accept a
bill with just the totals on it; they want you to break it down to a
level where they recognize the usage. That way they can tell that the
charges were higher on Tuesday because DB2 was looping, or things like
that. If your detail only covers 10% or so, it does not help to
convince the customer that your data is correct.

> I'm not sure it's even possible.. but perhaps through some statistical
> formula (overall cost of CPU for VM guest is x - process y used 10% of it)
> you can get close?

Have a look at this presentation: http://www.rvdheij.nl/Presentations/zLX44.pdf
I will present (part of it, unfortunately) at SHARE in Denver (S9217, Mon 11:00)
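
To make the proration idea from your question concrete, here is a
minimal sketch (all names and numbers are made up): take the guest's
CPU cost from the z/VM accounting data and split it over processes in
proportion to the CPU seconds each one was seen using.

def prorate(guest_cost, process_cpu_seconds):
    total = sum(process_cpu_seconds.values())
    return {name: guest_cost * cpu / total
            for name, cpu in process_cpu_seconds.items()}

# Guest used 2.5 CPU hours at 40 units per hour = 100 units for the day.
charges = prorate(100.0, {"db2sysc": 1800.0, "java": 600.0, "cron": 30.0})
for name, cost in sorted(charges.items(), key=lambda kv: -kv[1]):
    print("%-10s %6.2f" % (name, cost))

The catch is the capture ratio mentioned above: if the per-process
detail explains only 10% of what the guest really used, the proration
silently inflates every process's share, and the customer has no way to
tell which numbers are real.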

Rob
-- 
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/
