-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Miklos Szigetvari
Sent: Tuesday, February 19, 2008 9:30 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Price of CPU seconds

Hi

    For me this is more or less clear.
I have a number of colleagues here from the NT and Unix side, and they don't
understand why 0.5% of CPU time matters:

/"Would somebody knowledgeable please explain to me why some host people
get their panties in a knot (I love colorful expressions!) over a few
dozen MBs and a CPU usage of 0.5%? Are there real reasons for this, or
are they simply stuck in a 1960s mindset? How much can 408 CPU-seconds
per day cost?"
/
<SNIP>

The problem is, where mainframes are used in a charge-back mode, the
users of a machine get charged for the amount of service(s) they have
used. Those services can be measured in memory units, CPU units, I/O
units, etc. All of this also helps determine capacity and tuning needs
(a side effect of charge-back accounting, in my opinion).
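For what it's worth, here is a minimal sketch in Python of what that kind of
accounting amounts to. The resource names, rates, and usage figures are
made-up illustrations, not anyone's actual billing scheme:

    # Hypothetical charge-back: bill each user/job for the resources it consumed.
    # Rates and usage figures below are illustrative only.
    RATES = {"cpu_seconds": 0.22, "memory_mb_hours": 0.01, "io_operations": 0.0001}

    def charge(usage):
        """Sum the cost of each metered resource for one user or job."""
        return sum(RATES[resource] * amount for resource, amount in usage.items())

    print(charge({"cpu_seconds": 408, "memory_mb_hours": 500, "io_operations": 20000}))
    # -> 96.76  (408*0.22 + 500*0.01 + 20000*0.0001 = 89.76 + 5.00 + 2.00)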

The NT/*nix boxes have generally not had the ability to do such
charge-back (or have chosen not to implement it where it is available).
If they were to start doing charge-back, you would hear great howls of
pain as developers were forced to be more efficient in their use of
memory and processor cycles.

The mind-set of your friends on these platforms is typical because a
re-boot of a PC is not a big problem: it affects one person for 3-5
minutes -- big deal. But take their software and put it in a multi-user
environment (a server) and now there is a problem. Re-boot the email
server in the middle of the day and how many people are affected?
Re-boot the database server for a company where many people are running
accounting applications and how many people are affected, and for how
long?

This is why mainframes (regardless of who makes them -- UNISYS,
Honeywell, etc.) have the reliability they do: the developers' mind-set
is one of conserving resources and playing nicely in the sandbox, and
companies generally discard vendors who produce bad code. On the other
platforms, management and users are conditioned to accept outages during
production periods.

So, at $800/hr (a number I use only as an example), which works out to
about $0.22/sec, your 408 seconds cost $90.67. And if this is a system
task whose overhead gets charged to all users (in other words, its cost
is part of that $800), then the less efficient it is, the more it costs
every user.
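As a quick back-of-the-envelope check, here is that arithmetic in a few
lines of Python. The $800/hr rate and the 408 CPU-seconds are just the
example figures from above:

    # Back-of-the-envelope charge-back cost, using the example figures above.
    rate_per_hour = 800.00                     # assumed billing rate, $/CPU-hour
    rate_per_second = rate_per_hour / 3600.0   # about $0.22 per CPU-second
    cpu_seconds_per_day = 408                  # the daily usage in question
    daily_cost = cpu_seconds_per_day * rate_per_second
    print("$%.2f per day" % daily_cost)        # -> $90.67 per day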

I think you can see where and why the thinking is so different between
the groups.

So the 0.5% of CPU is no big deal to them. But that small usage
eventually accumulated 408 seconds of CPU time. If their systems were
doing charge-back for usage (which is also a way to justify the cost of
a machine/system), would your friends on the other platforms think that
CPU usage (or wastage) is justifiable? Would they be willing to pay
$90.67 a day from their budget for it?

If their systems did that kind of charge-back accounting, when would
they decide that they needed to get their knickers in a knot?

Regards,
Steve Thompson

-- All opinions expressed by me are my own and may not necessarily
reflect those of my employer. --

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
