Re: Energy consumption vs % utilization?

2004-10-27 Thread Andre Oppermann
Nik Hug wrote:
From: "Andre Oppermann"
From running a Colo in a place with ridiculously high electricity
costs (Zurich, Switzerland) I can tell you that the energy consumption
of routers/telco (70%) and servers (30%) changes significantly
throughout the day.  It pretty much follows the traffic graph.  There
is a solid base load just because the stuff is powered up, and from there
it goes up as much as 20-30% depending on the routing/computing load of
the boxes.  To simplify things you can say that you spend that many
"mWh" (milliwatt-hours) per packet switched/routed or HTTP request
answered over the base load.  I haven't tried to calculate how much energy
routing a packet on a Cisco 12k or Juniper M40 costs, though.  Would be
very interesting if someone (a student) could do that calculation.
The same variation between night and day here - but from our point of view
the consumption of the air-conditioning packs makes the difference during
the day ... traffic graphs and outside temperature graphs show more or less
the same ups and downs. Would be interesting to have separate values for the
power consumption of server equipment versus air conditioning ...
In this case the air conditioning is not included.  That is measured on a
separate circuit for which I don't have any figures ready.
Also note that high-end routers in particular draw power in a load-dependent
way.  With SONET/SDH gear I haven't seen that.  The reason is circuit
switching: they continuously switch the same amount of data.
--
Andre
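The back-of-the-envelope "mWh per packet" idea above can be sketched in a few lines of Python. The wattages and packet rate here are invented placeholders, not measurements of any real router:

```python
# Rough incremental energy per packet, derived from the difference between a
# box's base-load draw and its draw under forwarding load. All figures below
# are illustrative assumptions, not measured values.

def energy_per_packet_mwh(base_watts, loaded_watts, packets_per_second):
    """Energy per packet, in milliwatt-hours, above the base load."""
    extra_watts = loaded_watts - base_watts        # load-dependent draw only
    extra_mwh_per_hour = extra_watts * 1000.0      # 1 W for 1 h = 1000 mWh
    packets_per_hour = packets_per_second * 3600.0
    return extra_mwh_per_hour / packets_per_hour

# Hypothetical example: 2000 W at base load, 2500 W while forwarding 1M pps.
print(energy_per_packet_mwh(2000, 2500, 1_000_000))
```

The interesting measurement, which nobody in the thread had done, would be pinning down those two wattage figures on a real Cisco 12k or Juniper M40.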


Re: Energy consumption vs % utilization?

2004-10-27 Thread Nik Hug

From: "Andre Oppermann"
>  From running a Colo in a place with ridiculously high electricity
> costs (Zurich, Switzerland) I can tell you that the energy consumption
> of routers/telco (70%) and servers (30%) changes significantly
> throughout the day.  It pretty much follows the traffic graph.  There
> is a solid base load just because the stuff is powered up, and from there
> it goes up as much as 20-30% depending on the routing/computing load of
> the boxes.  To simplify things you can say that you spend that many
> "mWh" (milliwatt-hours) per packet switched/routed or HTTP request
> answered over the base load.  I haven't tried to calculate how much energy
> routing a packet on a Cisco 12k or Juniper M40 costs, though.  Would be
> very interesting if someone (a student) could do that calculation.

The same variation between night and day here - but from our point of view
the consumption of the air-conditioning packs makes the difference during
the day ... traffic graphs and outside temperature graphs show more or less
the same ups and downs. Would be interesting to have separate values for the
power consumption of server equipment versus air conditioning ...

Greetings

Nik



Re: Energy consumption vs % utilization?

2004-10-27 Thread Andre Oppermann
Steven M. Bellovin wrote:
In message <[EMAIL PROTECTED]>, Alex Rubenstein writes:
Hello,
I've done quite a bit of studying power usage and such in datacenters over
the last year or so.

I'm looking for information on energy consumption vs percent utilization. In

other words if your datacenter consumes 720 MWh per month, yet on average 
your servers are 98% underutilized, you are wasting a lot of energy (a hot 
topic these days). Does anyone here have any real data on this?
I've never done a study on power used vs. CPU utilization, but my guess is 
that the heat generated from a PC remains fairly constant -- in the grand 
scheme of things -- no matter what your utilization is.
I doubt that very much, or we wouldn't have variable speed fans.  I've 
monitored CPU temperature when doing compilations; it goes up 
significantly.  That suggests that the CPU is drawing more power at 
such times.
From running a Colo in a place with ridiculously high electricity
costs (Zurich, Switzerland) I can tell you that the energy consumption
of routers/telco (70%) and servers (30%) changes significantly
throughout the day.  It pretty much follows the traffic graph.  There
is a solid base load just because the stuff is powered up, and from there
it goes up as much as 20-30% depending on the routing/computing load of
the boxes.  To simplify things you can say that you spend that many
"mWh" (milliwatt-hours) per packet switched/routed or HTTP request
answered over the base load.  I haven't tried to calculate how much energy
routing a packet on a Cisco 12k or Juniper M40 costs, though.  Would be
very interesting if someone (a student) could do that calculation.
--
Andre


Re: Energy consumption vs % utilization?

2004-10-26 Thread David Lesher

Speaking on Deep Background, the Press Secretary whispered:
> 

[KWH meter]

> Instead of doing all this, just buy a Kill-A-Watt meter for about $30, and
> get an instant reading of Watts, Amps, VAs, power factor, and KWH.

Interesting.

I've not seen anything near that cheap. The spec sheet is
rather lacking but if it does the job..

Note that it's 120V only and 15A; lots of racks exceed that.



-- 
A host is a host from coast to coast ................ [EMAIL PROTECTED]
& no one will talk to a host that's close ...... [v] (301) 56-LINUX
Unless the host (that isn't close) ....................... pob 1433
is busy, hung or dead ................................. 20915-1433


Re: Energy consumption vs % utilization?

2004-10-26 Thread Krzysztof Adamski

On Tue, 26 Oct 2004, David Lesher wrote:

>
> Speaking on Deep Background, the Press Secretary whispered:
> >
> >
> > You should be able to pick up a simple current/wattage meter from a local
> > hardware store for $20 or so. That will tell you that on a modern
> > dual-CPU machine the power consumption at idle CPU is about 60% of peak.
> > The rest is consumed by drives, fans, RAM, etc. In wattage terms the
> > difference is 100-120 W (50-60 W per CPU).
>
> Bogus data alert
>
> An ammeter will tell you amps. But in the world of switching power
> supplies, that does not beget watts. [Why? is an exercise for
> the student. Start with "power factor" and "VARs" and worry about
> asymmetric loads...]
>
> If you want to talk watts, as you must to worry about HVAC,
> or really watt-hours... acquire a power-company-type meter - in
> glass with a whirligig. Put it in a meter box with plugs. [See
> your local electrical wholesaler...]
>
> (The rotating disk watthour meter is amazingly accurate under
> almost any kind of load waveform. Only time-of-day metering has
> spurred the utilities to replace them.)
>
> Plug the machines into it; it into the wall. Note the numbers
> and the time, and come back in 24 hours.

Instead of doing all this, just buy a Kill-A-Watt meter for about $30, and
get an instant reading of Watts, Amps, VAs, power factor, and KWH.

K



Re: Energy consumption vs % utilization?

2004-10-26 Thread David Lesher

Speaking on Deep Background, the Press Secretary whispered:
> 
> 
> You should be able to pick up a simple current/wattage meter from a local
> hardware store for $20 or so. That will tell you that on a modern
> dual-CPU machine the power consumption at idle CPU is about 60% of peak.
> The rest is consumed by drives, fans, RAM, etc. In wattage terms the
> difference is 100-120 W (50-60 W per CPU).

Bogus data alert

An ammeter will tell you amps. But in the world of switching power
supplies, that does not beget watts. [Why? is an exercise for
the student. Start with "power factor" and "VARs" and worry about
asymmetric loads...]

If you want to talk watts, as you must to worry about HVAC,
or really watt-hours... acquire a power-company-type meter - in
glass with a whirligig. Put it in a meter box with plugs. [See
your local electrical wholesaler...]

(The rotating disk watthour meter is amazingly accurate under
almost any kind of load waveform. Only time-of-day metering has
spurred the utilities to replace them.)

Plug the machines into it; it into the wall. Note the numbers
and the time, and come back in 24 hours.
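The ammeter caveat is easy to show numerically: volts times amps gives only apparent power (VA), and you need the power factor to get real watts. The 0.75 power factor below is an assumed figure for an older non-PFC switching supply, not a measured one:

```python
# Apparent vs. real power: why amps alone overstate the wattage of a
# switching-supply load. The power factor here is an assumed example value.

def apparent_power_va(volts, amps):
    """Apparent power: what V x A (and a plain ammeter) suggests."""
    return volts * amps

def real_power_w(volts, amps, power_factor):
    """Real power: what the watthour meter bills and the HVAC must remove."""
    return volts * amps * power_factor

# A 120 V load drawing 5 A, with an assumed power factor of 0.75:
print(apparent_power_va(120, 5))       # 600 VA apparent
print(real_power_w(120, 5, 0.75))      # 450.0 W real
```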




-- 
A host is a host from coast to coast ................ [EMAIL PROTECTED]
& no one will talk to a host that's close ...... [v] (301) 56-LINUX
Unless the host (that isn't close) ....................... pob 1433
is busy, hung or dead ................................. 20915-1433


Re: Energy consumption vs % utilization?

2004-10-26 Thread Alex Rubenstein


I doubt that very much, or we wouldn't have variable speed fans.  I've
monitored CPU temperature when doing compilations; it goes up
significantly.  That suggests that the CPU is drawing more power at
such times.
I don't doubt what you are saying. However, I did say "in the grand
scheme of things," meaning that the heat given off by the CPU, and the
change thereof, is still small relative to the constant heat given off by
spinning hard drives, power supplies, etc.


Of course, there's another implication -- if the CPU isn't using the
power, the draw from the power line is less, which means that much less
electricity is being used.
An important point, but I still bet it's relatively small.
It's going to be a busy weekend at the Rubenstein Lab (a.k.a. my garage);
I'll post my findings.


-- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben --
--Net Access Corporation, 800-NET-ME-36, http://www.nac.net   --



RE: Energy consumption vs % utilization?

2004-10-26 Thread Hannigan, Martin



Sure, but colos don't operate on variances of power based on CPU.
They operate on committed power per cabinet, i.e. 120 W per cabinet
etc., and the ability to cool a fully loaded facility.

If you had a thousand CPUs each use 1 W more, all at the same time,
that's equal to about 9.5 A. 1,000 x 2 W = 20 A, 1,000 x 3 W = 31 A, etc.
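The watts-to-amps arithmetic above is just P/V; a tiny helper makes the assumed supply voltage explicit (the figures quoted imply roughly 100-105 V, while a nominal 120 V circuit gives slightly lower currents):

```python
# Aggregate current increase when many CPUs each draw a little more power.
# The supply voltage is an explicit assumption; vary it to match your plant.

def extra_amps(num_cpus, extra_watts_per_cpu, volts=120.0):
    """Total additional current: I = P / V."""
    return num_cpus * extra_watts_per_cpu / volts

# 1,000 CPUs each drawing 1 W more:
print(round(extra_amps(1000, 1), 1))             # 8.3 A on a 120 V circuit
print(round(extra_amps(1000, 1, volts=105), 1))  # 9.5 A at ~105 V
```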

--
Martin Hannigan (c) 617-388-2663
VeriSign, Inc.  (w) 703-948-7018
Network Engineer IV   Operations & Infrastructure
[EMAIL PROTECTED]



> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
> james edwards
> Sent: Tuesday, October 26, 2004 4:18 PM
> To: Alex Rubenstein
> Cc: [EMAIL PROTECTED]
> Subject: Re: Energy consumption vs % utilization?
> 
> 
> 
> > That's an insane statement.
> >
> > Are you saying, "You are only wasting money on things if you aren't
> > profitable" ?
> >
> > /action shakes head.
> 
> No, I am not, but my statement sure did sound like that was what I was
> saying.
> I do think it is apples to oranges comparing CPU % to total power used
> and coming up with a wasted factor. My colo needs X amps just to run at
> idle; I don't call this waste. It is the cost of doing business. Power
> factor causes losses. So you need enough customers to cover this and
> other expenses.
> 
> I guess we need a definition of waste here.
> 
> I would say the heat produced by pulling all those amp-hours is waste. It
> could be possible to harvest this and reuse it elsewhere.
> 
> So, just because you are profitable does not mean there is no waste, but
> it also depends on how you classify waste. Also, do the methods to avoid
> this waste justify (pay for, over time) their use?
> 
> James H. Edwards
> Routing and Security Administrator
> At the Santa Fe Office: Internet at Cyber Mesa
> [EMAIL PROTECTED]  [EMAIL PROTECTED]
> http://www.cybermesa.com/ContactCM
> (505) 795-7101
> 


Re: Energy consumption vs % utilization?

2004-10-26 Thread Petri Helenius
Alex Rubenstein wrote:

I'm looking for information on energy consumption vs percent 
utilization. In other words if your datacenter consumes 720 MWh per 
month, yet on average your servers are 98% underutilized, you are 
wasting a lot of energy (a hot topic these days). Does anyone here 
have any real data on this?

I've never done a study on power used vs. CPU utilization, but my 
guess is that the heat generated from a PC remains fairly constant -- 
in the grand scheme of things -- no matter what your utilization is.
You should be able to pick up a simple current/wattage meter from a local
hardware store for $20 or so. That will tell you that on a modern
dual-CPU machine the power consumption at idle CPU is about 60% of peak.
The rest is consumed by drives, fans, RAM, etc. In wattage terms the
difference is 100-120 W (50-60 W per CPU).

All modern operating systems do a moderate job of saving CPU wattage when
they are idle (the BSDs, Linux, Mac OS X, WinXP, etc.)
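Taking the idle-at-60%-of-peak observation at face value, the monthly energy spread for one such box is easy to bound. The 300 W peak figure is an assumption for the sake of the arithmetic:

```python
# Monthly energy band for a server that idles at ~60% of its peak draw.
# Peak wattage is an assumed example figure, not a measurement.

def monthly_kwh(watts, hours=720):
    """Energy over a 720-hour month at a constant draw."""
    return watts * hours / 1000.0

peak_w = 300                   # assumed peak draw of a dual-CPU server
idle_w = 0.6 * peak_w          # idle at ~60% of peak, per the post

print(monthly_kwh(peak_w))                                  # 216.0 kWh flat out
print(monthly_kwh(idle_w))                                  # 129.6 kWh idle
print(round(monthly_kwh(peak_w) - monthly_kwh(idle_w), 1))  # 86.4 kWh spread
```

So even a fully loaded box only adds about 40% over its idle bill, which matches the thread's observation that the base load dominates.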

Pete



Re: Energy consumption vs % utilization?

2004-10-26 Thread james edwards

> That's an insane statement.
>
> Are you saying, "You are only wasting money on things if you aren't
> profitable" ?
>
> /action shakes head.

No, I am not, but my statement sure did sound like that was what I was
saying.
I do think it is apples to oranges comparing CPU % to total power used and
coming up with a wasted factor. My colo needs X amps just to run at idle;
I don't call this waste. It is the cost of doing business. Power factor
causes losses. So you need enough customers to cover this and other
expenses.

I guess we need a definition of waste here.

I would say the heat produced by pulling all those amp-hours is waste. It
could be possible to harvest this and reuse it elsewhere.

So, just because you are profitable does not mean there is no waste, but it
also depends on how you classify waste. Also, do the methods to avoid this
waste justify (pay for, over time) their use?

James H. Edwards
Routing and Security Administrator
At the Santa Fe Office: Internet at Cyber Mesa
[EMAIL PROTECTED]  [EMAIL PROTECTED]
http://www.cybermesa.com/ContactCM
(505) 795-7101



Re: Energy consumption vs % utilization?

2004-10-26 Thread Steven M. Bellovin

In message <[EMAIL PROTECTED]>, Alex Rubenstein writes:
>
>
>Hello,
>
>I've done quite a bit of studying power usage and such in datacenters over
>the last year or so.
>
>> I'm looking for information on energy consumption vs percent utilization. In
> 
>> other words if your datacenter consumes 720 MWh per month, yet on average 
>> your servers are 98% underutilized, you are wasting a lot of energy (a hot 
>> topic these days). Does anyone here have any real data on this?
>
>I've never done a study on power used vs. CPU utilization, but my guess is 
>that the heat generated from a PC remains fairly constant -- in the grand 
>scheme of things -- no matter what your utilization is.
>

I doubt that very much, or we wouldn't have variable speed fans.  I've 
monitored CPU temperature when doing compilations; it goes up 
significantly.  That suggests that the CPU is drawing more power at 
such times.

Of course, there's another implication -- if the CPU isn't using the 
power, the draw from the power line is less, which means that much less 
electricity is being used.

--Steve Bellovin, http://www.research.att.com/~smb




Re: Energy consumption vs % utilization?

2004-10-26 Thread Alex Rubenstein

On Tue, 26 Oct 2004, Erik Haagsman wrote:
It's more or less the truth though.
I think the comment was outside the scope of the original discussion.
It seemed to me that:

It is only waste if the P & L statement is showing no profit.
implied that any business practice is OK, as long as you are profitable.
It is that concept that I felt was insane.



-- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben --
--Net Access Corporation, 800-NET-ME-36, http://www.nac.net   --



Re: Energy consumption vs % utilization?

2004-10-26 Thread Erik Haagsman

It's more or less the truth though. Only on rare occasions, such as the
cluster/fail-over scenario given, can you actually supply less power to
certain machines, and power use is largely unrelated to their actual
utilisation. Keep an eye on your UPS load during peak hours and you'll
see the load rising when traffic and server utilisation rise, but
compared to the baseline power needed to feed the servers these
fluctuations are peanuts.
You supply a server with enough power to run...how is this waste
exactly...? If anyone is wasting anything, it's perhaps hardware
manufacturers that don't design efficiently enough, but power that you
provide and that's used (and paid for) by your customers is not wasted
IMO.

Cheers,

Erik

On Tue, 2004-10-26 at 21:07, Alex Rubenstein wrote:
> That's an insane statement.
> 
> Are you saying, "You are only wasting money on things if you aren't 
> profitable" ?
> 
> /action shakes head.
> 
> 
> 
> On Tue, 26 Oct 2004, james edwards wrote:
> 
> >
> >>
> >> Sorry, this is somewhat OT.
> >>
> >> I'm looking for information on energy consumption vs percent utilization.
> >> In other words if your datacenter consumes 720 MWh per month, yet on
> >> average your servers are 98% underutilized, you are wasting a lot of
> >> energy (a hot topic these days). Does anyone here have any real data on
> >> this?
> >>
> >> Grisha
> >
> > It is only waste if the P & L statement is showing no profit.
> >
> 
> -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben --
> --Net Access Corporation, 800-NET-ME-36, http://www.nac.net   --
-- 
---
Erik Haagsman
Network Architect
We Dare BV
tel: +31(0)10 7507008
fax:+31(0)10 7507005
http://www.we-dare.nl




Re: Energy consumption vs % utilization?

2004-10-26 Thread Jack Bates
Erik Haagsman wrote:


Which means you have to make sure the revenue generated by those 98%
underutilized servers covers your power bill and other expenses,
preferably leaving some headroom for a healthy profit margin. As long
as that's the case there's no real waste of energy; the services
people run on their servers are supposed to be worth the energy and
other costs, whether they physically fully utilize their power or not.

Yet there are a lot of clusters which are designed for peak load, and
which will waste energy during off-peak hours. Developing an in-house
system for shutting down power to excess servers in a cluster might
increase that healthy profit margin.

-Jack


Re: Energy consumption vs % utilization?

2004-10-26 Thread Alex Rubenstein

That's an insane statement.
Are you saying, "You are only wasting money on things if you aren't 
profitable" ?

/action shakes head.

On Tue, 26 Oct 2004, james edwards wrote:

Sorry, this is somewhat OT.
I'm looking for information on energy consumption vs percent utilization.
In other words if your datacenter consumes 720 MWh per month, yet on
average your servers are 98% underutilized, you are wasting a lot of
energy (a hot topic these days). Does anyone here have any real data on
this?
Grisha
It is only waste if the P & L statement is showing no profit.
-- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben --
--Net Access Corporation, 800-NET-ME-36, http://www.nac.net   --



Re: Energy consumption vs % utilization?

2004-10-26 Thread Alex Rubenstein

Hello,
I've done quite a bit of studying power usage and such in datacenters over
the last year or so.

I'm looking for information on energy consumption vs percent utilization. In 
other words if your datacenter consumes 720 MWh per month, yet on average 
your servers are 98% underutilized, you are wasting a lot of energy (a hot 
topic these days). Does anyone here have any real data on this?
I've never done a study on power used vs. CPU utilization, but my guess is 
that the heat generated from a PC remains fairly constant -- in the grand 
scheme of things -- no matter what your utilization is.

I say this because, whether a CPU is idle or 100% utilized, it is still
grossly inefficient, on the order of less than 10% in all cases (i.e., 1
watt in returns at least 0.9 watts of heat, no matter the loading of the CPU).

-- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben --
--Net Access Corporation, 800-NET-ME-36, http://www.nac.net   --



Re: Energy consumption vs % utilization?

2004-10-26 Thread james edwards

>
> Sorry, this is somewhat OT.
>
> I'm looking for information on energy consumption vs percent utilization.
> In other words if your datacenter consumes 720 MWh per month, yet on
> average your servers are 98% underutilized, you are wasting a lot of
> energy (a hot topic these days). Does anyone here have any real data on
> this?
>
> Grisha

It is only waste if the P & L statement is showing no profit.



RE: Energy consumption vs % utilization?

2004-10-26 Thread Hannigan, Martin



This is far more complicated than that. That's why I suggested
the Datacenters list.

A lot is determined not just by your revenue target per square foot,
but by cooling, your distribution, your breaker density and sizing, etc.

-M<



--
Martin Hannigan (c) 617-388-2663
VeriSign, Inc.  (w) 703-948-7018
Network Engineer IV   Operations & Infrastructure
[EMAIL PROTECTED]



> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
> Nils Ketelsen
> Sent: Tuesday, October 26, 2004 2:09 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Energy consumption vs % utilization?
> 
> 
> 
> On Tue, Oct 26, 2004 at 01:52:51PM -0400, Gregory (Grisha) Trubetskoy wrote:
> 
> > Sorry, this is somewhat OT.
> 
> Also sorry, but I think the question itself is completely flawed.
> 
> > I'm looking for information on energy consumption vs percent utilization.
> > In other words if your datacenter consumes 720 MWh per month, yet on
> > average your servers are 98% underutilized, you are wasting a lot of
> > energy (a hot topic these days). Does anyone here have any real data on
> > this?
> 
> What does 98% underutilized mean?
> 
> What is the utilization of a device with fully built-out RAM that is used
> to 100%, when the CPU is used only 2%?
> 
> What is the utilization of a system that uses two percent of the
> memory and two percent of the available CPU time, when the policy
> of the top-secret organization owning the system requires that the
> application run on a separate machine?
> 
> Sure, many machines might be (computing-power-wise) able to
> handle firewalling, routing, web serving, database serving, mail serving
> and storing accounting data, but there might still be very good reasons
> to separate these onto different machines.
> 
> If you take points like policy requirements (see above:
> an application might by policy utilize a machine to 100%), different types
> of resources, failover, etc. into account, you might end up
> with different numbers than by just looking at the CPU (and I
> have the feeling that is what you did or were intending to do).
> 
> Actually I think nobody calculates "real" utilization,
> as there are a lot of soft factors to be taken into account.
> 
> Nils
> 


Re: Energy consumption vs % utilization?

2004-10-26 Thread Valdis . Kletnieks
On Tue, 26 Oct 2004 13:52:51 EDT, "Gregory (Grisha) Trubetskoy" said:

> average your servers are 98% underutilized, you are wasting a lot of 

Remember in your analysis to include premature hardware failure due to too many
power cycles...

A server can *easily* "on average" be running at only 20-30% of capacity,
simply because requests arrive at essentially random times - so you have to
deal with the case where the "average" over a minute is 20% of capacity for
600 hits (10/sec), but some individual seconds have only 1 hit and others
have 50 (at which point you're running with the meter spiked).

Time-of-day issues also get involved - you may need to have enough iron to
handle the peak load at 2PM, but be sitting mostly idle at 2AM. Unfortunately,
I've seen very few rack-mount boxes that support partial power-down to save
energy - if it's got 2 Xeon processors and 2G of memory, both CPUs and all the
memory cards are hot all the time...

There's also latency issues - if some CPUs on a node or some nodes in a cluster
are powered down, there is a timing lag between when you start firing them up
and when they're ready to go - so you need to walk the very fine line between
"too short a spike powers stuff up needlessly" (very bad for the hardware), and
"too much dampening means you get bottlenecked while waiting for spin-up".

(Been there, done that - there's a 1200-node cluster across the hall, and
there's no really good/easy way to ramp up all 1200 for big jobs and power
down 800 nodes if there's only 400 nodes' worth of work handy.  So we end
up leaving it all fired up and letting the nodes' "idle loop" be "good
enough".)

If it were as easy as all that, we'd all be doing it already. :)
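The spin-up dilemma described above is essentially a hysteresis problem: react only to load that has been sustained for some dwell time. A minimal sketch, with arbitrary made-up capacities and thresholds:

```python
import math

# Dwell-time damping for cluster power management: scale the node count on
# the *sustained* load over the last few samples, so a one-sample spike
# neither powers nodes up needlessly nor triggers extra power cycles.

def nodes_wanted(load_history, per_node_capacity, min_nodes=1, dwell=3):
    """Node count justified by load sustained across the last `dwell` samples."""
    sustained = min(load_history[-dwell:])   # a spike must persist to count
    return max(min_nodes, math.ceil(sustained / per_node_capacity))

# A sustained rise brings nodes up; a one-sample spike does not:
print(nodes_wanted([10, 50, 52, 55], per_node_capacity=10))  # 5
print(nodes_wanted([10, 10, 80, 10], per_node_capacity=10))  # 1
```

Too short a dwell wastes power cycles; too long a dwell bottlenecks while waiting for spin-up, which is exactly the fine line the post describes.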




Re: Energy consumption vs % utilization?

2004-10-26 Thread Deepak Jain

Actually I think nobody calculates "real" utilization,
as there are a lot of soft factors to be taken into account.

Electrical usage for a datacenter is pretty consistent throughout a
month, even as measured by a sum of days. The utilization of the systems
inside it is almost anything but consistent... even at boot-up it would
be nearly impossible to determine the instantaneous power draw required.

Separately, deploying applications to clusters of machines where the 
cluster is dynamically resized [more machines are turned on/off] 
depending on load is a non-trivial function and outside the operational 
experience/need of most customers.

But even assuming you could do that, the best approximation I could
imagine for an Internet data center would be something akin to its
network traffic graph [the assumption being that network load among a
stable set of customers is proportionate to the processing power
required to produce it, even if an individual customer uses much more
CPU power at a specific time quantum]. Basically, if you use 1 Mb/s at
noon on Monday and 1.2 Mb/s at noon on Tuesday with the same customer
set, you can probably estimate that your system's load is 20% higher
than it was on Monday, assuming you aren't operating at either the very
low or the very high extreme. At least that would be my thought.
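That traffic-as-proxy approximation is just a ratio; a quick sketch, under the same assumptions stated above (stable customer set, operation away from either extreme):

```python
# Estimate relative system load from the network traffic graph alone,
# assuming load is roughly proportional to traffic for a stable customer mix.

def estimated_load_change(traffic_then_mbps, traffic_now_mbps):
    """Fractional load change inferred from two traffic readings."""
    return traffic_now_mbps / traffic_then_mbps - 1.0

# 1 Mb/s at noon Monday vs. 1.2 Mb/s at noon Tuesday -> ~20% more load:
print(round(estimated_load_change(1.0, 1.2) * 100))  # 20
```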

If all applications were designed to be virtualized, mainframe style,
this clustering concept might work to dynamically redeploy resources...
However, the mainframes themselves are inherently not smooth-stepped in
terms of their power/CPU curves, so it's probably a dead issue in that
regard.

Deepak Jain
AiNET


Re: Energy consumption vs % utilization?

2004-10-26 Thread Erik Haagsman

On Tue, 2004-10-26 at 19:52, Gregory (Grisha) Trubetskoy wrote:
> In other words if your datacenter consumes 720 MWh per month, yet on 
> average your servers are 98% underutilized, you are wasting a lot of 
> energy (a hot topic these days). 

Which means you have to make sure the revenue generated by those 98%
underutilized servers covers your power bill and other expenses,
preferably leaving some headroom for a healthy profit margin.
As long as that's the case there's no real waste of energy; the services
people run on their servers are supposed to be worth the energy and
other costs, whether they physically fully utilize their power or not.

Cheers,
-- 
---
Erik Haagsman
Network Architect
We Dare BV
tel: +31(0)10 7507008
fax:+31(0)10 7507005
http://www.we-dare.nl




Re: Energy consumption vs % utilization?

2004-10-26 Thread Nils Ketelsen

On Tue, Oct 26, 2004 at 01:52:51PM -0400, Gregory (Grisha) Trubetskoy wrote:

> Sorry, this is somewhat OT.

Also sorry, but I think the question itself is completely flawed.

> I'm looking for information on energy consumption vs percent utilization. 
> In other words if your datacenter consumes 720 MWh per month, yet on 
> average your servers are 98% underutilized, you are wasting a lot of 
> energy (a hot topic these days). Does anyone here have any real data on 
> this?

What does 98% underutilized mean?

What is the utilization of a device with fully built-out RAM that is used
to 100%, when the CPU is used only 2%?

What is the utilization of a system that uses two percent of the
memory and two percent of the available CPU time, when the policy
of the top-secret organization owning the system requires that the
application run on a separate machine?

Sure, many machines might be (computing-power-wise) able to
handle firewalling, routing, web serving, database serving, mail serving
and storing accounting data, but there might still be very good reasons to
separate these onto different machines.

If you take points like policy requirements (see above:
an application might by policy utilize a machine to 100%), different types
of resources, failover, etc. into account, you might end up
with different numbers than by just looking at the CPU (and I
have the feeling that is what you did or were intending to do).

Actually I think nobody calculates "real" utilization,
as there are a lot of soft factors to be taken into account.

Nils


Energy consumption vs % utilization?

2004-10-26 Thread Gregory (Grisha) Trubetskoy

Sorry, this is somewhat OT.
I'm looking for information on energy consumption vs percent utilization. 
In other words if your datacenter consumes 720 MWh per month, yet on 
average your servers are 98% underutilized, you are wasting a lot of 
energy (a hot topic these days). Does anyone here have any real data on 
this?

Grisha