Re: Energy Efficiency - Data Centers

2019-12-18 Thread Eric Kuhnke
The laws of thermodynamics dictate that nearly 100% of the electricity
consumed by a piece of equipment (let's use a high-powered 2RU router
as an example) comes off as heat, unless it's doing mechanical work
like lifting a load or spinning a fan. Some infinitesimal portion
leaves as photons down the fiber.



On Wed, Dec 18, 2019 at 6:58 AM Rod Beck 
wrote:

> Energy efficiency is a hobby of mine and most of my properties embody
> Passive House technology. This led me to wonder: what is the inefficiency of
> these servers in data centers? Every time I am in a data center I am
> impressed by how much heat comes off these semiconductor chips. It looks to
> me like maybe 60% of the electricity ends up as heat.
>
> Regards,
>
> Roderick.
>
> Roderick Beck
> VP of Business Development
>
> United Cable Company
>
> www.unitedcablecompany.com
>
> New York City & Budapest
>
> rod.b...@unitedcablecompany.com
>
> 36-70-605-5144
>
>
>


Re: Energy Efficiency - Data Centers

2019-12-18 Thread Thomas Bellman
On 2019-12-18 22:14 CET, Rod Beck wrote:

> Well, the fact that a data center generates a lot of heat means it is
> consuming a lot of electricity.

Indeed, they are consuming lots of electricity.  But it's easier to
measure that by putting an electricity meter on the incoming power
line (and the power company will insist on that anyway :-) than to
measure the heat given off.

> It is probably a major operating expense.

There's no "probably" about it.  It *is* a major operating expense.
When we bought our latest HPC clusters last year, the estimated cost
of power and cooling over five years, was ca 25% of the cost of the
clusters themselves (hardware, installation, and hardware support).
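As a back-of-the-envelope sketch, that ratio works out roughly like this.
Every number below is a hypothetical assumption for illustration only, not a
figure from this thread:

```python
# Back-of-the-envelope: 5-year power-and-cooling cost vs. hardware cost.
# All numbers are hypothetical assumptions for illustration only.
hardware_cost_eur = 2_000_000   # cluster hardware, installation, support
avg_it_load_kw = 150            # average electrical draw of the clusters
pue = 1.2                       # facility overhead (cooling, UPS, ...)
price_eur_per_kwh = 0.06        # electricity price

hours = 5 * 365 * 24                           # five years of operation
energy_kwh = avg_it_load_kw * pue * hours      # total facility energy
power_cost_eur = energy_kwh * price_eur_per_kwh

print(f"5-year power+cooling: {power_cost_eur:,.0f} EUR")
print(f"fraction of hardware cost: {power_cost_eur / hardware_cost_eur:.0%}")
```

With these made-up inputs the five-year energy bill lands at roughly a quarter
of the hardware cost, in the same ballpark as the figure quoted above.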

And yes, the cost of cooling is a large part of "power and cooling",
but it scales pretty linearly with electricity consumption, since
every joule of electricity consumed is one joule of heat that needs
to be removed.

> And while it may be efficient given current technology standards,
> it naturally leads to the question of how we can do better.

Absolutely.  But we should be looking not at heat dissipation, but at
power consumption.  Heat only comes into it for the cooling; but since
every joule of electricity consumed becomes one joule of heat to be
removed by cooling, they are one and the same.

> Thermodynamics is only part of the picture. The other part is
> economics. If you see a lot of heat being produced and it is not
> the intended output, then it is a natural focus for improvement.
> My guess is that a lot of corporate research is going into trying
> to reduce chip electricity consumption.

Intel, AMD, ARM, IBM (Power), they are all trying to squeeze out
more performance, less power usage, and more performance per watt.
This affects not only datacenter operating costs, but also things
like battery life in laptops, tablets and phones.  So are computer
manufacturers like HPE, Dell, SuperMicro, Apple, and so on (but they
are mostly beholden to the achievements of the CPU and RAM
manufacturers).

Datacenter operators are trying to lower their power consumption
by using more efficient UPS:es and cooling systems, and by buying
more efficient servers and network equipment.

(And as a slight aside, you can sometimes use the heat produced
by the datacenter, and removed by the cooling system, in useful
ways.  I know one DC that used its heat to melt snow in the
parking lot during winter.  I have heard of a DC that fed its
heat into a greenhouse next door.  And some people have managed
to heat their offices with warm water from their DC, although that
is generally not very easy, as the output water from DCs tends to
be too low in temperature to be used that way.)

> So my gut feeling might still be relevant. It is about the level
> of energy consumption, not just the fact that electricity becomes
> disorderly molecular gyrations.

Energy consumption is very important.  Efficiency is very important.
My point is that efficiency is measured in *utility* produced per
input power (or input euros, or input man-hours).  *In*efficiency
is, or at least should be, measured in needed input per utility
produced, not in heat left over afterwards.
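To make that concrete, here is a minimal sketch, with made-up performance and
power figures, comparing two hypothetical servers by utility per watt rather
than by heat output:

```python
# Compare two hypothetical servers by utility per watt (here: GFLOPS/W).
# Both turn essentially all of their input power into heat, so heat output
# alone says nothing about which one is more efficient.
servers = {
    "old server": {"gflops": 500,  "watts": 400},
    "new server": {"gflops": 3000, "watts": 350},
}
for name, spec in servers.items():
    eff = spec["gflops"] / spec["watts"]
    print(f"{name}: {eff:.2f} GFLOPS per watt")
```

The "new server" here does six times the work per joule while giving off
slightly less heat; the heat figures alone would barely distinguish them.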


/Bellman





Re: Energy Efficiency - Data Centers

2019-12-18 Thread Rod Beck
Well, the fact that a data center generates a lot of heat means it is consuming a
lot of electricity. It is probably a major operating expense. And while it may
be efficient given current technology standards, it naturally leads to the
question of how we can do better.

Thermodynamics is only part of the picture. The other part is economics. If you
see a lot of heat being produced and it is not the intended output, then it is
a natural focus for improvement. My guess is that a lot of corporate research is
going into trying to reduce chip electricity consumption.

So my gut feeling might still be relevant. It is about the level of energy
consumption, not just the fact that electricity becomes disorderly molecular
gyrations.




From: Thomas Bellman
Sent: Wednesday, December 18, 2019 9:57 PM
To: Nanog@nanog.org
Cc: Rod Beck
Subject: Re: Energy Efficiency - Data Centers

On 2019-12-18 20:06 CET, Rod Beck wrote:

> I was reasoning from the analogy that an incandescent bulb is less
> efficient than an LED bulb because it generates more heat - more
> of the electricity goes into the infrared spectrum than the useful
> visible spectrum. Similar to the way that an electric motor is more
> efficient than a combustion engine.

Still, you should not look at how much heat you get, but how much
utility you get.  Which for a lighting source would be measured in
lumens within the visible spectrum.

If you put 300 watts of electricity into a computer server, you
will get somewhere between 290 and 299 watts of heat from the server
itself.  The second largest power output will be the kinetic energy
of the air that the fans in the server push; I'm guesstimating that to
be somewhere between 1 and 10 watts (hence my uncertainty about the
direct heat output above).  Then you get maybe 0.1 watts of sound
energy (noise) and other vibrations in the rack.  And finally, less
than 0.01 watts of light in the network fibers from the server
(assuming dual 40G or dual 100G network connections, i.e. 8 lasers).
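The breakdown above is simple arithmetic; the split below is illustrative,
picking mid-range values from the estimates in this post:

```python
# Rough power budget of a 300 W server, using mid-range values from the
# estimates above (the exact split varies from machine to machine).
input_w   = 300.0
airflow_w = 5.0    # kinetic energy of air pushed by the fans (1-10 W range)
sound_w   = 0.1    # noise and other vibrations in the rack
light_w   = 0.01   # laser light sent into the network fibres
heat_w = input_w - airflow_w - sound_w - light_w
print(f"direct heat: {heat_w:.2f} W = {heat_w / input_w:.2%} of input")
```

Even before the pushed air itself turns into heat a few meters away, the
direct heat output is already about 98% of the input.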

Every microwatt of electricity put into the server to toggle
bits, keep bits at their current value, and transport bits within
and between CPU, RAM, motherboard, disks, and so on, will turn into
heat *before* leaving the server.  The only exception is the light
put into the network fibers, and that will be less than 10 milliwatts
for a server.

All inefficiencies in power supplies, power regulators, fans, and
other stuff in the server, will become heat, within the server.

So your estimate of 60% heat, i.e. 40% *non*-heat, is off by at
least a factor of ten.  And the majority of the kinetic energy of
the air pushed by the server will have turned into heat after just
a few meters...

So, if you look at how much heat is given off by a server compared
to how much power is put into it, then it is 99.99% inefficient. :-)

But that's just the wrong way to look at it.

In a lighting source, you can measure the amount of visible light
given off in watts.  In an engine (electrical, combustion, or
otherwise), you can measure the amount of output in watts.  So in
those cases, efficiency can be measured in percent, as the input and
the output are measured in the same units (watts).

But often a light source is better measured in lumens, not watts.
Sometimes, the torque, measured in Newton-meters, is more relevant
for an engine.  Or thrust, measured in Newtons, for a rocket engine.
Then dividing the output (lm, Nm, N) by the input (W) does not
give a percentage.

Similarly, the relevant output of a computer is not measured in
watts, but in FLOPS, database transactions/second, or web pages
served per hour.

Basically, the only time the amount of heat given off by a computer
is relevant, is when you are designing and dimensioning the cooling
system.  And then the answer is always "exactly as much as the power
you put *into* the computer". :-)


/Bellman



Re: Energy Efficiency - Data Centers

2019-12-18 Thread Thomas Bellman
On 2019-12-18 20:06 CET, Rod Beck wrote:

> I was reasoning from the analogy that an incandescent bulb is less
> efficient than an LED bulb because it generates more heat - more
> of the electricity goes into the infrared spectrum than the useful
> visible spectrum. Similar to the way that an electric motor is more
> efficient than a combustion engine.

Still, you should not look at how much heat you get, but how much
utility you get.  Which for a lighting source would be measured in
lumens within the visible spectrum.

If you put 300 watts of electricity into a computer server, you
will get somewhere between 290 and 299 watts of heat from the server
itself.  The second largest power output will be the kinetic energy
of the air that the fans in the server push; I'm guesstimating that to
be somewhere between 1 and 10 watts (hence my uncertainty about the
direct heat output above).  Then you get maybe 0.1 watts of sound
energy (noise) and other vibrations in the rack.  And finally, less
than 0.01 watts of light in the network fibers from the server
(assuming dual 40G or dual 100G network connections, i.e. 8 lasers).

Every microwatt of electricity put into the server to toggle
bits, keep bits at their current value, and transport bits within
and between CPU, RAM, motherboard, disks, and so on, will turn into
heat *before* leaving the server.  The only exception is the light
put into the network fibers, and that will be less than 10 milliwatts
for a server.

All inefficiencies in power supplies, power regulators, fans, and
other stuff in the server, will become heat, within the server.

So your estimate of 60% heat, i.e. 40% *non*-heat, is off by at
least a factor of ten.  And the majority of the kinetic energy of
the air pushed by the server will have turned into heat after just
a few meters...

So, if you look at how much heat is given off by a server compared
to how much power is put into it, then it is 99.99% inefficient. :-)

But that's just the wrong way to look at it.

In a lighting source, you can measure the amount of visible light
given off in watts.  In an engine (electrical, combustion, or
otherwise), you can measure the amount of output in watts.  So in
those cases, efficiency can be measured in percent, as the input and
the output are measured in the same units (watts).

But often a light source is better measured in lumens, not watts.
Sometimes, the torque, measured in Newton-meters, is more relevant
for an engine.  Or thrust, measured in Newtons, for a rocket engine.
Then dividing the output (lm, Nm, N) by the input (W) does not
give a percentage.

Similarly, the relevant output of a computer is not measured in
watts, but in FLOPS, database transactions/second, or web pages
served per hour.

Basically, the only time the amount of heat given off by a computer
is relevant, is when you are designing and dimensioning the cooling
system.  And then the answer is always "exactly as much as the power
you put *into* the computer". :-)


/Bellman





Re: Energy Efficiency - Data Centers

2019-12-18 Thread Damian Menscher via NANOG
On Wed, Dec 18, 2019 at 10:48 AM Thomas Bellman  wrote:

> On 2019-12-18 15:57, Rod Beck wrote:
>
> > This led me to wonder what is the inefficiency of these servers in data
> > centers. Every time I am in a data center I am impressed by how much
> > heat comes off these semiconductor chips. It looks to me like maybe 60%
> > of the electricity ends up as heat.
> What are you expecting the remaining 40% of the electricity ends up as?
>
> There is another efficiency number that many datacenters look at, which
> is PUE, Power Usage Effectiveness.  That is a measure of the total energy
> used by the DC compared to the energy used for "IT load".  The difference
> being in cooling/ventilation, UPS:es, lighting, and similar stuff.
> However, there are several deficiencies with this metric, for example:
>
>  - IT load is just watts (or joules) pushed into your servers, and does
>not account for if you are using old, inefficient Cray 1 machines or
>modern AMD EPYC / Intel Skylake PCs.
>
>  - Replace fans in servers with larger, more efficient fans in the rack
>doors, and the IT load decreases while the DC "losses" increase,
>leading to higher (worse) PUE, even though you might have lowered your
>total energy usage.
>
>  - Get your cooling water as district cooling instead of running your own
>chillers, and you are no longer using electricity for the chillers,
>improving your PUE.  There are still chillers run, using energy, but
>that energy does not show up on your DC's electricity bill...
>
> This doesn't mean that the PUE value is *entirely* worthless.  It did
> help in putting efficiency into focus.  There used to be datacenters
> that had PUE numbers close to, or even over, 2.0, due to having horribly
> inefficient cooling systems, UPS:es and so on.  But once you get down
> to the 1.2-1.3 range or below, you really need to look at the details
> of *how* the DC achieved the PUE number; a single number doesn't capture
> the nuances.
>

Google has some information on PUE at
https://www.google.com/about/datacenters/efficiency/ -- the tl;dr is that
we have a datacenter PUE of 1.06, and a campus (including power substation)
PUE of 1.11.  By comparison, most large datacenters average around 1.67.
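PUE is just a ratio of total facility energy to IT-load energy; a minimal
sketch with the numbers quoted above (the 1000 kW IT load is an arbitrary
illustrative value):

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT load energy.
# The 1000 kW IT load below is an arbitrary illustrative assumption.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

print(pue(1060.0, 1000.0))  # an efficient facility: PUE ~ 1.06
print(pue(1670.0, 1000.0))  # a typical large DC:    PUE ~ 1.67
```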

Damian


Re: Energy Efficiency - Data Centers

2019-12-18 Thread Rod Beck
No doubt. Not trying to repeal the second law of thermodynamics. 

I visited Boltzmann's grave in Vienna and this equation was on it: S = k · log W.
Would not want to disturb his sleep.


From: Ben Cannon 
Sent: Wednesday, December 18, 2019 8:11 PM
To: Rod Beck 
Cc: Thomas Bellman ; NANOG Operators' Group 

Subject: Re: Energy Efficiency - Data Centers

It is overwhelmingly disposed of as heat; even the useful work ends up as heat.
The amount of energy leaving a DC in fiber cables, etc., is perhaps a millionth
of one percent.

Even in your lightbulb example, if the light is used inside a room, it gets 
turned back into heat once it hits the walls.

So in a closed system, it’s all heat.

Now, power is lost before it can be used for compute/routing, mostly in power
conversions, of which there are many in most DCs.  Companies like Facebook and
Amazon have done a lot of work to remove excess power conversion steps, to
chase better PUE (Power Usage Effectiveness) and get more electricity to the
computers before losing it as excess heat in voltage conversions.  There's
still room for improvement here, and the power wasted here goes directly to
heat before doing any other useful work.

Source: I have a C-20 HVAC license and own and operate 2 datacenters.

-Ben.


-Ben Cannon
CEO 6x7 Networks & 6x7 Telecom, LLC
b...@6by7.net



On Dec 18, 2019, at 11:06 AM, Rod Beck <rod.b...@unitedcablecompany.com> wrote:

I was reasoning from the analogy that an incandescent bulb is less efficient
than an LED bulb because it generates more heat - more of the electricity
goes into the infrared spectrum than the useful visible spectrum. Similar to
the way that an electric motor is more efficient than a combustion engine.




From: Thomas Bellman
Sent: Wednesday, December 18, 2019 7:47 PM
To: Nanog@nanog.org
Cc: Rod Beck
Subject: Re: Energy Efficiency - Data Centers

On 2019-12-18 15:57, Rod Beck wrote:

> This led me to wonder what is the inefficiency of these servers in data
> centers. Every time I am in a data center I am impressed by how much heat
> comes off these semiconductor chips. It looks to me like maybe 60% of the
> electricity ends up as heat.
What are you expecting the remaining 40% of the electricity ends up as?

In reality, at least 99% of the electricity input to a datacenter ends up
as heat within the DC.  The remaining <1% turns into things like:

 - electricity and light leaving the DC in network cables (but will
   turn into heat in the cable and at the receiving end)
 - sound energy (noise) that escapes the DC building (but will turn
   into heat later on as the sound attenuates)
 - electric and magnetic potential energy in the form of stored bits
   on flash memory, hard disks and tapes (but that will turn into heat
   as you store new bits over the old bits)

(I'm saying <1%, but I'm actually expecting it to be *much* less than
one percent.)

This is basic physics.  First law of thermodynamics: you can't destroy
(or create) energy, just convert it.  Second law: all energy turns into
heat energy in the end. :-)


You are really asking the wrong question.  Efficiency is not measured
in how little of the input energy is turned into heat, but in how much
*utility* you get out of a certain amount of input energy.  In the case
of a datacenter, utility might be measured in number of database
transactions performed, floating point operations executed, scientific
articles published in Nature (by academic researchers using your HPC
datacenter), or advertisements pushed to the users of your search engine.


There is another efficiency number that many datacenters look at, which
is PUE, Power Usage Effectiveness.  That is a measure of the total energy
used by the DC compared to the energy used for "IT load".  The difference
being in cooling/ventilation, UPS:es, lighting, and similar stuff.
However, there are several deficiencies with this metric, for example:

 - IT load is just watts (or joules) pushed into your servers, and does
   not account for if you are using old, inefficient Cray 1 machines or
   modern AMD EPYC / Intel Skylake PCs.

 - Replace fans in servers with larger, more efficient fans in the rack
   doors, and the IT load decreases while the DC "losses" increase,
   leading to higher (worse) PUE, even though you might have lowered your
   total energy usage.

 - Get your cooling water as district cooling instead of running your own
   chillers, and you are no longer using electricity for the chillers,
   improving your PUE.  There are still chillers run, using energy, but
   that energy does not show up on your DC's electricity bill...

This doesn't mean that the PUE value is *entirely* worthless.  It did
help in putting efficiency into focus.  There used to be datacenters
that had PUE numbers close to, or even over, 2.0, due to having horribly
inefficient cooling systems, UPS:es and so on.  But once you get down
to the 1.2-1.3 range or below, you really need to look at the details
of *how* the DC achieved the PUE number; a single number doesn't capture
the nuances.

Re: Energy Efficiency - Data Centers

2019-12-18 Thread Ben Cannon
It is overwhelmingly disposed of as heat; even the useful work ends up as heat.
The amount of energy leaving a DC in fiber cables, etc., is perhaps a millionth
of one percent.

Even in your lightbulb example, if the light is used inside a room, it gets 
turned back into heat once it hits the walls. 

So in a closed system, it’s all heat.   

Now, power is lost before it can be used for compute/routing, mostly in power
conversions, of which there are many in most DCs.  Companies like Facebook and
Amazon have done a lot of work to remove excess power conversion steps, to
chase better PUE (Power Usage Effectiveness) and get more electricity to the
computers before losing it as excess heat in voltage conversions.  There's
still room for improvement here, and the power wasted here goes directly to
heat before doing any other useful work.

Source: I have a C-20 HVAC license and own and operate 2 datacenters.

-Ben.


-Ben Cannon
CEO 6x7 Networks & 6x7 Telecom, LLC 
b...@6by7.net




> On Dec 18, 2019, at 11:06 AM, Rod Beck  
> wrote:
> 
> I was reasoning from the analogy that an incandescent bulb is less efficient
> than an LED bulb because it generates more heat - more of the electricity
> goes into the infrared spectrum than the useful visible spectrum. Similar to
> the way that an electric motor is more efficient than a combustion engine.
> 
> 
> 
> From: Thomas Bellman
> Sent: Wednesday, December 18, 2019 7:47 PM
> To: Nanog@nanog.org
> Cc: Rod Beck
> Subject: Re: Energy Efficiency - Data Centers
> 
> On 2019-12-18 15:57, Rod Beck wrote:
> 
> > This led me to wonder what is the inefficiency of these servers in data
> > centers. Every time I am in a data center I am impressed by how much heat
> > comes off these semiconductor chips. It looks to me like maybe 60% of the
> > electricity ends up as heat.
> What are you expecting the remaining 40% of the electricity ends up as?
> 
> In reality, at least 99% of the electricity input to a datacenter ends up
> as heat within the DC.  The remaining <1% turns into things like:
> 
>  - electricity and light leaving the DC in network cables (but will
>turn into heat in the cable and at the receiving end)
>  - sound energy (noise) that escapes the DC building (but will turn
>into heat later on as the sound attenuates)
>  - electric and magnetic potential energy in the form of stored bits
>on flash memory, hard disks and tapes (but that will turn into heat
>as you store new bits over the old bits)
> 
> (I'm saying <1%, but I'm actually expecting it to be *much* less than
> one percent.)
> 
> This is basic physics.  First law of thermodynamics: you can't destroy
> (or create) energy, just convert it.  Second law: all energy turns into
> heat energy in the end. :-)
> 
> 
> You are really asking the wrong question.  Efficiency is not measured
> in how little of the input energy is turned into heat, but in how much
> *utility* you get out of a certain amount of input energy.  In the case
> of a datacenter, utility might be measured in number of database
> transactions performed, floating point operations executed, scientific
> articles published in Nature (by academic researchers using your HPC
> datacenter), or advertisements pushed to the users of your search engine.
> 
> 
> There is another efficiency number that many datacenters look at, which
> is PUE, Power Usage Effectiveness.  That is a measure of the total energy
> used by the DC compared to the energy used for "IT load".  The difference
> being in cooling/ventilation, UPS:es, lighting, and similar stuff.
> However, there are several deficiencies with this metric, for example:
> 
>  - IT load is just watts (or joules) pushed into your servers, and does
>not account for if you are using old, inefficient Cray 1 machines or
>modern AMD EPYC / Intel Skylake PCs.
> 
>  - Replace fans in servers with larger, more efficient fans in the rack
>doors, and the IT load decreases while the DC "losses" increase,
>leading to higher (worse) PUE, even though you might have lowered your
>total energy usage.
> 
>  - Get your cooling water as district cooling instead of running your own
>chillers, and you are no longer using electricity for the chillers,
>improving your PUE.  There are still chillers run, using energy, but
>that energy does not show up on your DC's electricity bill...
> 
> This doesn't mean that the PUE value is *entirely* worthless.  It did
> help in putting efficiency into focus.  There used to be datacenters
> that had PUE numbers close to, or even over, 2.0, due to having horribly
> inefficient cooling systems, UPS:es and so on.  But once you get down
> to the 1.2-1.3 range or below, you really need to look at the details
> of *how* the DC achieved the PUE number; a single number doesn't capture
> the nuances.
> 
> 
> /Bellman



Re: Energy Efficiency - Data Centers

2019-12-18 Thread Rod Beck
I was reasoning from the analogy that an incandescent bulb is less efficient
than an LED bulb because it generates more heat - more of the electricity
goes into the infrared spectrum than the useful visible spectrum. Similar to
the way that an electric motor is more efficient than a combustion engine.




From: Thomas Bellman
Sent: Wednesday, December 18, 2019 7:47 PM
To: Nanog@nanog.org
Cc: Rod Beck
Subject: Re: Energy Efficiency - Data Centers

On 2019-12-18 15:57, Rod Beck wrote:

> This led me to wonder what is the inefficiency of these servers in data
> centers. Every time I am in a data center I am impressed by how much heat
> comes off these semiconductor chips. It looks to me like maybe 60% of the
> electricity ends up as heat.
What are you expecting the remaining 40% of the electricity ends up as?

In reality, at least 99% of the electricity input to a datacenter ends up
as heat within the DC.  The remaining <1% turns into things like:

 - electricity and light leaving the DC in network cables (but will
   turn into heat in the cable and at the receiving end)
 - sound energy (noise) that escapes the DC building (but will turn
   into heat later on as the sound attenuates)
 - electric and magnetic potential energy in the form of stored bits
   on flash memory, hard disks and tapes (but that will turn into heat
   as you store new bits over the old bits)

(I'm saying <1%, but I'm actually expecting it to be *much* less than
one percent.)

This is basic physics.  First law of thermodynamics: you can't destroy
(or create) energy, just convert it.  Second law: all energy turns into
heat energy in the end. :-)


You are really asking the wrong question.  Efficiency is not measured
in how little of the input energy is turned into heat, but in how much
*utility* you get out of a certain amount of input energy.  In the case
of a datacenter, utility might be measured in number of database
transactions performed, floating point operations executed, scientific
articles published in Nature (by academic researchers using your HPC
datacenter), or advertisements pushed to the users of your search engine.


There is another efficiency number that many datacenters look at, which
is PUE, Power Usage Effectiveness.  That is a measure of the total energy
used by the DC compared to the energy used for "IT load".  The difference
being in cooling/ventilation, UPS:es, lighting, and similar stuff.
However, there are several deficiencies with this metric, for example:

 - IT load is just watts (or joules) pushed into your servers, and does
   not account for if you are using old, inefficient Cray 1 machines or
   modern AMD EPYC / Intel Skylake PCs.

 - Replace fans in servers with larger, more efficient fans in the rack
   doors, and the IT load decreases while the DC "losses" increase,
   leading to higher (worse) PUE, even though you might have lowered your
   total energy usage.

 - Get your cooling water as district cooling instead of running your own
   chillers, and you are no longer using electricity for the chillers,
   improving your PUE.  There are still chillers run, using energy, but
   that energy does not show up on your DC's electricity bill...

This doesn't mean that the PUE value is *entirely* worthless.  It did
help in putting efficiency into focus.  There used to be datacenters
that had PUE numbers close to, or even over, 2.0, due to having horribly
inefficient cooling systems, UPS:es and so on.  But once you get down
to the 1.2-1.3 range or below, you really need to look at the details
of *how* the DC achieved the PUE number; a single number doesn't capture
the nuances.


/Bellman



Re: Energy Efficiency - Data Centers

2019-12-18 Thread Thomas Bellman
On 2019-12-18 15:57, Rod Beck wrote:

> This led me to wonder what is the inefficiency of these servers in data
> centers. Every time I am in a data center I am impressed by how much heat
> comes off these semiconductor chips. It looks to me like maybe 60% of the
> electricity ends up as heat.
What are you expecting the remaining 40% of the electricity ends up as?

In reality, at least 99% of the electricity input to a datacenter ends up
as heat within the DC.  The remaining <1% turns into things like:

 - electricity and light leaving the DC in network cables (but will
   turn into heat in the cable and at the receiving end)
 - sound energy (noise) that escapes the DC building (but will turn
   into heat later on as the sound attenuates)
 - electric and magnetic potential energy in the form of stored bits
   on flash memory, hard disks and tapes (but that will turn into heat
   as you store new bits over the old bits)

(I'm saying <1%, but I'm actually expecting it to be *much* less than
one percent.)

This is basic physics.  First law of thermodynamics: you can't destroy
(or create) energy, just convert it.  Second law: all energy turns into
heat energy in the end. :-)


You are really asking the wrong question.  Efficiency is not measured
in how little of the input energy is turned into heat, but in how much
*utility* you get out of a certain amount of input energy.  In the case
of a datacenter, utility might be measured in number of database
transactions performed, floating point operations executed, scientific
articles published in Nature (by academic researchers using your HPC
datacenter), or advertisements pushed to the users of your search engine.


There is another efficiency number that many datacenters look at, which
is PUE, Power Usage Effectiveness.  That is a measure of the total energy
used by the DC compared to the energy used for "IT load".  The difference
being in cooling/ventilation, UPS:es, lighting, and similar stuff.
However, there are several deficiencies with this metric, for example:

 - IT load is just watts (or joules) pushed into your servers, and does
   not account for if you are using old, inefficient Cray 1 machines or
   modern AMD EPYC / Intel Skylake PCs.

 - Replace fans in servers with larger, more efficient fans in the rack
   doors, and the IT load decreases while the DC "losses" increase,
   leading to higher (worse) PUE, even though you might have lowered your
   total energy usage.

 - Get your cooling water as district cooling instead of running your own
   chillers, and you are no longer using electricity for the chillers,
   improving your PUE.  There are still chillers run, using energy, but
   that energy does not show up on your DC's electricity bill...

This doesn't mean that the PUE value is *entirely* worthless.  It did
help in putting efficiency into focus.  There used to be datacenters
that had PUE numbers close to, or even over, 2.0, due to having horribly
inefficient cooling systems, UPS:es and so on.  But once you get down
to the 1.2-1.3 range or below, you really need to look at the details
of *how* the DC achieved the PUE number; a single number doesn't capture
the nuances.


/Bellman





Re: Energy Efficiency - Data Centers

2019-12-18 Thread me...@fiberhood.nl
In our current project we deploy a distributed datacenter in people's homes with
120 Gbps fiber. We connect the water cooling of the CPUs and GPUs directly into
the heating of homes and offices. Power comes from $0.02 per kWh solar and wind
in the neighbourhood. The waste heat is not wasted; it is sold to the homes and
offices. This saves up to 95% in energy at 30% of the capex.


> On 18 Dec 2019, at 18:10, William Herrin  wrote:
> 
> On Wed, Dec 18, 2019 at 8:32 AM me...@fiberhood.nl  wrote:
>> The full talk by Amory Lovins of the Rocky Mountain Institute: 
>> https://youtu.be/wY_js13AuRk?t=1343
> 
> Hi Merik,
> 
> This aligns with what I'd expect. Essentially every watt of
> electricity in to the data center is a watt of heat that must be
> removed from the data center. Did you know some computer room air
> conditioners actually cool the air at fixed compression and then
> re-heat it with a resistive electric element to reach the desired
> cooling output? Insane!
> 
> Regards,
> Bill Herrin
> 
> 
> -- 
> William Herrin
> b...@herrin.us
> https://bill.herrin.us/



Re: Energy Efficiency - Data Centers

2019-12-18 Thread Rod Beck
I guess that is one reason why Google built a huge data center in Finland. 
Access to very cool water. Not to mention good wholesale electricity rates. And 
yes, since the electricity is not converted into mechanical work, it must all 
end up as heat.

Regards,

Roderick.


From: William Herrin 
Sent: Wednesday, December 18, 2019 6:10 PM
To: me...@fiberhood.nl 
Cc: Rod Beck ; nanog@nanog.org 

Subject: Re: Energy Efficiency - Data Centers

On Wed, Dec 18, 2019 at 8:32 AM me...@fiberhood.nl  wrote:
> The full talk by Amory Lovins of the Rocky Mountain Institute: 
> https://youtu.be/wY_js13AuRk?t=1343

Hi Merik,

This aligns with what I'd expect. Essentially every watt of
electricity in to the data center is a watt of heat that must be
removed from the data center. Did you know some computer room air
conditioners actually cool the air at fixed compression and then
re-heat it with a resistive electric element to reach the desired
cooling output? Insane!

Regards,
Bill Herrin


--
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: Energy Efficiency - Data Centers

2019-12-18 Thread William Herrin
On Wed, Dec 18, 2019 at 8:32 AM me...@fiberhood.nl  wrote:
> The full talk by Amory Lovins of the Rocky Mountain Institute: 
> https://youtu.be/wY_js13AuRk?t=1343

Hi Merik,

This aligns with what I'd expect. Essentially every watt of
electricity in to the data center is a watt of heat that must be
removed from the data center. Did you know some computer room air
conditioners actually cool the air at fixed compression and then
re-heat it with a resistive electric element to reach the desired
cooling output? Insane!

Regards,
Bill Herrin


-- 
William Herrin
b...@herrin.us
https://bill.herrin.us/


Re: Energy Efficiency - Data Centers

2019-12-18 Thread me...@fiberhood.nl
> On 18 Dec 2019, at 15:57, Rod Beck  wrote:
> 
> Energy efficiency is a hobby of mine and most of my properties embody
> Passive House technology. This led me to wonder: what is the inefficiency of
> these servers in data centers? Every time I am in a data center I am
> impressed by how much heat comes off these semiconductor chips. It looks to
> me like maybe 60% of the electricity ends up as heat.

Less than a 100,000th of the energy in a data center is used to run the
applications, as summarised in this graph:


The full talk by Amory Lovins of the Rocky Mountain Institute: 
https://youtu.be/wY_js13AuRk?t=1343 

My research group has come up with supporting evidence for these claims. Our 
Wafer Scale Integration and new operating system software can actually achieve 
these savings.

Merik Voswinkel
Metamorph research institute





Re: Energy Efficiency - Data Centers

2019-12-18 Thread Ethan O'Toole

> Passive House Technology. This led me to wonder what is the inefficiency of
> these servers in data centers. Every time I am in a data center I am


Probably all the bad software.

- Ethan