Re: Colocation in the US.

2007-01-25 Thread Marshall Eubanks



On Jan 25, 2007, at 3:56 PM, Warren Kumari wrote:




On Jan 25, 2007, at 12:49 PM, Warren Kumari wrote:

The main issue with Fluorinert is price -- I wanted some to cool a
20W IR laser -- I didn't spend that much time looking before I
just decided to switch to distilled water, but I was finding
prices like >$300 for a 1-liter bottle
(http://www.parallax-tech.com/fluorine.htm). I did find some cheaper
"recycled" Fluorinert, but it wasn't *that* much cheaper.


I don't remember who made them, but the same laser had these  
really neat plumbing connections


Doh, 10 seconds after hitting send it occurred to me that some sort
of Internet search thingie might help with this -- looking for
"liquid disconnect" found them for me --
http://www.micromatic.com/draft-keg-beer/fittings-pid-60600.html --
even better, it seems that after your datacenter shuts down you can
reuse the connectors for your draft keg! :-)


Thereby giving new meaning to "beer and gear."




W

-- very similar to the air hose connectors on air compressors  --  
there is a nipple that snaps into a female connector. The nipple  
pushes in a pin when it snaps in and allows the liquid to start  
flowing. When you disconnect the connector the liquid flow shuts  
off and you get maybe half a teaspoon of leakage.


W

P.S.: Sorry if I tripped anyone's HR policies for NSFW content :-)

On Jan 25, 2007, at 12:01 PM, John Curran wrote:



At 3:49 PM -0800 1/24/07, Mike Lyon wrote:
I think if someone finds a workable non-conductive cooling fluid that
would probably be the best thing. I fear the first time someone is
working near their power outlets and water starts squirting, flooding
and electrocuting everyone and everything.


http://en.wikipedia.org/wiki/Fluorinert

/John



--
"He who laughs last, thinks slowest."
-- Anonymous




--
"Real children don't go hoppity-skip unless they are on drugs."

-- Susan, the ultimate sensible governess (Terry Pratchett,  
Hogfather)









Re: Colocation in the US.

2007-01-25 Thread Warren Kumari



On Jan 25, 2007, at 12:49 PM, Warren Kumari wrote:

The main issue with Fluorinert is price -- I wanted some to cool a
20W IR laser -- I didn't spend that much time looking before I just
decided to switch to distilled water, but I was finding prices like
>$300 for a 1-liter bottle (http://www.parallax-tech.com/fluorine.htm).
I did find some cheaper "recycled" Fluorinert, but it wasn't *that*
much cheaper.


I don't remember who made them, but the same laser had these really  
neat plumbing connections


Doh, 10 seconds after hitting send it occurred to me that some sort
of Internet search thingie might help with this -- looking for
"liquid disconnect" found them for me --
http://www.micromatic.com/draft-keg-beer/fittings-pid-60600.html --
even better, it seems that after your datacenter shuts down you can
reuse the connectors for your draft keg! :-)


W

-- very similar to the air hose connectors on air compressors  --  
there is a nipple that snaps into a female connector. The nipple  
pushes in a pin when it snaps in and allows the liquid to start  
flowing. When you disconnect the connector the liquid flow shuts  
off and you get maybe half a teaspoon of leakage.


W

P.S.: Sorry if I tripped anyone's HR policies for NSFW content :-)

On Jan 25, 2007, at 12:01 PM, John Curran wrote:



At 3:49 PM -0800 1/24/07, Mike Lyon wrote:
I think if someone finds a workable non-conductive cooling fluid that
would probably be the best thing. I fear the first time someone is
working near their power outlets and water starts squirting, flooding
and electrocuting everyone and everything.


http://en.wikipedia.org/wiki/Fluorinert

/John



--
"He who laughs last, thinks slowest."
-- Anonymous




--
"Real children don't go hoppity-skip unless they are on drugs."

-- Susan, the ultimate sensible governess (Terry Pratchett,  
Hogfather)







Re: Colocation in the US.

2007-01-25 Thread Warren Kumari


The main issue with Fluorinert is price -- I wanted some to cool a
20W IR laser -- I didn't spend that much time looking before I just
decided to switch to distilled water, but I was finding prices like
>$300 for a 1-liter bottle (http://www.parallax-tech.com/fluorine.htm).
I did find some cheaper "recycled" Fluorinert, but it wasn't *that*
much cheaper.


I don't remember who made them, but the same laser had these really  
neat plumbing connections -- very similar to the air hose connectors  
on air compressors  -- there is a nipple that snaps into a female  
connector. The nipple pushes in a pin when it snaps in and allows the  
liquid to start flowing. When you disconnect the connector the liquid  
flow shuts off and you get maybe half a teaspoon of leakage.


W

P.S.: Sorry if I tripped anyone's HR policies for NSFW content :-)

On Jan 25, 2007, at 12:01 PM, John Curran wrote:



At 3:49 PM -0800 1/24/07, Mike Lyon wrote:

I think if someone finds a workable non-conductive cooling fluid that
would probably be the best thing. I fear the first time someone is
working near their power outlets and water starts squirting, flooding
and electrocuting everyone and everything.


http://en.wikipedia.org/wiki/Fluorinert

/John



--
"He who laughs last, thinks slowest."
-- Anonymous




Re: Colocation in the US.

2007-01-25 Thread Sean Donelan


On Thu, 25 Jan 2007, Bill Woodcock wrote:
Obviously convection is the best way, and I've gotten away with it a 
few times myself, but the usual answer to your "why not" question is 
"Fire codes."  Convection drives the intensity and spread of fires. 
Which is what furnace chimneys are for.  Thus all the controls on plenum 
spaces.  But when you can get away with it, it's great.


Lots of codes, energy conservation codes, fire codes, building codes, etc.

Although people may gripe about power and cooling, one fire can really 
ruin your day.  Energy can turn into heat in either controlled or 
uncontrolled ways.


Anyone who is interested should look at the conference proceedings and
papers published by ASHRAE.  There was a presentation a few years ago which
explained why Equinix, although I don't think it used the name, has those 
30-foot-high ceilings painted black :-)  Some of the energy conservation
codes cost energy when applied to areas beyond their design assumptions.


I noticed AT&T changed its standards.  NEBS still applies in COs, but AT&T 
has published new standards for what they call Internet data spaces. 
Nothing earthshaking, but it was interesting to see some of AT&T's risk
assessments change.


As I suggested years ago, equipment design, cooling design, power design, 
geography design are all inter-related.  Which is the premium for you?


Re: Colocation in the US.

2007-01-25 Thread John Curran

At 3:49 PM -0800 1/24/07, Mike Lyon wrote:
>I think if someone finds a workable non-conductive cooling fluid that
>would probably be the best thing. I fear the first time someone is
>working near their power outlets and water starts squirting, flooding
>and electrocuting everyone and everything.

http://en.wikipedia.org/wiki/Fluorinert

/John


Re: Colocation in the US.

2007-01-25 Thread Bill Woodcock
Obviously convection is the best way, and I've gotten away with it a few times 
myself, but the usual answer to your "why not" question is "Fire codes."  
Convection drives the intensity and spread of fires.   Which is what furnace 
chimneys are for.  Thus all the controls on plenum spaces.  But when you can 
get away with it, it's great. 

 -Bill

Please excuse the brevity of this message; I typed it on my pager. I could be 
more loquacious, but then I'd crash my car.

-Original Message-
From: Paul Vixie <[EMAIL PROTECTED]>
Date: Thu, 25 Jan 2007 14:44:21 
To: nanog@merit.edu
Subject: Re: Colocation in the US.


> How long before we rediscover the smokestack? After all, a colo is an
> industrial facility.  A cellar beneath, a tall stack on top, and let physics
> do the rest.

odd that you should say that.  when building out in a warehouse with 28 foot
ceilings, i've just spec'd raised floor (which i usually hate, but it's safe
if you screw all the tiles down) with horizontal cold air input, and return
air to be taken from the ceiling level.  i agree that it would be lovely to
just vent the hot air straight out and pull all new air rather than just 
make up air from some kind of ground-level outside source... but then i'd
have to run the dehumidifier on a 100% duty cycle.  so it's 20% make up air
like usual.  but i agree, use the physics.  convected air can gather speed,
and i'd rather pull it down than suck it up.  woefully do i recall the times
i've built out under t-bar.  hot aisles, cold aisles.  gack.

> Anyway, "RJ45 for Water" is a cracking idea.  I wouldn't be surprised if
> there aren't already standardised pipe connectors in use elsewhere - perhaps
> the folks on NAWOG (North American Water Operators Group) could help?  Or
> alt.plumbers.pipe? But seriously folks, if the plumbers don't have that,
> then other people who use a lot of flexible pipework might.  Medical,
> automotive, or aerospace come to mind.

the wonderful thing about standards is, there are so many to choose from.
knuerr didn't invent the fittings they're using, but, i'll betcha they aren't
the same as the fittings used by any of their competitors.  not yet anyway.

> All I can think of about that link is a voice saying "Genius - or Madman?"

this thread was off topic until you said that.


Re: Colocation in the US.

2007-01-25 Thread Alexander Harrowell


On 1/25/07, Paul Vixie <[EMAIL PROTECTED]> wrote:


> How long before we rediscover the smokestack? After all, a colo is an
> industrial facility.  A cellar beneath, a tall stack on top, and let physics
> do the rest.

odd that you should say that.  when building out in a warehouse with 28 foot
ceilings, i've just spec'd raised floor (which i usually hate, but it's safe
if you screw all the tiles down) with horizontal cold air input, and return
air to be taken from the ceiling level.  i agree that it would be lovely to
just vent the hot air straight out and pull all new air rather than just
make up air from some kind of ground-level outside source... but then i'd
have to run the dehumidifier on a 100% duty cycle.  so it's 20% make up air
like usual.  but i agree, use the physics.  convected air can gather speed,
and i'd rather pull it down than suck it up.  woefully do i recall the times
i've built out under t-bar.  hot aisles, cold aisles.  gack.


Seriously - all those big old mills that got turned into posh
apartments for the CEO's son. Eight floors of data centre and a 200
foot high stack, and usually an undercroft as the cold-source. And
usually loads of conduit everywhere for the cat5 and power. (In the UK
a lot of them are next to a canal, but I doubt greens would let you
get away with dumping hot water.)


Re: Colocation in the US.

2007-01-25 Thread Paul Vixie

> How long before we rediscover the smokestack? After all, a colo is an
> industrial facility.  A cellar beneath, a tall stack on top, and let physics
> do the rest.

odd that you should say that.  when building out in a warehouse with 28 foot
ceilings, i've just spec'd raised floor (which i usually hate, but it's safe
if you screw all the tiles down) with horizontal cold air input, and return
air to be taken from the ceiling level.  i agree that it would be lovely to
just vent the hot air straight out and pull all new air rather than just 
make up air from some kind of ground-level outside source... but then i'd
have to run the dehumidifier on a 100% duty cycle.  so it's 20% make up air
like usual.  but i agree, use the physics.  convected air can gather speed,
and i'd rather pull it down than suck it up.  woefully do i recall the times
i've built out under t-bar.  hot aisles, cold aisles.  gack.

> Anyway, "RJ45 for Water" is a cracking idea.  I wouldn't be surprised if
> there aren't already standardised pipe connectors in use elsewhere - perhaps
> the folks on NAWOG (North American Water Operators Group) could help?  Or
> alt.plumbers.pipe? But seriously folks, if the plumbers don't have that,
> then other people who use a lot of flexible pipework might.  Medical,
> automotive, or aerospace come to mind.

the wonderful thing about standards is, there are so many to choose from.
knuerr didn't invent the fittings they're using, but, i'll betcha they aren't
the same as the fittings used by any of their competitors.  not yet anyway.

> All I can think of about that link is a voice saying "Genius - or Madman?"

this thread was off topic until you said that.


Re: Colocation in the US.

2007-01-25 Thread Alexander Harrowell


How long before we rediscover the smokestack? After all, a colo is an
industrial facility. A cellar beneath, a tall stack on top, and let
physics do the rest.

Anyway, "RJ45 for Water" is a cracking idea. I wouldn't be surprised
if there aren't already standardised pipe connectors in use elsewhere
- perhaps the folks on NAWOG (North American Water Operators Group)
could help? Or alt.plumbers.pipe? But seriously folks, if the plumbers
don't have that, then other people who use a lot of flexible pipework
might. Medical, automotive, or aerospace come to mind.

All I can think of about that link is a voice saying "Genius - or Madman?"


Re: Colocation in the US.

2007-01-24 Thread Paul Vixie

> If you have water for the racks:

we've all gotta have water for the chillers. (compressors pull too much power,
gotta use cooling towers outside.)

> http://www.knuerr.com/web/en/index_e.html?products/miracel/cooltherm/cooltherm.html~mainFrame

i love knuerr's stuff.  and with mainframes or blade servers or any other
specialized equipment that has to come all the way down when it's maintained,
it's a fine solution.  but if you need a tech to work on the rack for an
hour, because the rack is full of general purpose 1U's, and you can't do it
because you can't leave the door open that long, then internal heat exchangers
are the wrong solution.

knuerr also makes what they call a "CPU cooler" which adds a top-to-bottom
liquid manifold system for cold and return water, and offers connections to
multiple devices in the rack.  by collecting the heat directly through paste
and aluminum and liquid, and not depending on moving-air, huge efficiency 
gains are possible.  and you can dispatch a tech for hours on end without
having to power off anything in the rack except whatever's being serviced.
note that by "CPU" they mean "rackmount server" in nanog terminology.  CPU's
are not the only source of heat, by a long shot.  knuerr's stuff is expensive
and there's no standard for it so you need knuerr-compatible servers so far.

i envision a stage in the development of 19-inch rack mount stuff, where in
addition to console (serial for me, KVM for everybody else), power, ethernet,
and IPMI or ILO or whatever, there are two new standard connectors on the
back of every server, and we've all got boxes of standard pigtails to connect
them to the rack.  one will be cold water, the other will be return water.
note that when i rang this bell at MFN in 2001, there was no standard nor any
hope of a standard.  today there's still no standard but there IS hope for one.

> (there are other vendors too, of course)

somehow we've got standards for power, ethernet, serial, and KVM.  we need
a standard for cold and return water.  then server vendors can use conduction
and direct transfer rather than forced air and convection.  between all the
fans in the boxes and all the motors in the chillers and condensers and
compressors, we probably cause 60% of datacenter related carbon for cooling.
with just cooling towers and pumps it ought to be more like 15%.  maybe
google will decide that a 50% savings on their power bill (or 50% more
computes per hydroelectric dam) is worth sinking some leverage into this.
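
As a rough sketch of the arithmetic behind those percentages: the 60% and
15% cooling shares are taken from the paragraph above, while the IT load
and everything else in the Python below are just illustrative assumptions.

# If cooling (fans, chillers, compressors) is ~60% of the facility's energy
# today, and a towers-and-pumps design could get it down to ~15%, how much
# of the total power bill disappears?  IT load is held constant.

def total_power(it_load_kw: float, cooling_fraction: float) -> float:
    """Total facility draw when cooling is the given fraction of the total."""
    # it_load = (1 - cooling_fraction) * total  =>  total = it_load / (1 - f)
    return it_load_kw / (1.0 - cooling_fraction)

it_load_kw = 1000.0                      # arbitrary IT load for illustration
today = total_power(it_load_kw, 0.60)    # ~2500 kW total
better = total_power(it_load_kw, 0.15)   # ~1176 kW total
savings = 1.0 - better / today           # ~0.53

print(f"today: {today:.0f} kW, towers+pumps: {better:.0f} kW, "
      f"savings: {savings:.0%}")

With the IT load held constant, moving cooling from 60% of the total to 15%
cuts the total draw by roughly half, which is where the 50% figure comes from.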

> http://www.spraycool.com/technology/index.asp

that's just creepy.  safe, i'm sure, but i must be old, because it's creepy.


Re: Colocation in the US.

2007-01-24 Thread Tony Varriale


How about CO2?

tv
- Original Message - 
From: "Mike Lyon" <[EMAIL PROTECTED]>

To: "Brandon Galbraith" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; "Paul Vixie" <[EMAIL PROTECTED]>; 
Sent: Wednesday, January 24, 2007 5:49 PM
Subject: Re: Colocation in the US.




I think if someone finds a workable non-conductive cooling fluid that
would probably be the best thing. I fear the first time someone is
working near their power outlets and water starts squirting, flooding
and electrocuting everyone and everything.

-Mike


On 1/24/07, Brandon Galbraith <[EMAIL PROTECTED]> wrote:

On 1/24/07, Deepak Jain <[EMAIL PROTECTED]> wrote:
>
>
> Speaking as the operator of at least one datacenter that was originally
> built to water cool mainframes... Water is not hard to deal with, but it
> has its own discipline, especially when you are dealing with lots of it
> (flow rates, algicide, etc). And there aren't lots of great manifolds to
> allow customer (joe-end user) service-able connections (like how many
> folks do you want screwing with DC power supplies/feeds without some
> serious insurance)..
>
> Once some standardization comes to this, and valves are built to detect
> leaks, etc... things will be good.
>
> DJ
>


In the long run, I think this is going to solve a lot of problems, as
cooling the equipment with a water medium is more effective than trying to
pull the heat off of everything with air. But standardization is going to
take a bit.





Re: Colocation in the US.

2007-01-24 Thread Gary Buhrmaster


Paul Vixie wrote:


i'm spec'ing datacenter space at the moment, so this is topical.  at 10kW/R
you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R
by requiring a lot of aisleway around every set of racks (~200sf per 4R
cage) to get it down to 200W/SF, or you'd compromise on W/R.  i suspect
that the folks offering 10kW/R are making it up elsewhere, like 50sf/R
averaged over their facility.  (this makes for a nice-sounding W/R number.)
i know how to cool 200W/SF but i do not know how to cool 333W/SF unless
everything in the rack is liquid cooled or unless the forced air is
bottom->top and the cabinet is completely enclosed and the doors are never
opened while the power is on.


If you have water for the racks:
http://www.knuerr.com/web/en/index_e.html?products/miracel/cooltherm/cooltherm.html~mainFrame
(there are other vendors too, of course)

The CRAY bid for the DARPA contract also has some interesting
cooling solutions as I recall, but that is a longer way out.




Re: Colocation in the US.

2007-01-24 Thread Gary Buhrmaster


Brandon Galbraith wrote:

On 1/24/07, Mike Lyon <[EMAIL PROTECTED]> wrote:


I think if someone finds a workable non-conductive cooling fluid that
would probably be the best thing. I fear the first time someone is
working near their power outlets and water starts squirting, flooding
and electrocuting everyone and everything.

-Mike



http://en.wikipedia.org/wiki/Mineral_oil



http://www.spraycool.com/technology/index.asp


Re: Colocation in the US.

2007-01-24 Thread Brandon Galbraith

On 1/24/07, Deepak Jain <[EMAIL PROTECTED]> wrote:




Speaking as the operator of at least one datacenter that was originally
built to water cool mainframes... Water is not hard to deal with, but it
has its own discipline, especially when you are dealing with lots of it
(flow rates, algicide, etc). And there aren't lots of great manifolds to
allow customer (joe-end user) service-able connections (like how many
folks do you want screwing with DC power supplies/feeds without some
serious insurance)..

Once some standardization comes to this, and valves are built to detect
leaks, etc... things will be good.

DJ




In the long run, I think this is going to solve a lot of problems, as
cooling the equipment with a water medium is more effective than trying to
pull the heat off of everything with air. But standardization is going to
take a bit.


Re: Colocation in the US.

2007-01-24 Thread Brandon Galbraith

On 1/24/07, Mike Lyon <[EMAIL PROTECTED]> wrote:


I think if someone finds a workable non-conductive cooling fluid that
would probably be the best thing. I fear the first time someone is
working near their power outlets and water starts squirting, flooding
and electrocuting everyone and everything.

-Mike



http://en.wikipedia.org/wiki/Mineral_oil


Re: Colocation in the US.

2007-01-24 Thread Mike Lyon


I think if someone finds a workable non-conductive cooling fluid that
would probably be the best thing. I fear the first time someone is
working near their power outlets and water starts squirting, flooding
and electrocuting everyone and everything.

-Mike


On 1/24/07, Brandon Galbraith <[EMAIL PROTECTED]> wrote:

On 1/24/07, Deepak Jain <[EMAIL PROTECTED]> wrote:
>
>
> Speaking as the operator of at least one datacenter that was originally
> built to water cool mainframes... Water is not hard to deal with, but it
> has its own discipline, especially when you are dealing with lots of it
> (flow rates, algicide, etc). And there aren't lots of great manifolds to
> allow customer (joe-end user) service-able connections (like how many
> folks do you want screwing with DC power supplies/feeds without some
> serious insurance)..
>
> Once some standardization comes to this, and valves are built to detect
> leaks, etc... things will be good.
>
> DJ
>


In the long run, I think this is going to solve a lot of problems, as
cooling the equipment with a water medium is more effective than trying to
pull the heat off of everything with air. But standardization is going to
take a bit.



Re: Colocation in the US.

2007-01-24 Thread Deepak Jain



Speaking as the operator of at least one datacenter that was originally 
built to water cool mainframes... Water is not hard to deal with, but it 
has its own discipline, especially when you are dealing with lots of it 
(flow rates, algicide, etc). And there aren't lots of great manifolds to 
allow customer (joe-end user) service-able connections (like how many 
folks do you want screwing with DC power supplies/feeds without some 
serious insurance)..


Once some standardization comes to this, and valves are built to detect 
leaks, etc... things will be good.


DJ

Mike Lyon wrote:


Paul brings up a good point. How long before we call a colo provider
to provision a rack, power, bandwidth and a to/from connection in each
rack to their water cooler on the roof?

-Mike


Re: Colocation in the US.

2007-01-24 Thread Tony Varriale


Vendor S? :)

tv
- Original Message - 
From: "JC Dill" <[EMAIL PROTECTED]>

Cc: 
Sent: Tuesday, January 23, 2007 4:11 PM
Subject: Re: Colocation in the US.




Robert Sherrard wrote:


Who's getting more than 10kW per cabinet and metered power from their 
colo provider?


I had a data center tour on Sunday where they said that the way they 
provide space is by power requirements.  You state your power 
requirements, they give you enough rack/cabinet space to *properly* 
house gear that consumes that much power.  If your gear is particularly
compact, then you will end up with more space than strictly necessary.


It's a good way of looking at the problem, since the flipside of power 
consumption is the cooling problem.  Too many servers packed in a small 
space (rack or cabinet) becomes a big cooling problem.


jc




Re: Colocation in the US.

2007-01-24 Thread Tony Varriale


The current high-watt cooling technologies are definitely more expensive
(much more).  Also, a facility would still need traditional forced air to
maintain the building climate.


tv
- Original Message - 
From: "Todd Glassey" <[EMAIL PROTECTED]>

To: "Tony Varriale" <[EMAIL PROTECTED]>; 
Sent: Wednesday, January 24, 2007 2:09 PM
Subject: Re: Colocation in the US.


If the cooling is cheaper than the cost of the A/C or provides a backup, 
it's a no-brainer.


Todd Glassey


-Original Message-

From: Tony Varriale <[EMAIL PROTECTED]>
Sent: Jan 24, 2007 11:20 AM
To: nanog@merit.edu
Subject: Re: Colocation in the US.


I think the better questions are: when will customers be willing to pay for
it?  and how much? :)

tv
- Original Message - 
From: "Mike Lyon" <[EMAIL PROTECTED]>

To: "Paul Vixie" <[EMAIL PROTECTED]>
Cc: 
Sent: Wednesday, January 24, 2007 11:54 AM
Subject: Re: Colocation in the US.




Paul brings up a good point. How long before we call a colo provider
to provision a rack, power, bandwidth and a to/from connection in each
rack to their water cooler on the roof?

-Mike

On 24 Jan 2007 17:37:27 +, Paul Vixie <[EMAIL PROTECTED]> wrote:


[EMAIL PROTECTED] (david raistrick) writes:

> > I had a data center tour on Sunday where they said that the way they
> > provide space is by power requirements.  You state your power
> > requirements, they give you enough rack/cabinet space to *properly*
> > house gear that consumes that
>
> "properly" is open for debate here.  ...  It's possible to have a
> facility built to properly power and cool 10kW+ per rack.  Just that most
> colo facilities aren't built to that level.

i'm spec'ing datacenter space at the moment, so this is topical.  at
10kW/R
you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase
sf/R
by requiring a lot of aisleway around every set of racks (~200sf per 4R
cage) to get it down to 200W/SF, or you'd compromise on W/R.  i suspect
that the folks offering 10kW/R are making it up elsewhere, like 50sf/R
averaged over their facility.  (this makes for a nice-sounding W/R
number.)
i know how to cool 200W/SF but i do not know how to cool 333W/SF unless
everything in the rack is liquid cooled or unless the forced air is
bottom->top and the cabinet is completely enclosed and the doors are
never
opened while the power is on.

you can pay over here, or you can pay over there, but TANSTAAFL.  for my
own purposes, this means averaging ~6kW/R with some hotter and some
colder, and cooling at ~200W/SF (which is ~30SF/R).  the thing that's
burning me right now is that for every watt i deliver, i've got to burn a
watt in the mechanical to cool it all.  i still want the rackmount
server/router/switch industry to move to liquid which is about 70% more
efficient (in the mechanical) than air as a cooling medium.

> > It's a good way of looking at the problem, since the flipside of
> > power
> > consumption is the cooling problem.  Too many servers packed in a
> > small
> > space (rack or cabinet) becomes a big cooling problem.
>
> Problem yes, but one that is capable of being engineered around (who'd
> have ever thought we could get 1000Mb/s through cat5, after all!)

i think we're going to see a more Feynman-like circuit design where we're
not dumping electrons every time we change states, and before that we'll
see a standardized gozinta/gozoutta liquid cooling hookup for rackmount
equipment, and before that we're already seeing Intel and AMD in a
watts-per-computron race.  all of that would happen before we'd air-cool
more than 200W/SF in the average datacenter, unless Eneco's chip works
out
in which case all bets are off in a whole lotta ways.
--
Paul Vixie









Re: Colocation in the US.

2007-01-24 Thread Tony Varriale


I think the better questions are: when will customers be willing to pay for 
it?  and how much? :)


tv
- Original Message - 
From: "Mike Lyon" <[EMAIL PROTECTED]>

To: "Paul Vixie" <[EMAIL PROTECTED]>
Cc: 
Sent: Wednesday, January 24, 2007 11:54 AM
Subject: Re: Colocation in the US.




Paul brings up a good point. How long before we call a colo provider
to provision a rack, power, bandwidth and a to/from connection in each
rack to their water cooler on the roof?

-Mike

On 24 Jan 2007 17:37:27 +, Paul Vixie <[EMAIL PROTECTED]> wrote:


[EMAIL PROTECTED] (david raistrick) writes:

> > I had a data center tour on Sunday where they said that the way they
> > provide space is by power requirements.  You state your power
> > requirements, they give you enough rack/cabinet space to *properly*
> > house gear that consumes that
>
> "properly" is open for debate here.  ...  It's possible to have a
> facility built to properly power and cool 10kW+ per rack.  Just that most
> colo facilities aren't built to that level.

i'm spec'ing datacenter space at the moment, so this is topical.  at 
10kW/R
you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R
by requiring a lot of aisleway around every set of racks (~200sf per 4R
cage) to get it down to 200W/SF, or you'd compromise on W/R.  i suspect
that the folks offering 10kW/R are making it up elsewhere, like 50sf/R
averaged over their facility.  (this makes for a nice-sounding W/R number.)
i know how to cool 200W/SF but i do not know how to cool 333W/SF unless
everything in the rack is liquid cooled or unless the forced air is
bottom->top and the cabinet is completely enclosed and the doors are never
opened while the power is on.

you can pay over here, or you can pay over there, but TANSTAAFL.  for my
own purposes, this means averaging ~6kW/R with some hotter and some
colder, and cooling at ~200W/SF (which is ~30SF/R).  the thing that's
burning me right now is that for every watt i deliver, i've got to burn a
watt in the mechanical to cool it all.  i still want the rackmount
server/router/switch industry to move to liquid which is about 70% more
efficient (in the mechanical) than air as a cooling medium.

> > It's a good way of looking at the problem, since the flipside of power
> > consumption is the cooling problem.  Too many servers packed in a small
> > space (rack or cabinet) becomes a big cooling problem.
>
> Problem yes, but one that is capable of being engineered around (who'd
> have ever thought we could get 1000Mb/s through cat5, after all!)

i think we're going to see a more Feynman-like circuit design where we're
not dumping electrons every time we change states, and before that we'll
see a standardized gozinta/gozoutta liquid cooling hookup for rackmount
equipment, and before that we're already seeing Intel and AMD in a
watts-per-computron race.  all of that would happen before we'd air-cool
more than 200W/SF in the average datacenter, unless Eneco's chip works out
in which case all bets are off in a whole lotta ways.
--
Paul Vixie





Re: Colocation in the US.

2007-01-24 Thread Mike Lyon


Paul brings up a good point. How long before we call a colo provider
to provision a rack, power, bandwidth and a to/from connection in each
rack to their water cooler on the roof?

-Mike

On 24 Jan 2007 17:37:27 +, Paul Vixie <[EMAIL PROTECTED]> wrote:


[EMAIL PROTECTED] (david raistrick) writes:

> > I had a data center tour on Sunday where they said that the way they
> > provide space is by power requirements.  You state your power
> > requirements, they give you enough rack/cabinet space to *properly*
> > house gear that consumes that
>
> "properly" is open for debate here.  ...  It's possible to have a
> facility built to properly power and cool 10kW+ per rack.  Just that most
> colo facilities aren't built to that level.

i'm spec'ing datacenter space at the moment, so this is topical.  at 10kW/R
you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R
by requiring a lot of aisleway around every set of racks (~200sf per 4R
cage) to get it down to 200W/SF, or you'd compromise on W/R.  i suspect
that the folks offering 10kW/R are making it up elsewhere, like 50sf/R
averaged over their facility.  (this makes for a nice-sounding W/R number.)
i know how to cool 200W/SF but i do not know how to cool 333W/SF unless
everything in the rack is liquid cooled or unless the forced air is
bottom->top and the cabinet is completely enclosed and the doors are never
opened while the power is on.

you can pay over here, or you can pay over there, but TANSTAAFL.  for my
own purposes, this means averaging ~6kW/R with some hotter and some
colder, and cooling at ~200W/SF (which is ~30SF/R).  the thing that's
burning me right now is that for every watt i deliver, i've got to burn a
watt in the mechanical to cool it all.  i still want the rackmount
server/router/switch industry to move to liquid which is about 70% more
efficient (in the mechanical) than air as a cooling medium.

> > It's a good way of looking at the problem, since the flipside of power
> > consumption is the cooling problem.  Too many servers packed in a small
> > space (rack or cabinet) becomes a big cooling problem.
>
> Problem yes, but one that is capable of being engineered around (who'd
> have ever thought we could get 1000Mb/s through cat5, after all!)

i think we're going to see a more Feynman-like circuit design where we're
not dumping electrons every time we change states, and before that we'll
see a standardized gozinta/gozoutta liquid cooling hookup for rackmount
equipment, and before that we're already seeing Intel and AMD in a
watts-per-computron race.  all of that would happen before we'd air-cool
more than 200W/SF in the average datacenter, unless Eneco's chip works out
in which case all bets are off in a whole lotta ways.
--
Paul Vixie



Re: Colocation in the US.

2007-01-24 Thread Paul Vixie

[EMAIL PROTECTED] (david raistrick) writes:

> > I had a data center tour on Sunday where they said that the way they
> > provide space is by power requirements.  You state your power
> > requirements, they give you enough rack/cabinet space to *properly*
> > house gear that consumes that
> 
> "properly" is open for debate here.  ...  It's possible to have a
> facility built to properly power and cool 10kW+ per rack.  Just that most
> colo facilities aren't built to that level.

i'm spec'ing datacenter space at the moment, so this is topical.  at 10kW/R
you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R
by requiring a lot of aisleway around every set of racks (~200sf per 4R
cage) to get it down to 200W/SF, or you'd compromise on W/R.  i suspect
that the folks offering 10kW/R are making it up elsewhere, like 50sf/R
averaged over their facility.  (this makes for a nice-sounding W/R number.)
i know how to cool 200W/SF but i do not know how to cool 333W/SF unless
everything in the rack is liquid cooled or unless the forced air is
bottom->top and the cabinet is completely enclosed and the doors are never
opened while the power is on.

you can pay over here, or you can pay over there, but TANSTAAFL.  for my
own purposes, this means averaging ~6kW/R with some hotter and some
colder, and cooling at ~200W/SF (which is ~30SF/R).  the thing that's
burning me right now is that for every watt i deliver, i've got to burn a
watt in the mechanical to cool it all.  i still want the rackmount
server/router/switch industry to move to liquid which is about 70% more
efficient (in the mechanical) than air as a cooling medium.
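
A small sketch that reproduces the density arithmetic in the two paragraphs
above; the 10kW/R, 200W/SF, ~6kW/R and "a watt in the mechanical per watt
delivered" figures are from the post, while the helper functions and their
names are only illustrative.

# Watts per square foot for a given rack power budget and footprint, and the
# footprint needed to stay at a target density.

def watts_per_sf(kw_per_rack: float, sf_per_rack: float) -> float:
    return kw_per_rack * 1000.0 / sf_per_rack

def sf_per_rack_for(kw_per_rack: float, target_w_per_sf: float) -> float:
    return kw_per_rack * 1000.0 / target_w_per_sf

print(watts_per_sf(10, 30))        # ~333 W/SF at 10kW/R on ~30 SF/R
print(watts_per_sf(4 * 10, 200))   # 200 W/SF if a 4-rack cage gets ~200 SF
print(sf_per_rack_for(10, 200))    # 50 SF/R to hold 10kW/R at 200 W/SF
print(sf_per_rack_for(6, 200))     # 30 SF/R at the ~6kW/R average above

# "for every watt i deliver, i've got to burn a watt in the mechanical"
# is a mechanical overhead of 1.0, i.e. roughly what would now be called
# a PUE of about 2.
it_watt, mechanical_watt = 1.0, 1.0
print((it_watt + mechanical_watt) / it_watt)   # 2.0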

> > It's a good way of looking at the problem, since the flipside of power
> > consumption is the cooling problem.  Too many servers packed in a small
> > space (rack or cabinet) becomes a big cooling problem.
> 
> Problem yes, but one that is capable of being engineered around (who'd 
> have ever thought we could get 1000Mb/s through cat5, after all!)

i think we're going to see a more Feynman-like circuit design where we're
not dumping electrons every time we change states, and before that we'll
see a standardized gozinta/gozoutta liquid cooling hookup for rackmount
equipment, and before that we're already seeing Intel and AMD in a
watts-per-computron race.  all of that would happen before we'd air-cool
more than 200W/SF in the average datacenter, unless Eneco's chip works out
in which case all bets are off in a whole lotta ways.
-- 
Paul Vixie


Re: Colocation in the US.

2007-01-23 Thread david raistrick


On Tue, 23 Jan 2007, JC Dill wrote:

I had a data center tour on Sunday where they said that the way they provide 
space is by power requirements.  You state your power requirements, they give 
you enough rack/cabinet space to *properly* house gear that consumes that


"properly" is open for debate here.  It just mean their facility isn't 
built to handle the power-per-square-foot loads that were being asked 
about.


It's possible to have a facility built to properly power and cool 10kW+ 
per rack.   Just that most colo facilities aren't built to that level.


It's a good way of looking at the problem, since the flipside of power 
consumption is the cooling problem.  Too many servers packed in a small space 
(rack or cabinet) becomes a big cooling problem.


Problem yes, but one that is capable of being engineered around (who'd 
have ever thought we could get 1000Mb/s through cat5, after all!)




---
david raistrick    http://www.netmeister.org/news/learn2quote.html
[EMAIL PROTECTED] http://www.expita.com/nomime.html



Re: Colocation in the US.

2007-01-23 Thread JC Dill


Robert Sherrard wrote:


Who's getting more than 10kW per cabinet and metered power from their 
colo provider?


I had a data center tour on Sunday where they said that the way they 
provide space is by power requirements.  You state your power 
requirements, they give you enough rack/cabinet space to *properly* 
house gear that consumes that much power.  If your gear is particularly
compact, then you will end up with more space than strictly necessary.
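
A minimal sketch of that space-follows-power rule, assuming a facility
designed to a fixed watts-per-square-foot budget; the 200 W/SF and
30 SF-per-cabinet numbers are borrowed from elsewhere in this thread, and
the function itself is purely illustrative.

import math

# The provider hands out however much floor/cabinet space a given power draw
# requires at the facility's design density, regardless of how compact the
# gear is.

def allocate_space(load_kw, design_w_per_sf=200.0, sf_per_cabinet=30.0):
    """Return (square feet, cabinets) needed to properly house a given load."""
    sf = load_kw * 1000.0 / design_w_per_sf
    cabinets = math.ceil(sf / sf_per_cabinet)
    return sf, cabinets

print(allocate_space(10))   # 10 kW -> (50.0 sf, 2 cabinets), however compact the gear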


It's a good way of looking at the problem, since the flipside of power 
consumption is the cooling problem.  Too many servers packed in a small 
space (rack or cabinet) becomes a big cooling problem.


jc